> instead of <
At level d: twice the nodes at level (d-1), since each node at level (d-1) has two children.
2^(d+1) - 1
1 + 2 + 4 + 8 + 16 + ... is the same as 1 + 10 + 100 + 1000 + 10000 + ... = 11111... in binary.
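A quick numeric check of the level counts (the depth d = 4 here is an arbitrary choice for illustration):

```python
# Level k of a complete binary tree holds 2^k nodes (root is level 0).
d = 4                                    # arbitrary example depth
levels = [2 ** k for k in range(d + 1)]  # [1, 2, 4, 8, 16]
total = sum(levels)
# The sum 1 + 2 + ... + 2^d collapses to the closed form 2^(d+1) - 1.
assert total == 2 ** (d + 1) - 1 == 31
```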
The root is at index 1:
left_child(n) = 2 * n
right_child(n) = left_child(n) + 1 = 2 * n + 1
parent(n) = floor(n / 2)
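These index maps translate directly into code; a minimal sketch (the function names follow the formulas above, and 1-based array indexing is assumed):

```python
def left_child(n):
    # Left child of node n under 1-based array indexing.
    return 2 * n

def right_child(n):
    # The right child sits immediately after the left child.
    return left_child(n) + 1

def parent(n):
    # Integer division floors, matching parent(n) = floor(n / 2).
    return n // 2
```

Note that parent(left_child(n)) == parent(right_child(n)) == n for every n >= 1, so the three maps are consistent inverses.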
We can represent a complete binary tree in an array.
A binary heap (or just heap) is a complete binary tree such that, for each node, its priority is higher than the priority of its left and right children (in my universe, higher numbers give lower priority, unless I decide otherwise).
pri(n) ≤ pri(left(n))
pri(n) ≤ pri(right(n))
Implications:
Do not confuse the binary heap and the binary search tree.
The heap is constructed recursively. Each new element is inserted at the next available leaf position (the end of the array, if implemented using an array). The new element is then sifted or bubbled up by swapping it with its parent for as long as it is less than its parent.
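A minimal sketch of insertion with bubble-up, assuming a 1-indexed array whose slot 0 is an unused placeholder (the name heap_insert is illustrative):

```python
def heap_insert(heap, value):
    # heap[0] is an unused placeholder so the root sits at index 1.
    heap.append(value)                  # next available leaf position
    n = len(heap) - 1
    while n > 1 and heap[n] < heap[n // 2]:
        # Bubble up: swap with the parent while smaller than it.
        heap[n], heap[n // 2] = heap[n // 2], heap[n]
        n //= 2
```

Inserting 5, 3, 8, 1 in that order leaves 1 at the root, with the heap property holding at every node.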
Insertion requires O(log N) time, so overall construction of the tree takes O(N log N) time.
The minimum element is the root, so it can be found in O(1) time.
Removing the minimum element destroys the tree structure, so the heap invariant must be restored. We do this by taking the rightmost leaf on the bottom row (the last element in the array, if using the array implementation) and moving it to the root position. The heap property is then restored by swapping the node with the smaller of its left and right children, repeating this sift/bubble down step until the node reaches a leaf position or is no greater than its children.
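The remove-min procedure sketched above, under the same 1-indexed layout (the heap is assumed nonempty; names are illustrative):

```python
def remove_min(heap):
    # 1-indexed array: heap[0] is an unused placeholder.
    minimum = heap[1]
    heap[1] = heap[-1]                  # move the last leaf to the root
    heap.pop()
    n, size = 1, len(heap) - 1
    while 2 * n <= size:
        child = 2 * n
        if child + 1 <= size and heap[child + 1] < heap[child]:
            child += 1                  # pick the smaller child
        if heap[n] <= heap[child]:
            break                       # heap property restored
        heap[n], heap[child] = heap[child], heap[n]
        n = child
    return minimum
```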
Sift down takes O(log N) time.
If we know the tree position of a node, we can raise its priority (or lower its key value) and apply bubble up from its current position.
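A sketch of that decrease-key operation (assuming pos is a valid 1-based index and new_key does not exceed the old key):

```python
def decrease_key(heap, pos, new_key):
    # Lower the key (raise the priority) at index pos, then bubble up.
    heap[pos] = new_key
    while pos > 1 and heap[pos] < heap[pos // 2]:
        heap[pos], heap[pos // 2] = heap[pos // 2], heap[pos]
        pos //= 2
```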
This gives a sorting method (heapsort). Finding and removing the minimum by scanning an unsorted array takes O(N) time per pass, for O(N^2) overall; with a heap, each remove-min takes O(log N) time, giving overall O(N log N) performance.
Removing the minimum element frees up the last slot of the array. The minimum element can then be placed in the position vacated by the last element, sorting in place.
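Put together, this yields an in-place heapsort; a self-contained sketch on a 1-indexed array (slot 0 unused; this min-heap version first leaves the array in descending order, then reverses it):

```python
def heapsort(a):
    # a is 1-indexed: a[0] is an unused placeholder (assumption).
    size = len(a) - 1
    # Build the heap by bubbling each element up in turn.
    for i in range(2, size + 1):
        n = i
        while n > 1 and a[n] < a[n // 2]:
            a[n], a[n // 2] = a[n // 2], a[n]
            n //= 2
    # Repeatedly swap the minimum into the freed last slot, then sift down.
    for end in range(size, 1, -1):
        a[1], a[end] = a[end], a[1]
        n = 1
        while 2 * n < end:              # heap now occupies a[1..end-1]
            c = 2 * n
            if c + 1 < end and a[c + 1] < a[c]:
                c += 1                  # pick the smaller child
            if a[n] <= a[c]:
                break
            a[n], a[c] = a[c], a[n]
            n = c
    a[1:] = a[:0:-1]                    # reverse into ascending order
```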
This leaves the array sorted in descending order; either make an O(N) pass to reverse the elements, or use > for comparison instead of <.
Comparison-based sorting is known to have an Ω(N log N) lower bound.
Suppose there existed a priority queue whose remove-min ran faster than O(log N) (with insertion no slower than O(log N)). N inserts followed by N remove-mins would then sort faster than O(N log N), contradicting the lower bound. So a faster priority queue cannot exist.
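The reduction in this argument is concrete: N inserts followed by N remove-mins sort any list. A sketch using Python's standard-library heapq as the priority queue:

```python
import heapq

def pq_sort(items):
    # N inserts, O(log N) each: O(N log N) total.
    heap = []
    for x in items:
        heapq.heappush(heap, x)
    # N remove-mins: if each beat O(log N), the whole sort
    # would beat the comparison-sorting lower bound.
    return [heapq.heappop(heap) for _ in range(len(heap))]
```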