# Fibonacci heaps explained
c-official · 1 month ago
How can I say no to a :3 request. Basically, a Fibonacci heap is a super priority queue. It supports the following operations:
Insert: Insert a number in O(1) time.
GetMin: Get the lowest number in the heap in O(1) time.
DeleteMin: Remove the lowest number from the heap in amortized O(log n) time. Here n refers to the number of values in the heap.
DecreaseKey: Decrease one of the values in the heap in amortized O(1) time.
Merge: Merge two Fibonacci heaps in O(1) time.
Now for the implementation. The heap is stored as a collection of rooted trees in which child nodes hold larger values than their parents, and the heap also keeps a pointer to the root with the lowest value.
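To make that concrete, here is a minimal Python sketch of the records everything below will build on. The language and all the names (Node, FibHeap, and so on) are my choice, purely for illustration:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.degree = 0        # number of children
        self.marked = False    # used by DecreaseKey, explained below
        self.parent = None
        self.child = None      # any one child; siblings form a ring
        self.left = self       # circular doubly linked sibling list;
        self.right = self      # a lone node points at itself

class FibHeap:
    def __init__(self):
        self.min = None        # root holding the lowest value
        self.n = 0             # number of values in the heap
```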
Insert: To insert a value, simply add a new tree of one element and update the pointer to the lowest root if necessary. This is clearly constant time.
GetMin: We already have a pointer to the root with the lowest value, so just return that value. This is clearly also constant time.
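A sketch of Insert and GetMin, continuing the classes above; the _add_root helper is a name I made up:

```python
def _add_root(heap, node):
    # splice `node` into the circular root list, next to the minimum
    if heap.min is None:
        node.left = node.right = node
        heap.min = node
        return
    node.left = heap.min
    node.right = heap.min.right
    heap.min.right.left = node
    heap.min.right = node
    if node.key < heap.min.key:
        heap.min = node

def insert(heap, key):
    node = Node(key)
    _add_root(heap, node)
    heap.n += 1
    return node    # keep this handle if you want to DecreaseKey it later

def get_min(heap):
    return heap.min.key
```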
Merge: Simply splice together the collections of roots from the two heaps and point the lowest-root pointer at the lower of the two lowest roots. This can be done in constant time if the collections of roots are stored as circular doubly linked lists.
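A sketch of that splice, continuing the code above. Nothing is ever scanned, so it really is O(1):

```python
def merge(h1, h2):
    # splice the two circular root rings together and keep the lower minimum
    if h1.min is None:
        return h2
    if h2.min is None:
        return h1
    a, b = h1.min, h2.min
    ar, br = a.right, b.right
    # cut both rings open and tie the loose ends to each other
    a.right, br.left = br, a
    b.right, ar.left = ar, b
    if b.key < a.key:
        h1.min = b
    h1.n += h2.n
    return h1
```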
DeleteMin: First remove the lowest root and add its children as new roots. Now this is where things get interesting. Set up a hashmap. Scan through the roots and file them in the hashmap by their degree (aka the number of children they have). If there is already a node with the same degree, merge that node with the current one by adding the one with the higher value as a child of the other. By the end of this process every remaining root has a distinct degree, so #roots <= #maxDegree + 1. While this scan is done, also find the new lowest root. This clearly runs in O(#roots) time. That is not the O(log n) I promised, but we will return to this.
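A sketch of the whole operation, continuing the code above and reusing _add_root from the Insert snippet; _ring and _add_child are further helper names of my own:

```python
def _ring(start):
    # walk once around a circular doubly linked list
    node = start
    while True:
        yield node
        node = node.right
        if node is start:
            break

def _add_child(parent, child):
    # link `child` under `parent`; callers ensure child.key >= parent.key
    child.parent = parent
    child.marked = False
    if parent.child is None:
        child.left = child.right = child
        parent.child = child
    else:
        child.left = parent.child
        child.right = parent.child.right
        parent.child.right.left = child
        parent.child.right = child
    parent.degree += 1

def delete_min(heap):
    z = heap.min
    # promote the children of the old minimum to roots
    children = list(_ring(z.child)) if z.child else []
    roots = [r for r in _ring(z) if r is not z] + children
    for c in children:
        c.parent = None
        c.marked = False
    # consolidate: repeatedly merge roots of equal degree
    by_degree = {}                     # degree -> the one root with that degree
    for node in roots:
        node.left = node.right = node  # detached; relinked below
        while node.degree in by_degree:
            other = by_degree.pop(node.degree)
            if other.key < node.key:
                node, other = other, node
            _add_child(node, other)    # larger root becomes a child
        by_degree[node.degree] = node
    # rebuild the root list, tracking the new minimum as we go
    heap.min = None
    for node in by_degree.values():
        _add_root(heap, node)
    heap.n -= 1
    return z.key
```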
DecreaseKey: Decrease the value of the desired node. If the value is still larger than the parent's value, all is well. Otherwise remove the node from the parent and add it as a new root, and mark the parent. If the parent is already marked, apply the same procedure to it: remove the parent, add it as a new root, and mark its parent. Do not mark the parent if it is a root, and remove the mark when making a node a new root. This way a root will never be marked.
The whole marking business sounds complicated, but the only thing it does is enforce the restriction that no non-root node can lose more than one child. This is again not constant time and can run in O(#maxDepth) if the node's parent is marked, the parent's parent is marked, and so on up to a root.
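A sketch of the cut-and-mark procedure, continuing the code above; "cascading cut" is the textbook name for the "do the same to the parent" loop:

```python
def _cut(heap, node, parent):
    # detach `node` from `parent`'s child ring and make it a root
    if node.right is node:
        parent.child = None
    else:
        node.left.right = node.right
        node.right.left = node.left
        if parent.child is node:
            parent.child = node.right
    parent.degree -= 1
    node.parent = None
    node.marked = False            # roots are never marked
    _add_root(heap, node)

def _cascading_cut(heap, node):
    # a marked node already lost one child; losing another means it moves up too
    parent = node.parent
    if parent is None:
        return                     # node is a root, nothing to mark
    if not node.marked:
        node.marked = True         # remember the lost child
    else:
        _cut(heap, node, parent)
        _cascading_cut(heap, parent)

def decrease_key(heap, node, new_key):
    node.key = new_key
    parent = node.parent
    if parent is not None and node.key < parent.key:
        _cut(heap, node, parent)
        _cascading_cut(heap, parent)
    if node.key < heap.min.key:    # covers the case where node was already a root
        heap.min = node
```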
This is a good place to explain what amortized running time is. The point is that we do not consider the worst case of a single operation, but the worst-case average over a sequence of operations. So let's keep track of some potential work. Every time you perform a DecreaseKey, add a token representing some constant amount of work you could do now to a pile. As we mark at most one node per operation, we know that #tokens >= #markedNodes. Now, each time we move a marked node up to be a new root, instead of counting the time that took, discard one token of potential work. We still unmark the node, so the inequality keeps holding. Thus we will always have a token of potential work available when moving a marked node to be a new root. So on each operation we did some constant real work and banked some constant potential work. This is amortized O(1).
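For the record, this token pile is what textbooks call a potential function. A standard choice (this is the CLRS convention, not something fixed by the argument above) that makes it formal:

```latex
\Phi(H) = t(H) + 2\,m(H),
\qquad
\Delta\Phi_{\text{DecreaseKey}}
  \le \underbrace{c}_{\text{new roots}}
    + \underbrace{2\bigl(1 - (c-1)\bigr)}_{\text{marks}}
  = 4 - c
```

Here t(H) counts roots and m(H) counts marked nodes. A DecreaseKey with c cuts does O(c) real work, adds c roots, unmarks at least c-1 nodes and marks at most one, so the O(c) work is paid for by the drop in potential: amortized O(1).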
Now only the DeleteMin runtime remains to be explained, along with why Fibonacci numbers are relevant.
Instead of looking at #roots, let's write #roots = #coreRoots + #extraRoots, where a core root is a root left over after a DeleteMin operation. It is then always true that #coreRoots <= #maxDegree + 1. Now we can do the same trick. Each time we add a new root from Insert or DecreaseKey, add a token of potential work, and each time we handle an extra root in the DeleteMin operation, remove a token of potential work. Thus after each DeleteMin operation no tokens of potential work and no extra roots are left. Now DeleteMin runs in amortized O(#coreRoots) = O(#maxDegree) time.
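With the same potential function as above (again my bookkeeping, not part of the original argument), writing D(n) for #maxDegree and t for the number of roots before the operation:

```latex
c_{\text{amortized}}(\text{DeleteMin})
  = \underbrace{O\bigl(D(n) + t\bigr)}_{\text{actual scan}}
  + \underbrace{\bigl(D(n) + 1\bigr) - t}_{\Delta\Phi \text{ bound}}
  = O\bigl(D(n)\bigr)
```

The potential drops by roughly the number of extra roots consolidated away, since at most D(n) + 1 roots survive.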
So why is #maxDegree = O(log n)? The main claim is that a tree whose root has degree d contains at least F_d nodes, the d'th Fibonacci number.
To prove this, the Fibonacci numbers are screaming to do it inductively. For a degree-0 tree the minimum number of nodes is clearly the one node. A degree-1 tree per definition has a child and thus has at least 2 nodes. Now assume that any tree whose root has degree m has at least F_m nodes, for all m less than some n, and look at a tree whose root has degree n. The newest child of the root was added when the root already had degree at least n-1, and because we only merge trees whose roots have the same degree, that child had degree at least n-1 when it was linked. It can have lost at most one child since (this is exactly what the marking rule guarantees), so its degree is still at least n-2 and its subtree has at least F_(n-2) nodes. Before the newest child was added, the tree must have had at least F_(n-1) nodes. Thus the tree now has at least F_(n-2) + F_(n-1) = F_n nodes, proving the claim.
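In symbols, writing N(d) for the minimum size of a tree whose root has degree d, and using the indexing from the base cases above (F_0 = 1, F_1 = 2):

```latex
N(0) = 1, \quad N(1) = 2, \quad N(d) \ge N(d-2) + N(d-1)
\;\Longrightarrow\; N(d) \ge F_d
```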
But because the Fibonacci numbers grow exponentially, we get that #maxDegree = O(log n). Pure magic!
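Spelling the exponential growth out: with the golden ratio φ (which satisfies φ² = φ + 1) and the same indexing F_0 = 1, F_1 = 2, a quick induction gives

```latex
F_d \ge \varphi^{d}, \quad \varphi = \tfrac{1+\sqrt{5}}{2},
\qquad\text{so}\qquad
n \ge F_d \ge \varphi^{d}
\;\Longrightarrow\;
d \le \log_{\varphi} n = O(\log n)
```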
Side note: this cannot get any faster. If it could, we could sort a list by adding all of our values to the data structure and then running GetMin and DeleteMin until it is empty. If DeleteMin were faster than O(log n) while the other operations stayed constant time, that would sort n values in less than O(n log n) time, beating the lower bound for comparison sorting.
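As a sanity check, here is a hypothetical end-to-end run of the sketches above, used exactly as this sorting argument describes:

```python
import random

# sort by inserting everything, then draining the minimum repeatedly
values = random.sample(range(1000), 50)
heap = FibHeap()
for v in values:
    insert(heap, v)
drained = [delete_min(heap) for _ in values]
assert drained == sorted(values)
```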
A better explanation with animations can be found here. General shout-out to that video for introducing me to the subject.
I just discovered Fibonacci heaps! That is some top tier stuff and has left me craving my next big hit. So please, I am in dire need of beautiful algorithms and data structures.