Today I would like to talk a bit about an interesting heap data structure: the Fibonacci heap. Fibonacci heaps are interesting because they offer θ(1) amortized time for the insert, decrease-key, and merge-heap operations, as well as O(log n) amortized time for extract-min.

###### Time complexity of heap operations:

Let's take a quick look at how the runtime complexities of the operations on a regular Binary heap and a Fibonacci heap compare.

| Operation | Binary heap | Fibonacci heap |
| --- | --- | --- |
| insert | θ(log n) | θ(1) |
| extract-min | θ(log n) | O(log n) |
| get-min | θ(1) | θ(1) |
| decrease-key | θ(log n) | θ(1) |
| merge-heap | θ(n) | θ(1) |
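
For reference, Python's built-in `heapq` module implements exactly this kind of array-backed binary heap. A minimal sketch of the operations from the table (the values and variable names are just for illustration):

```python
import heapq

heap = []  # heapq operates on a plain Python list used as the backing array

# insert: O(log n) per element (sift-up)
for value in [7, 3, 9, 1, 5]:
    heapq.heappush(heap, value)

# get-min: O(1), the minimum always sits at index 0
print(heap[0])  # 1

# extract-min: O(log n) (sift-down)
smallest = heapq.heappop(heap)

# merge-heap: O(n), re-heapify the concatenation of both backing arrays
other = [2, 8]
merged = heap + other
heapq.heapify(merged)
print(merged[0])  # 2
```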

###### Big-O and Big-θ Notation REFRESHER:

A runtime of O(f(n)) means that the running time of the operation has an upper bound of c*f(n) for some constant c as n grows larger.

θ(f(n)) is a stronger statement: the running time is bounded both above and below by constant multiples of f(n). That is, there are constants c1 and c2 such that the running time always lies between c1*f(n) and c2*f(n) for large n, so asymptotically the operation can neither run faster nor slower than the rate of f(n).
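
Written out formally, these definitions (for a function f(n) describing the running time) are:

```latex
% f is bounded above by a constant multiple of g:
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : \quad f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0

% f is bounded above and below by constant multiples of g:
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : \quad c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \ \text{ for all } n \ge n_0
```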

Click here to learn more about the O-Notation definitions and Donald Knuth’s Big-θ (Theta) Notation which I am using in this article.

So on paper, Fibonacci heaps look like the better choice. But in practice, binary heaps are used much more often. Why is that?

###### Why binary heaps are MUCH more POPULAR:

Well, the first reason is that binary heaps are just much simpler to implement, and they can be implemented on top of a plain array, which is usually about as fast as it gets for reading data from and writing data to memory.
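
To see why the array representation works, here is a sketch of the standard implicit layout (the helper names are mine, not from any particular library): node i stores its parent at index (i - 1) // 2 and its children at 2*i + 1 and 2*i + 2, so no pointers are needed and insert is just an append followed by a sift-up.

```python
def parent(i):
    # implicit layout: parent of node i lives at (i - 1) // 2,
    # its children live at 2*i + 1 and 2*i + 2
    return (i - 1) // 2

def insert(heap, value):
    """Append to the backing array, then sift up: O(log n)."""
    heap.append(value)
    i = len(heap) - 1
    while i > 0 and heap[parent(i)] > heap[i]:
        heap[i], heap[parent(i)] = heap[parent(i)], heap[i]
        i = parent(i)

heap = []
for v in [4, 2, 6, 1]:
    insert(heap, v)
print(heap[0])  # 1 -- the minimum ends up at the root, index 0
```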

Because of this, binary heaps almost always outperform Fibonacci heaps in real-world scenarios. It is only when datasets become really large that Fibonacci heaps can gain an advantage.

The second reason binary heaps are much more popular is that every operation takes roughly the same amount of time each time it is executed. The running times of Fibonacci heap operations, however, are only amortized costs. That means an operation like extract-min takes O(log n) time on average over a sequence of operations, but not necessarily on every call. In fact, a single extract-min can take as little as O(1) time, but it can also take O(n) time, depending on the current state of the heap.
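
To make the difference between amortized and per-call cost concrete, here is a deliberately oversimplified "lazy" structure (a toy, not a real Fibonacci heap): insert does almost nothing in O(1), and the deferred work all lands on whichever extract-min call comes next, which therefore costs O(n). A real Fibonacci heap defers work in a similar spirit, but consolidates its root list so that extract-min stays O(log n) amortized.

```python
class LazyHeap:
    """Toy illustration of amortized cost: O(1) insert, O(n) extract-min."""

    def __init__(self):
        self.items = []

    def insert(self, value):
        # O(1): just remember the value, defer all ordering work
        self.items.append(value)

    def extract_min(self):
        # O(n): the deferred work happens here, via a full scan
        smallest = min(self.items)
        self.items.remove(smallest)
        return smallest

h = LazyHeap()
for v in [5, 1, 4]:
    h.insert(v)             # three cheap O(1) calls
print(h.extract_min())      # 1 -- this single call does the expensive scan
```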

This makes Fibonacci heaps much less desirable for many applications. If the execution of an operation suddenly takes much longer than it does on average, other threads or processes waiting on the heap operation can find themselves blocked, which can result in all kinds of undesirable side effects (lost responsiveness, buffers spilling over, etc.).

The next table shows the amortized vs. the worst case running time of operations on a Fibonacci heap.

| Operation | Fibonacci heap (amortized) | Fibonacci heap (worst case) |
| --- | --- | --- |
| insert | θ(1) | θ(1) |
| extract-min | O(log n) | O(n) |
| get-min | θ(1) | θ(1) |
| decrease-key | θ(1) | O(n) |
| merge-heap | θ(1) | θ(1) |

Binary heaps don’t have these variations in runtime. The runtime of an operation in a binary heap is virtually identical each time the operation is called.

###### LET'S DO A BENCHMARK:

In the next article, I am going to implement both heap structures and put them through some side-by-side benchmarks. Maybe we can demonstrate a scenario where a Fibonacci heap actually outperforms a Binary heap.