Coding and Foo

/fuː/ – A metasyntactic variable used to represent an unspecified entity


Fibonacci Heaps vs. Binary Heaps


Today I would like to talk a bit about an interesting heap data structure: the Fibonacci heap. It is interesting because it offers θ(1) running time for the insert, decrease-key (amortized) and merge-heap operations, as well as O(log n) amortized time for extract-min.

Time complexity of heap operations:

Let’s take a quick look at how the time complexities of the operations on a regular binary heap and a Fibonacci heap compare.

Operation      Binary heap   Fibonacci heap
insert         θ(log n)      θ(1)
extract-min    θ(log n)      O(log n)
get-min        θ(1)          θ(1)
decrease-key   θ(log n)      θ(1)
merge-heap     θ(n)          θ(1)

Big-O and Big-θ Notation:

O(f(n)) means that the running time of an operation has an upper bound of c*f(n) as n grows large, where c is some constant. θ(f(n)) means that the running time not only has c1*f(n) as an upper bound but also has c2*f(n) as a lower bound, for some constants c1 and c2. That is to say, the operation always runs in time proportional to f(n) and cannot run asymptotically faster than that.
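
Written out formally, these are just the standard textbook definitions (in LaTeX notation):

    f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0

    f(n) \in \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \quad \text{for all } n \ge n_0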

There is more to read about the formal O-notation definitions and Donald Knuth’s Big-θ (Theta) notation, which I am using in this article.

So on paper, Fibonacci heaps look like the better choice. But in practice, binary heaps are used much more often. Why is that?

Why binary heaps are MUCH more POPULAR:

Well, the first reason is that binary heaps are simply much easier to implement, and they can be implemented on top of a plain array, which is usually about as fast as it gets for reading data from and writing data to memory.
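
To illustrate the point, here is a minimal sketch of such an array-backed binary min-heap in Python (the class and method names are just illustrative):

    class BinaryMinHeap:
        """Minimal sketch of an array-backed binary min-heap.

        The parent of index i lives at (i - 1) // 2 and its children at
        2*i + 1 and 2*i + 2, so the whole tree fits in one contiguous list.
        """

        def __init__(self):
            self._a = []

        def insert(self, x):
            # O(log n): append at the end, then sift the new item up.
            a = self._a
            a.append(x)
            i = len(a) - 1
            while i > 0 and a[(i - 1) // 2] > a[i]:
                a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
                i = (i - 1) // 2

        def extract_min(self):
            # O(log n): move the last item to the root, then sift it down.
            a = self._a
            if not a:
                raise IndexError("extract_min from an empty heap")
            a[0], a[-1] = a[-1], a[0]
            smallest = a.pop()
            i, n = 0, len(a)
            while True:
                left, right, best = 2 * i + 1, 2 * i + 2, i
                if left < n and a[left] < a[best]:
                    best = left
                if right < n and a[right] < a[best]:
                    best = right
                if best == i:
                    break
                a[i], a[best] = a[best], a[i]
                i = best
            return smallest

Python’s built-in heapq module (heapq.heappush and heapq.heappop) implements exactly this idea on top of a plain list, so in practice you rarely have to write it by hand.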

Because of this, binary heaps almost always outperform Fibonacci heaps in real-world scenarios. It is only when the datasets become really large that Fibonacci heaps can gain an advantage.

The second reason why binary heaps are much more popular is that every operation takes roughly the same amount of time each time it is executed. The running times of Fibonacci heap operations, however, are only amortized costs. That means that an operation like extract-min takes O(log n) time only when averaged over a sequence of calls, not on every single call. In fact, extract-min can take only O(1) time, but it can also take O(n) time, depending on the current state of the heap.

This makes Fibonacci heaps much less desirable for most applications. If the execution of an operation suddenly takes much longer than it does on average, other threads/processes could suddenly find themselves in a blocked state, which can result in all kinds of undesirable side effects (lost responsiveness, buffers spilling over, etc.).

The next table shows the amortized vs. the worst case running time of operations on a Fibonacci heap.

Operation      Fibonacci heap (amortized)   Fibonacci heap (worst case)
insert         θ(1)                         θ(1)
extract-min    O(log n)                     O(n)
get-min        θ(1)                         θ(1)
decrease-key   θ(1)                         O(n)
merge-heap     θ(1)                         θ(n)

Binary heaps don’t have these problems. The running time of an operation is virtually identical each time it is called.


In the next article, I am going to implement both heap structures and put them through some side-by-side benchmarks. Maybe we can demonstrate a scenario where a Fibonacci heap actually outperforms a binary heap.


Coding Quizzes For The Fun Of It


Challenges are one of the best ways to learn new things and to further cement your understanding of a subject. Quizzes and challenges are also something I get obsessed with easily.

So it happened that I found myself looking for SQL quizzes in the App Store and discovered a nice series of apps from

Their stuff is completely free, and each course is available as an Android app and an iOS app, but also for Windows Phone and as a browser-based version on their website.

Some of their courses are:

  • C++ – 343 Questions
  • Java – 152 Questions
  • JavaScript – 139 Questions
  • Python 3 – 276 Questions
  • PHP – 100 Questions
  • C# – 199 Questions
  • Swift – 168 Questions
  • HTML (including HTML5) – 119 Questions
  • CSS (including CSS3) – 175 Questions
  • SQL – 104 Questions

The questions start out really easy, and if you already know most of the material you will be able to finish a course in less than two hours. However, that is not to say it is not worthwhile. On the contrary, I think these apps are great if you need a quick refresher on a language or if you are just starting to learn programming with any of these languages.

The explanations are good and often give insight into the inner workings of a language or a platform. And if you need extra guidance, you can watch their demonstration videos for a better understanding instead of only reading through the material like I did. You can even run the code in the app to test out how it all works if you want. What’s not to love?

If you wonder how obsessed I got with these quizzes over the last weekend, check out my profile on


Pandigital Products – Project Euler Problem No. 32


Hi there!

So today, I thought I’d post a solution for Project Euler’s Pandigital Products Problem (No. 32).

In case you don’t know what Project Euler is, it’s basically a website with mathematical challenges that can/should be solved by writing code to compute the solution. Doing these challenges is a fun way of combining mathematical thinking and coding. I recommend checking out Project Euler.

Many, if not most, people who work on these problems try to find the computationally most efficient solution. However, I believe there is also some virtue in trying to find a feasible solution as quickly as possible. That’s what I am going to attempt to do here.

The Problem

Let’s take a look at the original problem definition from the Project Euler website:

To paraphrase: We are asked to find all numbers a, b and c such that a*b = c and that the concatenation of a, b and c contains each digit (1 to 9) exactly once. The solution is then the sum of all unique values of c that we can find.

A (simple) Solution using Python:

From the problem it is clear that the concatenation of a, b and c must be 9 characters long. So how about we compute all permutations of the digit sequence “123456789”?
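
Something along these lines does the trick, using itertools.permutations (which yields the orderings in lexicographic order):

    from itertools import permutations

    # All 9! = 362880 orderings of the digits 1-9; each one is a candidate
    # concatenation of a, b and c.
    for p in permutations("123456789"):
        print("".join(p))   # "123456789", "123456798", "123456879", ...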


Then we just check whether there is a way to split each permutation into three parts such that it represents a concatenation of a, b and c with the desired property that a*b = c.
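
A sketch of that check: cut the string at positions i and j, convert the three parts to integers, and keep the cuts where a * b == c:

    from itertools import permutations

    for p in permutations("123456789"):
        s = "".join(p)
        # a = s[:i], b = s[i:j], c = s[j:] -- try every possible pair of cuts.
        for i in range(1, 8):
            for j in range(i + 1, 9):
                a, b, c = int(s[:i]), int(s[i:j]), int(s[j:])
                if a * b == c:
                    print(a, "*", b, "=", c)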


That’s nice: we got all possible combinations of a, b and c such that a*b = c and the concatenation of a, b and c is pandigital. Notice how some values of c occur multiple times in the output, e.g. 12*483 = 5796 and also 138*42 = 5796.

The next step is to sum up all distinct values of c in order to find the solution to the original problem. I am using a Python set to collect the products, because sets filter out duplicate entries automatically. After going through the iterations, the values in the set just need to be summed up to give us the correct answer.
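
Putting that together with a set to deduplicate the products looks roughly like this:

    from itertools import permutations

    products = set()   # a set keeps each value of c only once
    for p in permutations("123456789"):
        s = "".join(p)
        for i in range(1, 8):
            for j in range(i + 1, 9):
                a, b, c = int(s[:i]), int(s[i:j]), int(s[j:])
                if a * b == c:
                    products.add(c)

    print(sum(products))   # the answer to the problem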


Performance improvements:

The above approach is pure brute force, but the idea is simple and it was fast to code. With a little bit of tweaking it can be made a bit more efficient.

Looking at the equation a * b = c it becomes clear that:

  1. A factor (a or b) cannot have more digits than the product (c) has.
    Thus we can limit the range of i for the possible positions of the first cut.
  2. For the same reason we can also limit the range of j for the position of the second cut (a sketch of both bounds follows right after this list).
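
With len(a) = i, len(b) = j - i and len(c) = 9 - j, these two observations translate into bounds on the cut positions, roughly like this:

    # 1) len(a) <= len(c)  =>  i <= 9 - j, which caps i at 4
    # 2) len(b) <= len(c)  =>  j - i <= 9 - j, i.e. j <= (9 + i) // 2
    for i in range(1, 5):
        for j in range(i + 1, min((9 + i) // 2, 9 - i) + 1):
            pass  # check a * b == c exactly as before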

Last, we can exploit the ordering of the generated permutations that we saw in the first output. The permutations of “123456789” are generated in lexicographic order, changing fastest at the end of the string. So when we loop over all permutations and encounter the first one starting with the digit “5”, we know that we won’t find any new products: two factors that both start with a digit of 5 or higher would multiply to a number with too many digits, so every valid identity shows up at least once with a factor starting with 1 to 4 in front. This allows us to skip a bit more than half of the generated permutations.

Final Solution:
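
Putting all of these tweaks together, the whole program looks roughly like this:

    from itertools import permutations

    products = set()
    for p in permutations("123456789"):
        s = "".join(p)
        if s[0] == "5":
            # No valid identity has both factors starting with 5 or higher,
            # so once the leading digit reaches 5 nothing new can show up.
            break
        for i in range(1, 5):
            for j in range(i + 1, min((9 + i) // 2, 9 - i) + 1):
                a, b, c = int(s[:i]), int(s[i:j]), int(s[j:])
                if a * b == c:
                    products.add(c)

    print(sum(products))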

Final Output:

Find more of my Project Euler Solutions on GitHub.

Material taken from Project Euler is licensed under Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales. More info here.


Hello world!


Well, let’s start a blog today.

