What, Why and How of Big O Notation: Part II

In my last post, What Why of Big Oh Notation, I covered FAQs about Big-Oh. In this post, you'll learn some more general concepts about Big-Oh. Logarithms have always scared non-mathematics students, so let me start with logarithms.
If log₂ 32 = 5, that means 2⁵ = 32.
So, the logarithm of a number to a given base is the exponent to which the base must be raised to produce that number. I hope the word log won't scare you anymore.
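To connect this back to running time, here is a minimal sketch in TypeScript (my own illustration, not from the original post) of the classic intuition: if an algorithm halves its input on every step, the number of steps it takes is roughly log₂ n. The function name halvingSteps is just something I made up for this example.

function halvingSteps(n: number): number {
    // Count how many times n can be halved before it reaches 1.
    // That count is roughly log2(n), which is why divide-and-conquer
    // algorithms such as binary search are O(log n).
    let steps = 0
    while (n > 1) {
        n = Math.floor(n / 2)
        steps++
    }
    return steps
}

console.log(halvingSteps(32))   // 5, because 2⁵ = 32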

Why is the base of the logarithm ignored when computing Big-Oh?
As we know from the Change of Base Theorem, for any a, b > 0:
log_b(n) = log_a(n) / log_a(b)
Rearranging gives
log_a(n) = C · log_b(n)
where C is a constant equal to log_a(b).
Since functions that differ only by a constant factor have the same order of growth, O(log₂ n) is the same as O(log n). Therefore, for logarithmic growth the base of the logarithm is not important, and you can simply write O(log n).
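If you want to see that constant factor with your own eyes, here is a rough TypeScript sketch (my own, not part of the original derivation). It prints the ratio of log base 2 to log base 10 for growing values of n; the ratio stays fixed at log₂ 10 ≈ 3.32 no matter how large n gets.

for (const n of [10, 100, 1000, 1000000]) {
    // The ratio of the two logarithms never changes, so switching the
    // base only rescales the curve by a constant factor.
    console.log(n, (Math.log2(n) / Math.log10(n)).toFixed(4))
}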

O(n!):
O(n!), which represents factorial growth, has nothing to do with actually calculating the factorial of a number. To make this clearer, let's look at the following example.

factorial(num) {
      // keep multiplying num by the factorial of (num - 1) until we reach 1
      if (num == 1) return 1
      return (num * factorial(num - 1))
}

The pseudocode above returns the factorial of a given number. Let's trace what it does for 5:

factorial(5)
= 5 * factorial(4)
= 5 * 4 * factorial(3)
= 5 * 4 * 3 * factorial(2)
= 5 * 4 * 3 * 2 * factorial(1)
= 5 * 4 * 3 * 2 * 1 = 120

This clearly shows a linear growth rate: each call does a constant amount of work and makes one more call with the number reduced by 1, so an input of n produces n calls in total. That means computing a factorial this way is O(n).
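Here is a small runnable TypeScript version of the pseudocode above (the call counter is my own addition) that makes the linear growth visible: factorial(5) triggers exactly 5 calls, factorial(10) would trigger 10, and so on.

let calls = 0

function factorial(num: number): number {
    calls++   // count every recursive call
    if (num === 1) return 1
    return num * factorial(num - 1)
}

console.log(factorial(5), calls)   // 120 5  ->  n calls for an input of n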

The value of the factorial itself, however, grows massively, and some algorithms really do have an n! growth rate. The brute-force approach to the traveling salesman problem, which tries every possible ordering of the cities, is the perfect example of O(n!).
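To show where the n! comes from, here is a hedged brute-force sketch in TypeScript. The 4-city distance matrix is invented purely for illustration; the point is that the algorithm scores every possible ordering of the cities, and the number of orderings grows factorially.

const dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

function permutations(items: number[]): number[][] {
    // Every ordering of the given cities: n items -> n! orderings.
    if (items.length <= 1) return [items]
    const result: number[][] = []
    for (let i = 0; i < items.length; i++) {
        const rest = [...items.slice(0, i), ...items.slice(i + 1)]
        for (const perm of permutations(rest)) {
            result.push([items[i], ...perm])
        }
    }
    return result
}

// Fix city 0 as the start, score every ordering of the rest, and keep the
// cheapest round trip. With n cities that is (n - 1)! tours to check,
// which is still factorial growth.
let best = Infinity
for (const order of permutations([1, 2, 3])) {
    const tour = [0, ...order, 0]
    let cost = 0
    for (let i = 0; i < tour.length - 1; i++) {
        cost += dist[tour[i]][tour[i + 1]]
    }
    best = Math.min(best, cost)
}
console.log(best)   // 80 for this made-up matrix

Add just a handful more cities and the work explodes: with 10 cities there are already 362,880 tours to score from a fixed starting city.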

Did you like this article? If you did, please share it. And if you didn't, let me know how I can improve it.