Behind The Algorithm #5 - 🧘🏻 Asymptotic Notation


Asymptotic Efficiency

When the input size is large enough, the order of growth is often the most impactful part of a function's running time, and sometimes the only part that matters. The multiplicative constants and the lower-order terms contribute very little to the growth of the running time.

Studying asymptotic efficiency means optimizing our algorithm by focusing on its order of growth.

In other words, we optimize an algorithm by looking at how its worst-case running time, as bounded by the notation, behaves while the input size increases without bound. Usually, we can compare algorithms, and justify which one is better, by comparing their asymptotic efficiencies.
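To see why the order of growth wins out, here is a small numeric sketch. The cost functions and their constant factors (1 and 100) are made-up assumptions for illustration: even with a constant factor 100 times larger, the O(n log n) cost eventually falls below the O(n^2) cost.

```python
import math

def quadratic_cost(n):
    # Hypothetical O(n^2) algorithm with a small constant factor.
    return 1 * n ** 2

def linearithmic_cost(n):
    # Hypothetical O(n log n) algorithm with a much larger constant factor.
    return 100 * n * math.log2(n)

for n in [10, 100, 1_000, 10_000, 100_000]:
    winner = "quadratic" if quadratic_cost(n) < linearithmic_cost(n) else "linearithmic"
    print(f"n={n:>7}: n^2={quadratic_cost(n):.0f}, 100*n*log2(n)={linearithmic_cost(n):.0f} -> {winner} wins")
```

For small n the quadratic algorithm is cheaper, but once n is large enough the order of growth dominates the constants.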

Asymptotic Notation

Just to recall: do you remember when we calculated the running time of insertion sort in the previous article? We found that insertion sort has an O(n^2) running time. We also calculated the running time of merge sort, so we know that its running time is O(n log_2 n).
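You can observe that quadratic growth empirically by counting the inner-loop steps of insertion sort. This sketch is an assumption of a standard insertion sort, instrumented with a step counter; the step count roughly quadruples each time the input size doubles.

```python
import random

def insertion_sort_steps(items):
    """Insertion sort that also counts how many inner-loop shifts it performs."""
    a = list(items)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements to the right until key's slot is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            steps += 1
        a[j + 1] = key
    return a, steps

random.seed(0)
for n in [100, 200, 400]:
    _, steps = insertion_sort_steps([random.random() for _ in range(n)])
    print(f"n={n}: {steps} shifts")  # roughly 4x more shifts when n doubles
```

On a reversed input, the shift count is exactly n(n-1)/2, the worst case that gives us O(n^2).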

As you can see, we ignored the lower-order terms, because for large enough inputs the extra precision is usually not worth the effort.

To make it clearer, consider the quadratic running time of insertion sort, f(n) = an^2 + bn + c where a > 0. When n is large, even a tiny fraction of the highest-order term dominates every single lower-order term. So when we throw away the lower-order terms and ignore all of the constants, we get f(n) = O(n^2).
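A quick way to convince yourself of this is to compute what share of f(n) the lower-order terms bn + c actually contribute as n grows. The constants a=1, b=100, c=1000 below are made-up assumptions; the point is that the share shrinks toward zero regardless.

```python
# f(n) = a*n^2 + b*n + c with illustrative (assumed) constants.
a, b, c = 1, 100, 1000

for n in [10, 100, 1_000, 10_000, 100_000]:
    total = a * n**2 + b * n + c
    lower = b * n + c
    # Fraction of the running time contributed by the lower-order terms.
    print(f"n={n:>7}: lower-order share = {lower / total:.1%}")
```

At n = 10 the lower-order terms dominate, but by n = 100,000 they contribute about 0.1% of the total, which is why O(n^2) is an honest summary.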

Alternatively, we can set the constants c_1 = a/8, c_2 = 16a/8 = 2a, and n_0 = 2 · max(|b|/a, sqrt(|c|/a)). You may verify that 0 < c_1 n^2 ≤ an^2 + bn + c ≤ c_2 n^2 for all n ≥ n_0.
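We can check that sandwich inequality numerically. The specific values of a, b, and c below are arbitrary assumptions (any a > 0 works, and b, c may even be negative); the constants c_1, c_2, and n_0 follow the formulas above.

```python
import math

# Arbitrary (assumed) coefficients with a > 0; b and c may be negative.
a, b, c = 2.0, -5.0, 3.0

c1 = a / 8
c2 = 16 * a / 8            # = 2a
n0 = 2 * max(abs(b) / a, math.sqrt(abs(c) / a))

# Verify 0 < c1*n^2 <= a*n^2 + b*n + c <= c2*n^2 for a range of n >= n0.
start = math.ceil(n0)
for n in range(start, start + 1000):
    f = a * n**2 + b * n + c
    assert 0 < c1 * n**2 <= f <= c2 * n**2
print(f"inequality holds for all tested n >= {start}")
```

A finite check is not a proof, of course, but it is a handy sanity test when picking candidate constants.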

We abstracted the running time of insertion sort, an^2 + bn + c, by writing it as O(n^2). That is how we characterize the worst-case running time of insertion sort.

Asymptotic Upper Bound or the Big O notation

The worst-case running time of insertion sort is O(n^2). Let's see what this notation means by looking at this graph first.


We use the Big O notation to give us an upper bound on a function to within a constant factor. As you can see in the image above, the function f(n) is always below cg(n), which is the upper bound on the function's running time, for all values of n greater than (to the right of) or equal to n_0.

Let's just write the function as f(n) = O(g(n)). It means that f(n) is a member of the set O(g(n)). In other words, in our insertion sort case, an^2 + bn + c = O(n^2), where a > 0.
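The membership claim can be sketched as a concrete check: f(n) is in O(g(n)) if there is some constant c and threshold n_0 with f(n) ≤ c·g(n) for all n ≥ n_0. The coefficients and the witness constant below are assumptions chosen for illustration, not the only valid choices.

```python
# Assumed coefficients for f(n) = a*n^2 + b*n + c.
a, b, c = 3, 7, 11

def f(n):
    return a * n**2 + b * n + c

def g(n):
    return n**2

# One simple, safe witness: for n >= 1, b*n <= |b|*n^2 and c <= |c|*n^2.
c_witness = a + abs(b) + abs(c)
n0 = 1

# Check f(n) <= c_witness * g(n) over a range of n >= n0.
assert all(f(n) <= c_witness * g(n) for n in range(n0, 10_000))
print(f"f(n) <= {c_witness} * g(n) for all tested n >= {n0}, so f(n) = O(n^2)")
```

Any larger witness constant works too; Big O only asks that some constant and threshold exist.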