Recently I had a technical interview with a major IT company. During the course of that interview I was repeatedly quizzed on the runtime complexity of various algorithms (hashing, sorting, searching), how I could incorporate them into the solution of a given problem, and what the resultant runtime complexity of that solution would be. Very quickly in the interview I realized that the cursory introduction to Big O notation taught in my algorithm analysis class only scratched the surface. That interview was an eye-opener as to what I need to know in preparation for future interviews. After doing some research, I could not find a simple, well-defined treatment of Big O that was relatively easy to grasp; therefore, I decided to put together the major parts of the information I had gone through into a simple and easy-to-follow explanation.

**Definition:** In computer science, big O notation is used to classify algorithms by how they respond (*i.e.,* in their processing time or working space requirements) to changes in input size. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols *o*, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
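Formally, *f*(*n*) = O(*g*(*n*)) means that there exist positive constants *c* and *n*_{0} such that *f*(*n*) ≤ *c*·*g*(*n*) for all *n* ≥ *n*_{0}; in other words, past some input size, *f* grows no faster than a constant multiple of *g*.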

**Infinite asymptotics:** Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size *n* might be found to be *T*(*n*) = 4*n*^{2} − 2*n* + 2. As *n* grows large, the *n*^{2} term will come to dominate, so that all other terms can be neglected; for instance, when *n* = 500, the term 4*n*^{2} is 1000 times as large as the 2*n* term. Ignoring the latter would have negligible effect on the expression’s value for most purposes. Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term *n*^{3} or *n*^{4}. Even if *T*(*n*) = 1,000,000*n*^{2}, if *U*(*n*) = *n*^{3}, the latter will always exceed the former once *n* grows larger than 1,000,000 (*T*(1,000,000) = 1,000,000^{3} = *U*(1,000,000)). Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either *T*(*n*) = O(*n*^{2}) or *T*(*n*) ∈ O(*n*^{2}) and say that the algorithm has order of *n*^{2} time complexity.
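To see this dominance concretely, here is a minimal C sketch (the helper name T and the sample sizes are mine, purely illustrative) that prints the ratio of *T*(*n*) to its leading term; the ratio approaches 1 as *n* grows:

    #include <stdio.h>

    /* T(n) = 4n^2 - 2n + 2, the example cost function from above. */
    static double T(double n) { return 4 * n * n - 2 * n + 2; }

    int main(void) {
        /* As n grows, the 4n^2 term dominates: T(n) / 4n^2 approaches 1,
           so the lower-order terms -2n + 2 become negligible. */
        for (double n = 10; n <= 1000000; n *= 100) {
            printf("n = %8.0f   T(n) / 4n^2 = %.6f\n", n, T(n) / (4 * n * n));
        }
        return 0;
    }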

**Orders of Growth for Common Functions:** Functions are ordered by scalability and efficiency from best to worst; representative code for most of these classes is sketched after the list.

An O(1) algorithm scales better than an O(log N) algorithm,

which scales better than an O(N) algorithm,

which scales better than an O(N log N) algorithm,

which scales better than an O(N^2) algorithm,

which scales better than an O(2^N) algorithm.

**NB:** The base of the logarithm is irrelevant inside Big O: O(log n) = O(log2 n), because log2 n = log(n) / log(2) and 1/log(2) is just a constant factor.
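To make these growth classes concrete, here is a minimal C sketch with one illustrative routine per class (the function names are mine, not from any library). Merge sort would be the classic O(N log N) example, and enumerating every subset of a set the classic O(2^N) example; both are omitted for brevity.

    #include <stdio.h>

    /* O(1): a single array access, independent of n. */
    int first_element(const int a[]) { return a[0]; }

    /* O(log N): binary search on a sorted array halves the range each step. */
    int binary_search(const int a[], int n, int key) {
        int lo = 0, hi = n;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return -1; /* not found */
    }

    /* O(N): a linear scan touches every element once. */
    long sum(const int a[], int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    /* O(N^2): nested loops examine every pair of elements. */
    int has_duplicate(const int a[], int n) {
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (a[i] == a[j]) return 1;
        return 0;
    }

    int main(void) {
        int a[] = {1, 3, 5, 7, 9};
        int n = (int)(sizeof a / sizeof a[0]);
        printf("%d %d %ld %d\n", first_element(a),
               binary_search(a, n, 7), sum(a, n), has_duplicate(a, n));
        return 0;
    }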

That said, let's look at how we can use Big O notation to describe the runtime complexity of some code.

    int SomeFunction(int n) {
        int p = 1;
        int i = 1;
        while (i < n) {
            int j = 1;
            while (j < i) {
                p = p * j;
                j = j + 1;
            }
            i = i + 1;
        }
        return p;
    }

The first thing we will do is assign a constant to each statement in the function; these constants represent the amount of time it takes to run that statement once.

    int SomeFunction(int n) {
        int p = 1;          ----------------> c1
        int i = 1;          ----------------> c1
        while (i < n) {     ----------------> c2
            int j = 1;      ----------------> c1
            while (j < i) { ----------------> c2
                p = p * j;  ----------------> (c1 + c3)
                j = j + 1;  ----------------> (c1 + c4)
            }
            i = i + 1;      ----------------> (c1 + c4)
        }
        return p;           ----------------> c5
    }

We then multiply each constant by the number of times each statement is run.

    int SomeFunction(int n) {
        int p = 1;          ----------------> c1 x 1
        int i = 1;          ----------------> c1 x 1
        while (i < n) {     ----------------> c2 x n
            int j = 1;      ----------------> c1 x (n - 1)
            while (j < i) { ----------------> c2 x ((1/2)n^2 - (1/2)n)
                p = p * j;  ----------------> (c1 + c3) x ((1/2)n^2 - (3/2)n + 1)
                j = j + 1;  ----------------> (c1 + c4) x ((1/2)n^2 - (3/2)n + 1)
            }
            i = i + 1;      ----------------> (c1 + c4) x (n - 1)
        }
        return p;           ----------------> c5 x 1
    }

You may be wondering how ((1/2)n^2 - (3/2)n + 1) was derived. If you look closely at the code you will notice that it contains a nested, dependent loop. Further analysis reveals that the statements directly inside the outer loop run (n - 1) times, while the body of the inner loop runs 0, 1, 2, 3, …, (n - 2) times across those iterations, which is an arithmetic progression.

The sum of this progression is 0 + 1 + 2 + … + (n - 2); since the sum of an arithmetic progression is the number of terms multiplied by the average of its first and last terms, this comes to (n - 1)(0 + (n - 2))/2 = (n - 2)(n - 1)/2 = (1/2)n^2 - (3/2)n + 1. The inner while condition itself is evaluated one more time than its body runs on each pass through the outer loop, which gives 1 + 2 + … + (n - 1) = (1/2)n^2 - (1/2)n evaluations in total.

Multiplying each constant by the number of times its statement runs and adding everything up gives the total time the function takes to run: (c1 + (1/2)c2 + (1/2)c3 + (1/2)c4)n^2 + ((1/2)c2 - c1 - (3/2)c3 - (1/2)c4)n + (2c1 + c3 + c5). Ignoring the constant term, the lower-order n term, and the coefficient of n^2, we are left with n^2, so the function runs in O(n^2) time.
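As a sanity check on the derivation, here is a small C sketch (the counter and the driver loop are mine, added purely for verification) that counts how many times the inner-loop body of SomeFunction actually executes and compares the count against the closed form (n - 2)(n - 1)/2:

    #include <stdio.h>

    int main(void) {
        for (int n = 2; n <= 10; n++) {
            long body = 0;           /* executions of the inner-loop body */
            int i = 1;
            while (i < n) {
                int j = 1;
                while (j < i) {
                    body++;          /* stands in for p = p * j; j = j + 1; */
                    j = j + 1;
                }
                i = i + 1;
            }
            /* closed form derived above: (n-2)(n-1)/2 = (1/2)n^2 - (3/2)n + 1 */
            long formula = (long)(n - 2) * (n - 1) / 2;
            printf("n = %2d   counted = %3ld   formula = %3ld\n", n, body, formula);
        }
        return 0;
    }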