Big O Analysis Explained


Recently I had a technical interview with a major IT company. During that interview I was repeatedly quizzed on the runtime complexity of various algorithms (hashing, sorting, searching), how I could incorporate them into the solution of a given problem, and what the resulting runtime complexity of that solution would be. Very quickly I realized that the cursory introduction to Big O notation taught in my algorithm analysis class barely scratched the surface. That interview was an eye-opener as to what I need to know in preparation for future interviews. After doing some research, I could not find a simple, well-defined explanation of Big O that was relatively easy to grasp; therefore, I decided to put together the major parts of the information I had gone through into a simple and easy-to-follow explanation.

Definition: In computer science, big O notation is used to classify algorithms by how they respond (i.e., in their processing time or working space requirements) to changes in input size. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
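To make the upper-bound idea precise, here is the standard formal definition (not spelled out above, but it is what the notation means): f(n) = O(g(n)) if and only if there exist a constant c > 0 and a threshold n_0 such that

|f(n)| \le c \cdot |g(n)| \quad \text{for all } n \ge n_0

In words: beyond some input size, f never grows faster than a fixed constant multiple of g.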

Infinite asymptotics: Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n^2 − 2n + 2. As n grows large, the n^2 term comes to dominate, so all other terms can be neglected; for instance, when n = 500, the term 4n^2 is 1000 times as large as the 2n term, and ignoring the latter has a negligible effect on the expression's value for most purposes. Further, the coefficients become irrelevant if we compare against any other order of expression, such as one containing an n^3 or n^4 term. Even if T(n) = 1,000,000n^2, if U(n) = n^3 then the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000^3 = U(1,000,000)). Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So big O notation captures what remains: we write either T(n) = O(n^2) or T(n) ∈ O(n^2).
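To see that dominance concretely, here is a small sketch of my own (not part of the original analysis) that prints T(n) = 4n^2 − 2n + 2 next to n^2 alone; the ratio settles toward 4, which is exactly the constant factor that Big O discards:

#include <stdio.h>

int main(void) {
    /* T(n) = 4n^2 - 2n + 2: as n grows, the 4n^2 term dominates
       and T(n) / n^2 approaches the constant 4. */
    for (long long n = 10; n <= 1000000; n *= 10) {
        long long t = 4 * n * n - 2 * n + 2;  /* the full expression   */
        long long q = n * n;                  /* just the leading term */
        printf("n = %8lld   T(n) = %16lld   T(n)/n^2 = %.6f\n",
               n, t, (double)t / (double)q);
    }
    return 0;
}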

Orders of Growth for Common Functions: The functions below are ordered from best to worst in how their running time scales as the input grows.

Notation | Name | Example
O(1) | constant | Determining if a number is even or odd; using a constant-size lookup table or hash table
O(log log n) | double logarithmic | Finding an item with interpolation search in a sorted array of uniformly distributed values
O(log n) | logarithmic | Finding an item in a sorted array with binary search or in a balanced search tree; all operations on a binomial heap
O(n^c), 0 < c < 1 | fractional power | Searching in a k-d tree
O(n) | linear | Finding an item in an unsorted list, an unsorted array, or a malformed tree (worst case); adding two n-bit integers by ripple carry
O(n log n) = O(log n!) | linearithmic, loglinear, or quasilinear | Fast Fourier transform; heapsort; quicksort (best and average case); merge sort
O(n^2) | quadratic | Multiplying two n-digit numbers by the schoolbook algorithm; bubble sort (worst case or naive implementation); Shell sort; quicksort (worst case); selection sort; insertion sort
O(n^c), c > 1 | polynomial or algebraic | Tree-adjoining grammar parsing; maximum matching for bipartite graphs
L_n[α, c] = e^{(c + o(1)) (ln n)^α (ln ln n)^(1 − α)}, 0 < α < 1 | L-notation or sub-exponential | Factoring a number using the quadratic sieve or number field sieve
O(c^n), c > 1 | exponential | Finding the exact solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search
O(n!) | factorial | Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with expansion by minors

An O(1) algorithm scales better than an O(log N) algorithm,
which scales better than an O(N) algorithm,
which scales better than an O(N log N) algorithm,
which scales better than an O(N^2) algorithm,
which scales better than an O(2^N) algorithm.

NB: O(log n) = O(log2 n), since log2 n = log(n) / log(2); changing the base of the logarithm only changes the result by a constant factor, which Big O ignores.
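As a concrete illustration of the gap between O(n) and O(log n), the sketch below (my own example, not from the original post) searches a sorted array both ways: the linear scan may have to touch every element, while binary search halves the remaining interval on each comparison, so it needs at most about log2(n) + 1 steps.

#include <stdio.h>

/* O(n): examine elements one by one until the key is found. */
int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

/* O(log n): halve the search interval on every comparison.
   Requires the array to be sorted. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of (lo + hi) / 2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[1000];
    for (int i = 0; i < 1000; i++)
        a[i] = 2 * i;                  /* sorted test data */
    printf("linear: index %d, binary: index %d\n",
           linear_search(a, 1000, 1998),
           binary_search(a, 1000, 1998));
    return 0;
}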

That said, let's look at how we can use Big O notation to describe the runtime complexity of some code.

int SomeFunction(int n) {
    int p = 1;
    int i = 1;
    while (i < n) {
        int j = 1;
        while (j < i) {
            p = p * j;
            j = j + 1;
        }
        i = i + 1;
    }
    return p;
}

The first thing we will do is assign a constant to each statement in the function; these constants represent the amount of time it takes to run that statement once.

int SomeFunction(int n) {
    int p = 1; ----------------------------> c1
    int i = 1; ----------------------------> c1
    while (i < n) { -----------------------> c2
        int j = 1; ------------------------> c1
        while (j < i) { -------------------> c2
            p = p * j; --------------------> (c1 + c3)
            j = j + 1; --------------------> (c1 + c4)
        }
        i = i + 1; ------------------------> (c1 + c4)
    }
    return p; -----------------------------> c5
}

We then multiply each constant by the number of times each statement is run.

int SomeFunction(int n) {
    int p = 1; ----------------------------> c1 x 1
    int i = 1; ----------------------------> c1 x 1
    while (i < n) { -----------------------> c2 x n
        int j = 1; ------------------------> c1 x (n - 1)
        while (j < i) { -------------------> c2 x ((1/2)n^2 - (1/2)n)
            p = p * j; --------------------> (c1 + c3) x ((1/2)n^2 - (3/2)n + 1)
            j = j + 1; --------------------> (c1 + c4) x ((1/2)n^2 - (3/2)n + 1)
        }
        i = i + 1; ------------------------> (c1 + c4) x (n - 1)
    }
    return p; -----------------------------> c5 x 1
}

You may be wondering how ((1/2)n^2 − (3/2)n + 1) was derived. If you look closely at the code you will notice that it contains a nested, dependent loop. Further analysis reveals that the body of the outer loop runs (n − 1) times, while the body of the inner loop runs 0, 1, 2, 3, …, (n − 2) times across those iterations, which is an arithmetic progression. (The inner loop's condition is checked once more than its body runs on each pass of the outer loop, which is where the (1/2)n^2 − (1/2)n count for the while (j < i) test comes from.)
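Written out, the sum of that progression is

0 + 1 + 2 + \cdots + (n-2) \;=\; \sum_{k=0}^{n-2} k \;=\; \frac{(n-1)(n-2)}{2} \;=\; \frac{1}{2}n^2 - \frac{3}{2}n + 1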

Multiplying the run time of each statement by the number of times it is executed and summing everything up gives the total time the function takes to run. Collecting terms, that total has the form a·n^2 + b·n + d, where a = c1 + (1/2)c2 + (1/2)c3 + (1/2)c4 and where b and d gather the lower-order contributions of the various constants. Ignoring the constants, the lower-order n terms, and the coefficient of n^2, we are left with n^2; that is, SomeFunction runs in O(n^2).
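If you want to double-check that count, here is a quick instrumented sketch of my own (not from the original post) that counts how many times the inner-loop body actually executes and compares it against (1/2)n^2 − (3/2)n + 1:

#include <stdio.h>

int main(void) {
    for (int n = 2; n <= 10; n++) {
        long body = 0;            /* how often the inner-loop body runs */
        int i = 1;
        while (i < n) {
            int j = 1;
            while (j < i) {
                body++;           /* stands in for p = p * j; j = j + 1; */
                j = j + 1;
            }
            i = i + 1;
        }
        long formula = ((long)n * n - 3L * n + 2) / 2;  /* (n-1)(n-2)/2 */
        printf("n = %2d   measured = %3ld   formula = %3ld\n",
               n, body, formula);
    }
    return 0;
}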

