Big O notation
Using The Master Theorem To Find the Big O Of Recursive Algorithms
For some people, finding the Big O of a recursive algorithm can prove difficult, especially if the algorithm is complex. The Master Theorem provides a relatively simple way to solve recurrence relations, as long as the recurrence fits the theorem's constraints.
Basic definition:
T(n) = aT(n/b) + Θ(n^{c}) where a >= 1, b > 1, and c >= 0
T(n) = Θ(n^{c}) if c > log_{b}a
T(n) = Θ(n^{c} log n) if c = log_{b}a
T(n) = Θ(n^{log_{b}a}) if c < log_{b}a
We can think of "a" as the number of recursive calls (branches) made at each step and "b" as the factor by which the input size shrinks on each call; let's take a look at an example.
If we have a function:
T(n) = T(n/2) + Θ(n^{0}) : a = 1, b = 2, c = 0
log_{2}1 = 0, therefore c = log_{b}a, which satisfies the second case: Θ(n^{c} log n) = Θ(n^{0} log n) = Θ(log n)
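This recurrence is exactly the one produced by binary search, so here is a minimal sketch in C (the function name and test array are our own illustration). Each call halves the search range and does a constant amount of work, giving T(n) = T(n/2) + Θ(1), which the theorem's second case resolves to Θ(log n).

```c
#include <assert.h>

/* Binary search: each recursive call halves the range, so the
   recurrence is T(n) = T(n/2) + Theta(1), i.e. a = 1, b = 2, c = 0.
   Since log_2(1) = 0 = c, the Master Theorem gives Theta(log n). */
int binary_search(const int *arr, int lo, int hi, int target) {
    if (lo > hi)
        return -1;                 /* not found */
    int mid = lo + (hi - lo) / 2;  /* avoids overflow of (lo + hi) */
    if (arr[mid] == target)
        return mid;
    if (arr[mid] < target)
        return binary_search(arr, mid + 1, hi, target); /* right half */
    return binary_search(arr, lo, mid - 1, target);     /* left half */
}
```

For example, searching for 7 in {2, 3, 5, 7, 11, 13} returns index 3 after inspecting only 3 of the 6 elements.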
References:
anupcowkur.com
wikipedia.org
Big O (Omega) Analysis Explained
Recently I had a technical interview with a major IT company. During that interview I was repeatedly quizzed on the runtime complexity of various algorithms (hashing, sorting, searching), how I could incorporate them into the solution of a given problem, and what the resultant runtime complexity of that solution would be. Very quickly I realized that the cursory introduction to Big O notation taught in my algorithm analysis class was just that: an introduction. That interview was an eye-opener as to what I need to know in preparation for future interviews. After doing some research, I could not find any simple, well-defined explanation of Big O that was relatively easy to grasp; therefore, I decided to put together the major parts of the information I had gone through into a simple and easy-to-follow explanation.
Definition: In computer science, big O notation is used to classify algorithms by how they respond (i.e., in their processing time or working space requirements) to changes in input size. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
Infinite asymptotics: Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n^{2} − 2n + 2. As n grows large, the n^{2} term will come to dominate, so that all other terms can be neglected; for instance, when n = 500, the term 4n^{2} is 1000 times as large as the 2n term. Ignoring the latter would have negligible effect on the expression's value for most purposes. Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n^{3} or n^{4}. Even if T(n) = 1,000,000n^{2}, if U(n) = n^{3}, the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000^{3} = U(1,000,000)). Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either T(n) = O(n^{2}) or T(n) ∈ O(n^{2}).
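As a quick sanity check of these numbers, the sketch below (the helper names are our own, used only for illustration) evaluates T(n) = 4n^{2} − 2n + 2 and its individual terms:

```c
#include <assert.h>

/* Step-count polynomial from the example above: T(n) = 4n^2 - 2n + 2. */
long long steps_total(long long n)    { return 4 * n * n - 2 * n + 2; }
long long quadratic_term(long long n) { return 4 * n * n; }
long long linear_term(long long n)    { return 2 * n; }
```

At n = 500, quadratic_term gives 1,000,000 while linear_term gives only 1,000, a factor of exactly 1000; the full polynomial steps_total(500) = 999,002 differs from the bare 4n^{2} term by less than 0.1%.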
Orders of Growth for Common Functions: Functions are ordered by scalability and efficiency from best to worst.
Notation | Name | Example |
---|---|---|
O(1) | constant | Determining if a number is even or odd; using a constant-size lookup table or hash table |
O(log log n) | double logarithmic | Finding an item using interpolation search in a sorted array of uniformly distributed values |
O(log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap |
O(n^c), 0 < c < 1 | fractional power | Searching in a k-d tree |
O(n) | linear | Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry |
O(n log n) | linearithmic, loglinear, or quasilinear | Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort |
O(n^2) | quadratic | Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort (worst case), selection sort or insertion sort |
O(n^c), c > 1 | polynomial or algebraic | Tree-adjoining grammar parsing; maximum matching for bipartite graphs |
L_{n}[α, c], 0 < α < 1 | L-notation or sub-exponential | Factoring a number using the quadratic sieve or number field sieve |
O(c^n), c > 1 | exponential | Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search |
O(n!) | factorial | Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with expansion by minors |
An O(1) algorithm scales better than an O(log N) algorithm,
which scales better than an O(N) algorithm,
which scales better than an O(N log N) algorithm,
which scales better than an O(N^2) algorithm,
which scales better than an O(2^N) algorithm.
NB: O(log n) = O(log_{2} n), since log_{2} n = log(n) / log(2); a change of base contributes only a constant factor, which Big O ignores.
That said, let's look at how we can use Big O notation to describe the runtime complexity of some code.
```c
int SomeFunction(int n) {
    int p = 1;
    int i = 1;
    while (i < n) {
        int j = 1;
        while (j < i) {
            p = p * j;
            j = j + 1;
        }
        i = i + 1;
    }
    return p;
}
```
The first thing we will do is assign a constant to each statement in the function; each constant represents the time it takes to run that statement once.
```c
int SomeFunction(int n) {
    int p = 1;          // c1
    int i = 1;          // c1
    while (i < n) {     // c2
        int j = 1;      // c1
        while (j < i) { // c2
            p = p * j;  // (c1 + c3)
            j = j + 1;  // (c1 + c4)
        }
        i = i + 1;      // (c1 + c4)
    }
    return p;           // c5
}
```
We then multiply each constant by the number of times each statement is run.
```c
int SomeFunction(int n) {
    int p = 1;          // c1 x 1
    int i = 1;          // c1 x 1
    while (i < n) {     // c2 x n
        int j = 1;      // c1 x (n - 1)
        while (j < i) { // c2 x ((1/2)n^2 - (1/2)n)
            p = p * j;  // (c1 + c3) x ((1/2)n^2 - (3/2)n + 1)
            j = j + 1;  // (c1 + c4) x ((1/2)n^2 - (3/2)n + 1)
        }
        i = i + 1;      // (c1 + c4) x (n - 1)
    }
    return p;           // c5 x 1
}
```
You may be wondering how ((1/2)n^2 − (3/2)n + 1) was derived. If you look closely at the code you will notice that it contains a nested, dependent loop. Further analysis reveals that the body of the outer loop runs (n − 1) times, while the body of the inner loop runs 0, 1, 2, 3, …, (n − 2) times across those iterations, which is an arithmetic progression.
The sum of this progression is 0 + 1 + 2 + … + (n − 2) = (n − 1)(n − 2)/2, which in standard form is (1/2)n^2 − (3/2)n + 1. (By the same argument, the inner loop's condition is checked 1 + 2 + … + (n − 1) = (1/2)n^2 − (1/2)n times.) Multiplying each constant by the number of times its statement runs and summing gives the total time the function takes to run -> (c1 + (1/2)c2 + (1/2)c3 + (1/2)c4)n^2 + (−c1 + (1/2)c2 − (3/2)c3 − (1/2)c4)n + (2c1 + c3 + c5). Ignoring the constants, the lower-order n term, and the coefficient of n^2, we are left with n^2, or O(n^2).
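One way to check the derived counts is to instrument the code. The sketch below (inner_body_count is our own variant of SomeFunction) counts how many times the inner-loop body executes, which can be compared directly against the closed form (n − 1)(n − 2)/2:

```c
#include <assert.h>

/* Instrumented copy of SomeFunction: the count variable is our own
   addition and records how many times the inner-loop body runs. */
long long inner_body_count(int n) {
    long long count = 0;
    int i = 1;
    while (i < n) {
        int j = 1;
        while (j < i) {
            count++;   /* one execution of "p = p * j; j = j + 1;" */
            j = j + 1;
        }
        i = i + 1;
    }
    return count;
}
```

For instance, inner_body_count(10) returns 36, which matches (10 − 1)(10 − 2)/2 = 36, confirming the quadratic growth of the dominant term.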
Related articles
- Mathematically analyzing sorting algorithms (peter-eigenschink.at)