In mathematics and computer science, an algorithm is a step-by-step procedure for calculation. Algorithms are used for calculation, data processing, and automated reasoning. - Wikipedia
Algorithms are independent of the programming language and of the platform on which they run. Algorithm analysis is concerned with resources such as computational time, storage, hardware, and bandwidth, but measuring computational time is usually the most important.
The running time of an algorithm is computed by counting the number of steps the algorithm takes on a given problem instance. But defining a unified model for calculating that running time is complicated.
Consider multiplying two numbers versus adding two numbers. On most processors multiplication takes longer than addition, so the two operations will not take the same number of steps. To simplify further, compare adding two numbers in two languages such as C and Java: even then the addition will not take exactly the same number of steps. So an exact analysis of an algorithm depends on the processor, the language, even different versions of the same language, and on how the algorithm is implemented.
To make analysis easier and more reliable, the RAM (Random Access Machine) model of computation plays a vital role. Its assumptions simplify the analysis of an algorithm without losing the important aspects. Some of the basic assumptions of the RAM model are:
- Simple mathematical and logical operations like +, -, *, =, floor, ceiling, and if-comparisons each take one time step.
- A loop is not a simple operation: it is a composition of many single-step operations, and its cost depends on the number of iterations.
- A memory access (load, store, or copy of an item) takes one unit of time, regardless of where the data lives (cache, main memory, or disk).
So with this model, adding two numbers and multiplying two numbers each take one unit of time, and adding two numbers has the same cost on any platform and in any language.
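Here is a minimal sketch of step counting under these assumptions, in C. The function `sum_with_steps` and the exact per-operation charges are illustrative choices for this post, not a standard accounting:

```c
#include <stdio.h>

/* Sum an n-element array, counting RAM-model steps as we go.
   Under the RAM model each assignment, comparison, addition,
   and memory access is charged one unit of time. */
long sum_with_steps(const int a[], int n, long *steps) {
    long total = 0;                 /* 1 step: assignment */
    *steps = 1;
    for (int i = 0; i < n; i++) {   /* per iteration: 1 comparison + 1 increment */
        total += a[i];              /* 1 array access + 1 addition */
        *steps += 4;
    }
    *steps += 1;                    /* the final comparison that ends the loop */
    return total;
}

int main(void) {
    int a[] = {3, 1, 4, 1, 5};
    long steps;
    long total = sum_with_steps(a, 5, &steps);
    /* the step count grows linearly with n, independent of platform */
    printf("sum = %ld, RAM-model steps = %ld\n", total, steps);
    return 0;
}
```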
Complexity
Using the RAM model, the number of steps an algorithm takes can be counted by executing it. But the count also depends on the nature of the input data: sorting an already sorted integer array takes far fewer steps than sorting an unsorted one. So to characterize running time you would have to consider every possible input of size n. This brings in the notion of the best-, worst-, and average-case complexity of an algorithm:
- The worst-case complexity: the maximum number of steps taken on any instance of size n.
- The best-case complexity: the minimum number of steps taken on any instance of size n.
- The average-case complexity: the average number of steps taken over all instances of size n.
Best-case and average-case complexity are subjective and difficult to calculate: what counts as a best or average input depends on many factors, such as the distribution the inputs are drawn from. So it is usually better to avoid those complications and analyze only the worst case.
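To make the best and worst case concrete, here is a minimal linear-search sketch in C; the case analysis in the comments assumes the RAM model above:

```c
#include <stdio.h>

/* Linear search: return the index of key in a[0..n-1], or -1 if absent.
   Best case:    key is at a[0]  -> 1 comparison.
   Worst case:   key is absent   -> n comparisons.
   Average case: key present, equally likely at any position -> about n/2. */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (a[i] == key)
            return i;
    }
    return -1;
}

int main(void) {
    int a[] = {7, 2, 9, 4, 6};
    printf("%d\n", linear_search(a, 5, 7)); /* best case: found at index 0 */
    printf("%d\n", linear_search(a, 5, 8)); /* worst case: -1 after 5 comparisons */
    return 0;
}
```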
Big-Oh Notation (O)
f(n) = 53n^2 + 654n - log n
Performing a precise worst-case analysis of the function above would be a daunting task. But the equation already tells us that the time grows quadratically with n. So it is much easier to talk in terms of upper and lower bounds on the time complexity. Big Oh simplifies the analysis by ignoring the levels of detail that do not impact our comparison of algorithms. For example, the Big Oh analysis of the two functions below is the same:
f(n) = 5n
g(n) = n
So constant factors like the 5 in the case above are ignored when comparing two algorithms. Let's take a look at the formal definitions of Big Oh and the related notations:
- f(n) = O(g(n)) means c.g(n) is an upper bound on f(n): there exists some constant c such that f(n) <= c.g(n) for all sufficiently large n. {Big Oh Notation}
- f(n) = Ω(g(n)) means c.g(n) is a lower bound on f(n): there exists some constant c such that f(n) >= c.g(n) for all sufficiently large n. {Omega Notation}
- f(n) = Ɵ(g(n)) means c1.g(n) is an upper bound on f(n) and c2.g(n) is a lower bound on f(n): there exist constants c1 and c2 such that f(n) <= c1.g(n) and f(n) >= c2.g(n) for all sufficiently large n. {Theta Notation}
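As a worked example, apply the definition to the function from the start of this section, f(n) = 53n^2 + 654n - log n. For n >= 1 we have 654n <= 654n^2 and -log n <= 0, so f(n) <= 53n^2 + 654n^2 = 707n^2. Picking c = 707 gives f(n) <= c.n^2 for all n >= 1, hence f(n) = O(n^2).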
The Big Oh notation enables us to ignore details and focus on the big picture. It doesn't care that one sorting algorithm sorts 3 times faster than another; it is about which algorithm sorts faster when the input size is, say, 10,000 items. When we say "the running time is O(n^2)," we mean that there is a function f(n) that is O(n^2) such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value of f(n). Equivalently, the worst-case running time is O(n^2). Big Oh describes the asymptotic behavior of an algorithm, that is, its rate of growth. It is also an upper bound which might not be tight: binary search runs in O(log n) time, but it is also, less usefully, O(n).
To compare the efficiency of algorithms: Big Oh notation & worst-case analysis.
Big Oh Big picture
- Constant functions
Multiplying two numbers, printing "Hello World", adding or averaging two numbers, accessing an array element by index.
- Logarithmic functions
Finding an item in a balanced BST, looking up a name in an address book. Grows faster than a constant function.
- Linear functions
Looking at each item once (or twice, or 10 times) in a list/array, counting the number of items in a linked list.
- Superlinear functions
Quicksort (on average), mergesort, heapsort. Grows a little faster than linear.
- Quadratic functions
Bubble sort, selection sort. The cost of looking at most or all pairs of items in an n-element universe (see the code sketch after this list).
- Cubic functions
Iterating through all triplets of items in a list.
- Exponential functions
Iterating over all subsets of n items.
- Factorial functions
f(n) = n!
Generating all permutations or orderings of n items, or solving the travelling salesman problem by brute force.
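Here is a rough sketch of the constant, linear, and quadratic classes in C; the function names and examples are illustrative only:

```c
#include <stdio.h>

/* O(1): constant -- access an element by index, cost independent of n */
int first(const int a[]) {
    return a[0];
}

/* O(n): linear -- look at each item exactly once */
long sum(const int a[], int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* O(n^2): quadratic -- look at every pair of items */
int count_equal_pairs(const int a[], int n) {
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])
                count++;
    return count;
}

int main(void) {
    int a[] = {1, 2, 1, 3, 2};
    printf("%d %ld %d\n", first(a), sum(a, 5), count_equal_pairs(a, 5));
    return 0;
}
```

Adding a third nested loop over the items would give the cubic case.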
n! >> 2^n >> n^3 >> n^2 >> n log n >> n >> log n >> 1
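For instance, at n = 10: n! = 3,628,800; 2^n = 1,024; n^3 = 1,000; n^2 = 100; n log n ≈ 33 (with a base-2 log); n = 10; log n ≈ 3.3; and the constant function is 1.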