What is the difference between Big O and Big Omega notations?

The difference is that Big O notation is used to describe the worst case running time for an algorithm, whereas Big Omega notation is used to describe the best case running time for that same algorithm.

Let’s go through a simple example to show you the difference between Big O and Big Omega.

An example algorithm to compare Big O and Big Omega notations

If you have read our tutorial on Big O Notation, then you are already familiar with the example below. Either way, the example is based on this problem:

Let’s suppose that we want to create a function that, when given an array of integers greater than 0, will return the integer that is the smallest in that array.

Now, here is a solution to the problem in which we simply compare each value in the array to all of the other values in the array. If a given value is less than or equal to every other value, then we know that it is the smallest number in the array.


int CompareToAllNumbers (int array[ ])
{
	bool isMin;

	int x, y;

	/* iterate through each element in the array,
	   assuming there are only 10 elements:
	*/
	for (x = 0; x < 10; x++)
	{
		isMin = true;

		for (y = 0; y < 10; y++)
		{
			/* compare the value in array[x] to the other values;
			   if we find that array[x] is greater than any of the
			   values in array[y], then we know that the value in
			   array[x] is not the minimum

			   remember that 'x' and 'y' both index the exact same
			   array; we are just taking out one value at index 'x'
			   and comparing it to the other values at index 'y'
			*/
			if (array[x] > array[y])
				isMin = false;
		}

		if (isMin)
			break;
	}

	/* 'x' is declared outside the loops (and not re-declared
	   inside them), so it still indexes the minimum element here */
	return array[x];
}
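
Here is a minimal, hypothetical driver showing how the function above might be called. The sample values and the main function are an illustration, not part of the original solution, and it assumes CompareToAllNumbers is defined in the same file:

#include <cstdio>

int main()
{
	/* hypothetical test data; CompareToAllNumbers assumes exactly 10 elements */
	int numbers[10] = { 7, 3, 9, 4, 12, 8, 15, 6, 11, 2 };

	int smallest = CompareToAllNumbers(numbers);

	printf("Smallest value: %d\n", smallest);   /* prints: Smallest value: 2 */

	return 0;
}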

The worst case running time is our Big O

Let’s try to find the worst case running time for the solution given above; this will be our Big O for this algorithm. For the CompareToAllNumbers function, let’s assume that the smallest integer is in the very last element of the array, because that is exactly the scenario that takes the longest to run: the loop has to reach the very last element before it finds the smallest one. Since we are taking each element in the array and comparing it to every other element in the array, we will be doing 100 comparisons, assuming, of course, that our input size is 10 (10 * 10 = 100). Or, if we use a variable “n” to represent the input size, that will be n² ‘touches’ of the input. Thus, this function is an O(n²) algorithm. This is the Big O of the algorithm, but not the Big Omega!

The best case running time is our Big Omega

What would be the best case scenario in our algorithm above? Well, what if the smallest integer is in the very first element of the array? If our input array is of size 10, then the algorithm will only need to do 10 comparisons of the first element against the other elements of the array. And, if we use a variable “n” to represent the input size, that will be n ‘touches’ of the input. This is clearly the best case running time. Because Big Omega is used to represent the best case running time, the Big Omega of this algorithm is Ω(n).
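
To make those two counts concrete, here is a small sketch that instruments the same compare-to-all loop and prints how many comparisons it performs. The CountComparisons helper and the sample arrays are illustrative assumptions, not part of the original article:

#include <cstdio>

/* instrumented version of the compare-to-all idea above
   (hypothetical helper, not from the original solution) */
int CountComparisons(int array[], int n)
{
	int comparisons = 0;

	for (int x = 0; x < n; x++)
	{
		bool isMin = true;

		for (int y = 0; y < n; y++)
		{
			comparisons++;              /* count every 'touch' of the input */
			if (array[x] > array[y])
				isMin = false;
		}

		if (isMin)
			break;                      /* smallest element found, stop early */
	}

	return comparisons;
}

int main()
{
	/* worst case: the smallest value (1) sits in the last slot */
	int worstCase[10] = { 9, 8, 7, 6, 5, 4, 3, 2, 10, 1 };

	/* best case: the smallest value (1) sits in the first slot */
	int bestCase[10] = { 1, 9, 8, 7, 6, 5, 4, 3, 2, 10 };

	printf("Worst case comparisons: %d\n", CountComparisons(worstCase, 10));  /* 100 */
	printf("Best case comparisons: %d\n", CountComparisons(bestCase, 10));    /* 10 */

	return 0;
}

With an input size of 10, the worst-case arrangement touches the input 10 * 10 = 100 times, while the best-case arrangement touches it only 10 times, matching the O(n²) and Ω(n) figures above.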

Big Omega is for lower bound, Big O is for upper bound

You will also often hear the term “bound” being used with regard to Big O and Big Omega. What that means is that Big O is used to represent the upper bound on the running time of an algorithm, which is also the “worst case”. Big Omega is used to represent the lower bound, which is also the “best case” for that algorithm.
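
For reference, the word “bound” also has a precise textbook meaning that is not spelled out above. Roughly:

f(n) = O(g(n)) means there are constants c > 0 and n0 such that f(n) <= c * g(n) for every n >= n0 (g is an upper bound on f).

f(n) = Ω(g(n)) means there are constants c > 0 and n0 such that f(n) >= c * g(n) for every n >= n0 (g is a lower bound on f).

When both bounds hold for the same g(n), the running time is Big Theta of g(n), the tight bound that Marc brings up in the comments below.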

Now, hopefully you understand the major difference between Big Omega and Big O notation!


  • Pallavi

    Doesn’t f(n) = Big O(g(n)) mean that for all inputs the complexity is less than g(n), but for some input it is equal to g(n)? That is, the analogy is f(n) <= g(n), and not f(n) < g(n)?

  • Josh

    What Marc said.

    Big-O/Omega/Theta all pertain to the worst-case time complexity of an algorithm. To reiterate on Marc’s comment, Big-O signifies the upper bound on the worst case running time for an algorithm for ALL inputs of any size.

    This means that to show an algorithm is Big-O of g(n), a function on the input size n, one must prove that every input of size n runs in fewer “steps” than g(n).

    Big-Omega also describes the worst case running time of an algorithm, but the main difference from Big-O is that, to show that an algorithm is Big-Omega of some function g(n), one must show that there exists some input type x, of any size n, that takes more steps than g(n).

    For example, to prove that the operation/algorithm INSERT(S, x) is O(log n), where S is a max heap, and x is some value, you show that for every type of input x, the algorithm must run in less time than log(n), where n is the size of the input, for ALL input sizes.

    To prove it is Omega(log n), it suffices to provide a type of input x that makes the algorithm run in greater time than g(n) for all input sizes n. In this case, x can be a value that is greater than max(S), which would cause the algorithm to always take at least log(n) steps, since it needs to bubble x up to the root of the heap from a leaf node. The number of swaps will be equal to the height of the heap, which is log(n) by definition.


  • Marc

    This is a common misconception. This notation refers to the bounds and only the bounds. The worst case running time can be upper and lower bounded. When the bounds are equal, you can say the algorithm is worst case Big-Theta. For example, the worst case running time of Bubble Sort is Big-Theta(n^2) (that is, it is also Big-Omega and Big-O of n^2). However, it is best case Big-Theta(n).
