# How does Big-O Notation work, and can you provide an example?

First and foremost, do not walk into a software interview without knowing what Big O analysis is all about – you will embarrass yourself. Big O notation is simply something you must know if you expect to get a job in this industry. Here we present a tutorial on Big O notation, along with some simple examples to help you understand it. You can consider this article a sort of "big O notation for dummies" tutorial, because we really try to make it easy to understand.

## What is Big O analysis in computer science – a tutorial

When solving a computer science problem there will usually be more than just one solution. These solutions will often be in the form of different algorithms, and you will generally want to compare the algorithms to see which one is more efficient.

This is where Big O analysis helps – it gives us a basis for measuring the efficiency of an algorithm. A more precise definition would be this: Big O analysis describes how the running time of an algorithm grows as a function of the input size. Think of the input simply as what goes into a function – whether it be an array of numbers, a linked list, etc.

Sounds quite boring, right?

It’s really not that bad at all – and it is something best illustrated by an example with actual code samples.

## Big O Notation Practice Problems

Even if you already know what Big O notation is, you can still check out the example algorithms below and try to figure out the Big O of each one on your own before reading our answers. Working through the problems yourself is good practice.

## Big O Notation Examples in Java

Now it’s really time to pay attention – let’s start our explanation of Big O Notation with an actual problem. Here is the problem we are trying to solve:

Let’s suppose that we want to create a function that, when given an array of integers greater than 0, will return the integer that is the smallest in that array.

In order to best illustrate the way Big-O analysis works, we will come up with two different solutions to this problem, each with a different Big-O efficiency.

Here’s our first function that will simply return the integer that is the smallest in the array. The algorithm will just iterate through all of the values in the array and keep track of the smallest integer in the array in the variable called curMin.

Let’s assume that the array being passed to our function contains 10 elements – this number is something we arbitrarily chose. We could have said it contains 100, or 100000 elements – either way it would have made no difference for our purposes here.

## The CompareSmallestNumber Java function

```
int CompareSmallestNumber (int[] array)
{
    // set smallest value to first item in array
    int curMin = array[0];

    /* iterate through the array to find the smallest value,
       and also assume there are only 10 elements */
    for (int x = 1; x < 10; x++)
    {
        if (array[x] < curMin)
        {
            curMin = array[x];
        }
    }

    // return smallest value in the array
    return curMin;
}
```

As promised, here is a second solution to the problem that uses a different algorithm – we will compare the Big O notation of the two solutions shortly. What we do in our second solution is compare each value in the array to all of the other numbers in the array; if that value is less than or equal to all of the other numbers, then we know it is the smallest number in the array.

## The CompareToAllNumbers Java function

```
int CompareToAllNumbers (int[] array)
{
    boolean isMin;
    int x;

    /* iterate through each element in the array,
       assuming there are only 10 elements:
    */
    for (x = 0; x < 10; x++)
    {
        isMin = true;

        for (int y = 0; y < 10; y++)
        {
            /* compare the value in array[x] to the other values;
               if we find that array[x] is greater than any of the
               values in array[y], then we know that the value in
               array[x] is not the minimum

               remember that 'x' and 'y' index the same array - we
               are just taking the value at index 'x' and comparing
               it to the other values in the array at index 'y'
            */
            if (array[x] > array[y])
                isMin = false;
        }

        if (isMin)
            break;
    }

    return array[x];
}
```

Now, you've seen 2 functions that solve the same problem - but each one uses a different algorithm. We want to be able to say which algorithm is more efficient using mathematical terms, and Big-O analysis allows us to do exactly that.
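To see the two solutions side by side, here is a minimal, runnable sketch – the wrapper class name MinFinder and the test array are our own, but the two methods are the same algorithms described above, with the hardcoded size of 10 kept intact:

```java
public class MinFinder {

    // first solution: one linear scan, tracking the current minimum
    static int CompareSmallestNumber(int[] array) {
        int curMin = array[0];
        for (int x = 1; x < 10; x++) {
            if (array[x] < curMin) {
                curMin = array[x];
            }
        }
        return curMin;
    }

    // second solution: compare every element against every other element
    static int CompareToAllNumbers(int[] array) {
        int x;
        for (x = 0; x < 10; x++) {
            boolean isMin = true;
            for (int y = 0; y < 10; y++) {
                if (array[x] > array[y]) {
                    isMin = false;
                }
            }
            if (isMin) {
                break;
            }
        }
        return array[x];
    }

    public static void main(String[] args) {
        int[] numbers = {42, 17, 93, 8, 56, 71, 24, 3, 65, 39};
        System.out.println(CompareSmallestNumber(numbers)); // prints 3
        System.out.println(CompareToAllNumbers(numbers));   // prints 3
    }
}
```

Both functions return the same answer – the interesting question, which Big O analysis answers, is how much work each one does to get there.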

## Big O analysis of algorithms

For our purposes, we assumed an input size of 10 for the array. But when doing Big O analysis, we don't want to use specific numbers for the input size - so we say that the input is of size n.

Remember that Big-O analysis is used to measure the efficiency of an algorithm based on the time it takes for the algorithm to run as a function of the input size.

When doing Big-O analysis, "input" can mean a lot of different things depending on the problem being solved. In our examples above, the input is the array that is passed into the different functions. But, input could also be the number of elements of a linked list, the nodes in a tree, or whatever data structure you are dealing with.

Since input is of size n, and in our example the input is an array - we will say that the array is of size n. We will use the 'n' to denote input size in our Big-O analysis.

So, the real question is how Big-O analysis measures efficiency. Basically, Big O expresses how many times the 'n' input items are 'touched'. The word 'touched' can mean different things in different algorithms – in some algorithms it may mean the number of times a constant is multiplied by an input item, the number of times an input is added to a data structure, etc.

But in our functions CompareSmallestNumber and CompareToAllNumbers, it just means the number of times an array value is compared to another value.

## Big O notation time complexity

In the function CompareSmallestNumber, the n input items (we used 10 items, but let's just use the variable 'n' for now) are each 'touched' only once, when each one is compared to the current minimum value. In Big O notation this is written as O(n), which is also known as linear time. Linear time means that the time taken to run the algorithm increases in direct proportion to the number of input items – so 80 items take roughly twice as long as 40 items. Another way to phrase this is to say that the algorithm used in the CompareSmallestNumber function has order of n time complexity.
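To make the 'touch' counting concrete, here is a small instrumented sketch (the class name, counter, and the generalization to any size n are our own) that runs the same linear scan for several input sizes and counts the comparisons – note how the count grows in direct proportion to n:

```java
public class LinearTouches {

    // same algorithm as CompareSmallestNumber, but generalized to
    // size n and instrumented with a comparison counter
    static int countTouches(int n) {
        int[] array = new int[n];
        int curMin = array[0];
        int touches = 0;
        for (int x = 1; x < n; x++) {
            touches++;               // one comparison per element
            if (array[x] < curMin) {
                curMin = array[x];
            }
        }
        return touches;
    }

    public static void main(String[] args) {
        System.out.println(countTouches(10));    // 9
        System.out.println(countTouches(100));   // 99
        System.out.println(countTouches(1000));  // 999
    }
}
```

Multiplying the input size by 10 multiplies the number of comparisons by (roughly) 10 – that is exactly what linear time means.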


You might also see that in the CompareSmallestNumber function, we initialize the curMin variable to the first value of the input array. And that does count as 1 'touch' of the input. So, you might think that our Big O notation should be O(n + 1). But actually, Big O is concerned with the running time as the number of inputs - which is 'n' in this case - approaches infinity. And as 'n' approaches infinity the constant '1' becomes very insignificant - so we actually drop the constant. Thus, we can say that the CompareSmallestNumber function has O(n) and not O(n + 1).

Also, if we have n^3 + n, then as n approaches infinity it's clear that the "+ n" becomes very insignificant – so we drop the "+ n", and instead of having O(n^3 + n), we have O(n^3), or order of n^3 time complexity.
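A quick numeric check (the class name is ours) shows why the "+ n" term can be dropped: the ratio (n^3 + n) / n^3 approaches 1 as n grows, so for large inputs the lower-order term contributes essentially nothing.

```java
public class DropLowerOrder {
    public static void main(String[] args) {
        // as n grows, (n^3 + n) / n^3 gets closer and closer to 1
        for (long n : new long[]{10L, 1000L, 100000L}) {
            double ratio = (double) (n * n * n + n) / (double) (n * n * n);
            System.out.println("n = " + n + ", ratio = " + ratio);
        }
    }
}
```

At n = 10 the ratio is 1.01; by n = 100000 it differs from 1 by only one part in ten billion.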

## What is Big O notation worst case?

Now, let's do the Big O analysis of the CompareToAllNumbers function. The worst case in our example is the input scenario that takes the longest for the CompareToAllNumbers function to run. When does that scenario occur?

Well, let's think about the worst-case running time for the CompareToAllNumbers function and use that as the basis for its Big O notation. For this function, let's assume that the smallest integer is in the very last element of the array, because that is the exact scenario that will take the longest to run – the outer loop has to reach the very last element before it finds the smallest one. Since we are taking each element in the array and comparing it to every other element in the array, that means we will be doing 100 comparisons, assuming our input size is 10 (10 * 10 = 100). Or, if we use the variable "n" to represent the input size, that will be n^2 'touches' of the input. Thus, this function uses an O(n^2) algorithm.
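The n^2 count is easy to verify with an instrumented sketch (the class name, counter, and the generalization to any size n are our own): a version of CompareToAllNumbers that counts comparisons, fed the worst-case input where the smallest value sits at the end of the array.

```java
public class QuadraticTouches {

    // same algorithm as CompareToAllNumbers, generalized to size n
    // and instrumented with a comparison counter
    static int countTouches(int[] array) {
        int n = array.length;
        int touches = 0;
        int x;
        for (x = 0; x < n; x++) {
            boolean isMin = true;
            for (int y = 0; y < n; y++) {
                touches++;           // one comparison per inner iteration
                if (array[x] > array[y]) {
                    isMin = false;
                }
            }
            if (isMin) {
                break;
            }
        }
        return touches;
    }

    public static void main(String[] args) {
        // worst case: smallest element last, so the outer loop runs n times
        int[] worst = new int[10];
        for (int i = 0; i < 10; i++) {
            worst[i] = 10 - i;       // 10, 9, ..., 1
        }
        System.out.println(countTouches(worst)); // prints 100 (10 * 10)
    }
}
```

Every element is compared against all n others before the minimum is found, giving n * n = n^2 comparisons in the worst case.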

## Big O analysis measures efficiency

Now, let's compare the 2 functions: CompareToAllNumbers is O(n^2) and CompareSmallestNumber is O(n). Let's say that we have 10,000 input elements: CompareSmallestNumber will 'touch' on the order of 10,000 elements, whereas CompareToAllNumbers will 'touch' 10,000 squared, or 100,000,000 elements. That's a huge difference – you can imagine how much faster CompareSmallestNumber runs compared to CompareToAllNumbers, especially when given a very large number of inputs. Efficiency can make a huge difference, and it's important to know how to create efficient solutions.

In an interview, you may be asked what the Big-O of an algorithm that you've come up with is. And even if not directly asked, you should provide that information in order to show that you are well aware of the need to come up with an efficient solution whenever possible.

## What about Big Omega notation?

Big O and Big Omega notations are not the same thing. You can read about the differences here: Big O versus Big Omega.


• Kyle

Isn’t syntax just about following the language rules so the interpreter can understand your code? I thought this would be considered an optimization question rather than a syntax question.

• Kyle

What math is involved in knowing to reduce n^3 + n to just n^3? Limits don’t work, because there’s no limit as n approaches infinity, so n^3 would be the same as n.

• Kyle

So O(n) means the speed of the algorithm is a function of its input, and O(n^2) means the speed of the algorithm is a function of the square of its input, right? But doesn’t that still mean that you can’t assume that an O(n^2) algorithm will run faster than a O(n) algorithm? Because the speed might be calculated as different functions of different inputs. If the first algorithm is 1000*n and the second algorithm is 100*n^2, then 1000*4 is still greater than 100*4^2. So wherein lies the usefulness of big O notation?

• Kyle

“You might also see that in the CompareSmallestNumber function, we initialize the curMin variable to the first value of the input array. And that does count as 1 ‘touch’ of the input. So, you might think that our Big O notation should be O(n + 1). But actually, Big O is concerned with the running time as the number of inputs – which is ‘n’ in this case – approaches infinity. And as ‘n’ approaches infinity the constant ‘1’ becomes very insignificant – so we actually drop the constant. Thus, we can say that the CompareSmallestNumber function has O(n) and not O(n + 1).”

Wouldn’t it be O(n) anyway? Because you initialize using the first value in the array and then skip over that value in the loop, meaning the number of touches is always just the length of the array, not n + 1.

• +1

• Omar Moodie

Beautiful tutorial. Thank you.

• Amit

good example.Thank you 🙂

• orangetinyterrors

The snippets are a bit strange isolated from the article because of the explicit array size. At first glance, I would have said that the first one was only O(1) because it is O(n) only up to n=10 and O(1) for any n>10. Why not just pass the size into the function to avoid such confusion?

• Shekhar Sherikar

Agreed.
But it seems that increasingly interviews in organizations are taken on the basis of such syntax and jargon which would puzzle a real programmer and venerate anyone with the capability to mug through the questions.
Nice and very lucidly simple tutorial though. Highly appreciated.

• Rupinderjit

Because, if there are an infinite number of elements, then that (infinite − 1) comparison is insignificant. It will be considered infinite comparisons anyway.

• crazy

why is the first function not O(n-1), since we are touching elements 9 times, not 10? We are comparing values 9 times. Shouldn't it be O(n-1)?

• Carl Eckhardt

Simply because you’re only accessing the top of the stack when you pop, unless it’s a fancy queue / stack that pops from deeper into the stack/queue.

• Muaz Aag

Justify
why the complexity of all stack methods are O(1)?. Your justification MUST be supported by reliable sources. help me.. email at muazaag@gmail.com

• Mayank

Awesome explanation!!

• Zach Peterson

“Remember that Big-O analysis is used to measure the efficiency of an algorithm based on the time it takes for the algorithm to run as a function of the input size. ”
FYI… it doesn’t use TIME, it’s STEPS. Big O does -not- measure time in any way, shape, or form. Why? Because processors, compiler optimization algorithms, etc., all have “isms” that can improve a function’s time over what you can describe mathematically.
So, it’s based on STEPS. Steps = complexity.
Big O does not compare to function points. The IFPUG (.org) has been doing this for decades longer than Big O has been a buzzword, and is far more reliable imho. With complexity measurements using function points, I can tell you not only how complex a function is but how long it’ll take to implement and how much it’ll cost you to do so.
Dr. B

• alka

for(icount2=1; icount2<=n; icount2=icount2*2)
{
for(icount3=1; icount3<=n; icount3=icount3*2)
{
……
…..}
}
what is the time complexity of this code????????
thnku

• Anand Singh

Nice explanation. Good

• Aman Singh

Nice One. Now i understood the whole concept clearly..!!!

• mansi

Nice like it….. Thanks

• Mihir

amazing tutorials! 😀
thanks a lot for this!

• Drone

I like it but I tend to dislike ‘syntaxy’ type questions. I know MANY amazing programmers with an innate ability to create tight, highly efficient algorithms but ask them about ‘Big O Notation’ and they will literally draw a blank – as-in: they’ll look at you like an alien. You see this a lot in college (vs university) educated and/or self-taught techs – myself included.

Because of this fact I would rather ask them to SOLVE a problem, then explain to me in their words WHY they chose that solution. In the end, they will give an explanation similar to yours, without ever mentioning this specific jargon (i.e. they’ll explain how the alternative requires 100,000 loops vs. their 10,000 without mention of Big O Notation).

It’s for this same reason that I tend to look beyond syntax when quizzing. As I see it, any monkey can memorize syntax, but it requires a brain to logically approach and solve problems!

• Sarvesh Singh

awsum 🙂 🙂 Thx so much

• Chirag

Very Useful..Thanks you!!!

• eriksala

I needed to review this for my class this summer. Thanks.

• sinatra

I didn’t even understand a thing

• Karthik

You made me realize how stupid I was to think that big O is some kind of rocket science. Thank you, very well explained, love the way you present your case.

• mwabini

fantastic, just fantastic. it really has helped me. Keep up!

• alex

not only extremely useful but pleasant to read
amazing site
wit <3 from russia

• vivek

good explanation….

• Sachin

Nice tutorial. Now I don’t fear of Big O any more. 🙂

• Marshell

Clear explanation .. like it !!

• Sanaya Khan

niceeeeeeeeeee

• Suriya Shanmugam

A vivid explanation like I never heard before. Gratitude to you

• Varoon

You're right Nick, it's been fixed. Thanks!

• nick

this is wrong "Big O Notation, also known as Big Omega Notation" Big O is upper bound while Big Omega is lower bound

• anonfromukstudyingcs

Dude.. Thanks for this clear explanation makes more sense now to me!

• David Gao

I probably will attend an technical interview next week, and it is so lucky that I found this website, very useful information.

• Vuyani Billy Nyathikazi

thanks….this is very useful…..i always visit this site and wow..i like the tutorials…very easy to understand and straight forward

thank you sir… it is so useful… clear about all the things!

• charan

very useful…keep going..:)

• ITISHA JAIMAN

got everything right…… it's really very useful!!!!!!!!