**Introduction**

Big O notation, i.e., expressing the time or space complexity of an algorithm in Big O terms, comes into play when you want to estimate how much time or space your algorithm consumes.

We all know that finding a solution to a problem is not enough; solving that problem in the minimum possible time and space also matters.

That said, you may not care much about how your algorithm performs until you work on real-life projects or do competitive programming, because these are the places where you are required not only to find a solution but also to optimize it in terms of space and time wherever possible.

If we talk about competitive programming, you want your algorithm to be faster than everyone else's; otherwise, you won't be able to outperform them.

On the other hand, when you're working on real-life projects as a software developer, you want your application's loading time and other tasks to execute quickly, and you care about space complexity as well.

In other words, you can say you're on the right track to learning __Data Structures__ and algorithms, or improving your development skills, when you start looking at the complexity of your algorithms.

So far, we've seen what complexity is and why we study it, but what exactly is Big O? **Big O establishes a worst-case running time.**

I have a nephew named Jay, and he has a billion toys!

Kidding, of course, but he has lots of toys, and he's messy about keeping them in place. Because of this, when his friends come over to play, it takes him forever to find a particular toy, and he wants to find it as soon as possible. He came to me, so I asked him to clean his room and keep his toys in order, but he won't listen to that, because, you know, kids!

So, basically, he wants a search algorithm for finding a toy. He's torn between simple search and binary search. (Don't worry if you don't know about these.)

The trick is that the algorithm should be fast as well as correct. On the one hand, binary search is faster, and Jay has about 20 seconds before his friend gets bored waiting for a particular toy.

On the other hand, a simple search is easy to write and leaves less room for errors. He came to me for help, and I can't be the one whose solution fails in front of a kid; that would be embarrassing.

But I want him to understand both approaches so he isn't embarrassed in front of his friends. So we started with 100 toys.

Let's say it takes one millisecond to check one toy.

With a simple search, Jay may have to check all 100 toys before he finds the one he wants, so in the worst case this approach takes him 100 ms.

With a binary search, on the other hand, he has to check at most seven toys. How? Because log2(100) is almost 7. (But what is this math thing, right? Don't worry, it's not the main part of the article.)
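To make the 100 ms vs. 7 ms comparison concrete, here is a minimal Python sketch. It assumes the 100 toys are labelled 0–99 and kept in sorted order (binary search needs a sorted list); the step-counting helper names are made up for this illustration, not a standard API.

```python
def linear_search_steps(toys, target):
    """Check toys one by one; return how many checks it took."""
    for steps, toy in enumerate(toys, start=1):
        if toy == target:
            return steps
    return len(toys)

def binary_search_steps(toys, target):
    """Halve the sorted list on every check; return the check count."""
    lo, hi, steps = 0, len(toys) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if toys[mid] == target:
            return steps
        elif toys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

toys = list(range(100))                 # 100 toys, labelled 0-99, sorted
print(linear_search_steps(toys, 99))    # 100 checks -> 100 ms at 1 ms/check
print(binary_search_steps(toys, 99))    # 7 checks   -> 7 ms
```

Looking for the last toy is the worst case for simple search: every single toy gets checked, while binary search discards half the remaining toys on each check.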


So binary search is faster, right? It's actually far better than that as the number of toys increases. If there are 1 billion toys, binary search needs only about 30 checks (log2 1,000,000,000 is almost 30), versus 1 billion checks for simple search, which makes it roughly 33 million times faster.


One thing we learned from comparing linear search and binary search is how an algorithm's running time grows as the number of toys (list items) increases.

That's where Big O comes into play.

Big O tells you how fast your algorithm is.

Consider a case in which you have a list of size n. A simple search has to check each element, so it can take up to n operations; the run time in Big O notation is O(n).
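Here is a small Python sketch of that idea: a simple search that counts its own checks. The function name and the counting are illustrative only; the point is that the worst-case number of checks equals n, which is exactly what O(n) says.

```python
def simple_search(items, target):
    """Scan left to right; return (index, number of checks made)."""
    checks = 0
    for i, item in enumerate(items):
        checks += 1
        if item == target:
            return i, checks
    return -1, checks

# Worst case: the target is the last element, so all n items are checked.
for n in (10, 100, 1000):
    index, checks = simple_search(list(range(n)), n - 1)
    print(n, checks)    # checks == n, i.e. O(n)
```

Double the list and the worst-case check count doubles too; that proportional growth is what the notation captures.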

Where are the seconds?

There are none: Big O doesn't tell you an algorithm's speed in seconds. It lets you compare the number of operations algorithms perform.

Also see __Longest Common Substring__ and __Rabin Karp Algorithm__.

**How will you Compare Algorithms?**

Let's take an example. Suppose you have two functions that solve the same problem:

```
f(n) = 4n^2 + 2n + 4  and  g(n) = 4n + 4

For f(n) = 4n^2 + 2n + 4:
f(1) = 4 + 2 + 4  = 10
f(2) = 16 + 4 + 4 = 24
f(3) = 36 + 6 + 4 = 46
f(4) = 64 + 8 + 4 = 76
```

As you can see, n^2 is a quadratic term whose contribution grows as n increases. When n becomes very large, the n^2 term accounts for over 99% of the value of f(n).
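A quick Python check makes this dominance visible. The sketch below (the function `f` simply mirrors the formula above) prints what fraction of f(n) comes from the 4n^2 term alone:

```python
def f(n):
    # f(n) = 4n^2 + 2n + 4, as defined above
    return 4 * n**2 + 2 * n + 4

# Fraction of f(n) contributed by the quadratic term alone.
for n in (1, 10, 100, 1000):
    share = 4 * n**2 / f(n)
    print(n, f(n), f"{share:.2%}")
```

Already at n = 100 the quadratic term supplies more than 99% of the total, which is why the lower-order terms are dropped.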

So here, low-order terms can be ignored as they are relatively insignificant, as described above.

In this f(n), we can ignore 2n and 4 in comparison to n^2.

So,

```
4n^2 + 2n + 4 ——> n^2, so n^2 is the highest rate of growth.

For g(n) = 4n + 4:
g(1) = 4 + 4  = 8
g(2) = 8 + 4  = 12
g(3) = 12 + 4 = 16
g(4) = 16 + 4 = 20
```

As you can see, the contribution of the n term grows as we increase n. When n becomes very large, the 4n term accounts for almost all of the value of g(n).

Therefore, we can ignore the low-order terms, as they are relatively insignificant. In this g(n), the constant term 4 can be dropped, and the __constant multiplier__ 4 can also be ignored, since it has no effect on the growth rate, so 4n + 4 ——> n.

So here n is the highest rate of growth.
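The same dominance can be checked numerically for g(n). In this sketch (again, `g` just mirrors the formula above) the ratio g(n) / n settles toward the constant 4, which confirms the growth is linear once the constant multiplier and the +4 are dropped:

```python
def g(n):
    # g(n) = 4n + 4, as defined above
    return 4 * n + 4

# g(n) / n approaches the constant 4, so g grows linearly: O(n).
for n in (1, 10, 100, 1000):
    print(n, g(n), g(n) / n)
```

Since a constant factor never changes how the function scales with n, both 4n + 4 and n belong to the same class, O(n).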

Read More About, __Data Structures and Algorithms__