What is Time Complexity?
Before diving into the textbook definition of time complexity, let's look at two C++ programs.
Example 1
C++
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello!\n";
    return 0;
}

Output
Hello!
Example 2
C++
#include <iostream>
using namespace std;

int main()
{
    int N;
    cout << "Enter the value of N: ";
    cin >> N;
    for (int i = 0; i < N; i++)
    {
        cout << "Hello!\n";
    }
    return 0;
}

Output (For input N=5)
Hello!
Hello!
Hello!
Hello!
Hello!
In the two codes above:
- Code 1 prints Hello! once and stops.
- In contrast, code 2 prints Hello! N times, where N is the number entered by the user, and then stops.
- When you run them on an online compiler, you may see a message like "The code executed in 0.2 seconds." Many of us think this measured time is the time complexity, but that is not the case.
Instead, we can infer from the example above that the running time of a program grows with the total number of times its statements get executed.
More precisely, time complexity deals with the amount of time required by the algorithm or program to execute to completion.
You may be wondering whether we use precise clock readings to express time complexity.
No. Time complexity is not measured in seconds; it is expressed as a mathematical function of the input size, using asymptotic notations like Big O, Big Θ and Big Ω.
If you are unfamiliar with asymptotic notation, we recommend reading about it first.
How to Find Time Complexity
The steps involved in finding the time complexity of an algorithm are:
- Find the number of statements with constant time complexity, O(1).
- Find the statements with higher orders of complexity, such as O(N), O(N²), O(log N), etc.
- Express the total time complexity as the sum of all these terms.
- Drop the non-dominant terms: anything you represent as O(N² + N) can be written as O(N²), keeping only the highest-order term.
- Ignore constant factors, if any; they do not affect the overall time complexity. For example, anything you might write as O(3N) is simply O(N).
Now that we know how to find an algorithm's time complexity, let us walk through a few examples to understand time complexity evaluation better.
Time Complexity Calculation Examples
C++
#include <iostream>
using namespace std;

int main()
{
    int a, b, c;
    cout << "Enter two numbers: ";
    cin >> a >> b;
    c = a + b;
    cout << a << " + " << b << " = " << c;
    return 0;
}

Output:
Enter two numbers: 2 3
2 + 3 = 5
- The code above contains only initialisation, input and output statements, an arithmetic operation, and a return statement. Each of these has a time complexity of O(1).
- So the time complexity of the entire code is O(1), also known as constant time.
C++
#include <iostream>
using namespace std;

int main()
{
    int N, sum = 0;
    cout << "Enter the length of the array: ";
    cin >> N;
    int arr[N];
    for (int i = 0; i < N; i++) // time complexity O(N)
    {
        cout << "Enter the element in position " << i << ": ";
        cin >> arr[i];
        sum += arr[i];
    }
    cout << "The sum of the elements is " << sum;
    return 0;
}

Output:
Enter the length of the array: 5
Enter the element in position 0: 1
Enter the element in position 1: 2
Enter the element in position 2: 3
Enter the element in position 3: 4
Enter the element in position 4: 5
The sum of the elements is 15
- In the code above, there is a for loop in addition to the initialisation, input and output statements, an arithmetic operation and a return statement.
- Apart from the loop, each of the remaining lines has a time complexity of O(1).
- The for loop runs N times, so its time complexity is O(N).
- Thus, the entire code works out to O(N + 9), but the constant is neglected for large values of N, so the time complexity is O(N), also known as linear time.
C++
#include <iostream>
using namespace std;

int main()
{
    int N, sum = 0;
    cout << "Enter the dimensions of the square matrix: ";
    cin >> N;
    int arr[N][N];
    for (int i = 0; i < N; i++)     // outer loop
    {
        for (int j = 0; j < N; j++) // inner loop
        {
            cout << "Enter the element in position " << i << "," << j << ": ";
            cin >> arr[i][j];
            sum += arr[i][j];
        }
    }
    cout << "The sum of the elements is " << sum;
    return 0;
}

Output:
Enter the dimensions of the square matrix: 2
Enter the element in position 0,0: 1
Enter the element in position 0,1: 2
Enter the element in position 1,0: 3
Enter the element in position 1,1: 4
The sum of the elements is 10
- We already discussed the time complexity of for loops and of the other statements in the two examples above.
- Here, however, the loops are nested, which means the inner loop runs N times for every iteration of the outer loop.
- Therefore the loop body runs N × N times, so its time complexity is O(N²). The entire code thus works out to O(N² + 9), where 9 is the sum of the O(1) complexities of the remaining statements.
- As we already know, for large values of N the constant is neglected, and hence the time complexity is O(N²).
- Selection sort and insertion sort are classic examples of O(N²) algorithms; a minimal selection sort sketch follows below.
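Here is a minimal selection sort sketch (our own illustration, not part of the original examples); the nested loops are exactly what give it O(N²) comparisons.
C++
#include <iostream>
using namespace std;

// Minimal selection sort sketch: the nested loops give O(N^2) comparisons.
void selectionSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)     // outer loop: N - 1 passes
    {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) // inner loop: scans the unsorted part
        {
            if (arr[j] < arr[minIndex])
                minIndex = j;
        }
        int temp = arr[i];              // place the smallest remaining element
        arr[i] = arr[minIndex];
        arr[minIndex] = temp;
    }
}

int main()
{
    int arr[] = {5, 2, 4, 1, 3};
    selectionSort(arr, 5);
    for (int i = 0; i < 5; i++)
        cout << arr[i] << " ";          // prints: 1 2 3 4 5
    return 0;
}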
C++
#include <iostream>
using namespace std;

int main()
{
    int N;
    cout << "Enter a number: ";
    cin >> N;
    while (N > 0)
    {
        cout << N << " ";
        N = N / 2; // N is halved in each iteration
    }
    return 0;
}

Output:
Enter a number: 8
8 4 2 1
- In the code above, there is a while loop in addition to the initialisation, input and output statements, an arithmetic operation and a return statement.
- Let us examine mathematically how this while loop behaves.
- Suppose N = 8; then we get the series 8 4 2 1.
- Likewise, if we try a different input like 10, we get 10 5 2 1.
- We can see that in each iteration, the number is halved, until the loop condition fails.
- Algorithms that repeatedly break a number or a set of numbers into halves like this fall under logarithmic complexity, O(log N).
If you are still unsure, try larger inputs and count the iterations; you will find the loop runs about log₂(N) times.
Also, you can go through the binary search algorithm, which takes O(log N) time, to understand logarithmic complexity in depth. The list does not end here: we also have log-linear time O(N log N), cubic time O(N³), exponential time O(2ⁿ), factorial time O(N!) and many more.
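To make the logarithmic pattern concrete, here is a minimal binary search sketch, assuming the array is already sorted in ascending order; each comparison halves the search space, so the loop runs O(log N) times.
C++
#include <iostream>
using namespace std;

// Minimal binary search sketch: assumes arr is sorted in ascending order.
// Each iteration halves the search space, so the loop runs O(log N) times.
int binarySearch(int arr[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // midpoint (written to avoid overflow)
        if (arr[mid] == key)
            return mid;                   // found
        else if (arr[mid] < key)
            low = mid + 1;                // discard the left half
        else
            high = mid - 1;               // discard the right half
    }
    return -1;                            // not found
}

int main()
{
    int arr[] = {1, 3, 5, 7, 9, 11};
    cout << binarySearch(arr, 6, 7);      // prints: 3
    return 0;
}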
Key Points About Time Complexity
Understanding Big O Notation
Big O notation is a mathematical notation used to describe the efficiency of an algorithm in terms of time and space. It helps in understanding how an algorithm scales as the input size increases. Big O focuses on the worst-case scenario, ensuring performance remains predictable.
For example, searching for an element in an unsorted array using linear search has a time complexity of O(n) because, in the worst case, we may need to check every element. However, using Binary Search (O(log n)) on a sorted array, we significantly reduce the number of comparisons, making the algorithm more efficient for large datasets.
Types of Time Complexity
- Constant Time (O(1))
An algorithm runs in constant time if its execution time does not depend on the input size.
Example: Accessing an element in an array using an index (arr[i]).
- Logarithmic Time (O(log n))
An algorithm runs in logarithmic time if the number of operations grows logarithmically as the input size increases.
Example: Binary Search, where the search space is halved in each step.
- Linear Time (O(n))
An algorithm runs in linear time if the execution time increases proportionally with input size.
Example: Iterating through an array to find the maximum value.
- Quadratic Time (O(n²))
An algorithm runs in quadratic time if execution time grows quadratically as input size increases.
Example: Bubble Sort, where every element is compared with every other element.
- Exponential Time (O(2ⁿ))
An algorithm runs in exponential time if its runtime doubles with each additional input element.
Example: Solving the Traveling Salesman Problem using brute force recursion.
The sketch after this list illustrates the constant, linear, and quadratic classes side by side.
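This is a minimal sketch of our own; the function names are illustrative labels, not standard library routines. Counting the loop iterations in each function against the input size n reproduces the growth rates listed above.
C++
#include <iostream>
using namespace std;

int constantTime(int arr[], int i)      // O(1): a single index lookup
{
    return arr[i];
}

int linearTime(int arr[], int n)        // O(n): one pass to find the maximum
{
    int maxVal = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > maxVal)
            maxVal = arr[i];
    return maxVal;
}

int quadraticTime(int arr[], int n)     // O(n^2): visit every pair of elements
{
    int pairCount = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            pairCount++;                // runs n*(n-1)/2 times in total
    return pairCount;
}

int main()
{
    int arr[] = {4, 1, 3, 2};
    cout << constantTime(arr, 2) << " " // prints: 3
         << linearTime(arr, 4) << " "   // prints: 4
         << quadraticTime(arr, 4);      // prints: 6
    return 0;
}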
Best, Average, and Worst-Case Complexity
- Best Case
The minimum time required for an algorithm to run, under the most favourable input.
Example: Searching for an element that appears at the first index of an array (O(1) for Linear Search).
- Average Case
The expected time complexity when considering all possible inputs.
Example: Searching for an element in an unsorted array, which takes about n/2 comparisons on average (still O(n) for Linear Search).
- Worst Case
The maximum time required to execute an algorithm. This is the most commonly analysed case, as it guarantees an upper bound on performance.
Example: Searching for a non-existent element in an array using Linear Search (O(n)); see the sketch below.
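Here is a minimal linear search sketch that makes the cases easy to see: the loop exits after one comparison in the best case and only after n comparisons in the worst case.
C++
#include <iostream>
using namespace std;

// Minimal linear search sketch: returns the index of key, or -1 if absent.
// Best case: key is at index 0 (1 comparison). Worst case: key is missing (n comparisons).
int linearSearch(int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main()
{
    int arr[] = {7, 2, 9, 4};
    cout << linearSearch(arr, 4, 7) << " "; // best case: prints 0
    cout << linearSearch(arr, 4, 5);        // worst case: prints -1
    return 0;
}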
Real-World Impact of Time Complexity
Understanding time complexity is crucial for designing efficient algorithms, especially for large datasets.
- Efficient Sorting Algorithms: QuickSort (O(n log n)) performs much better than Bubble Sort (O(n²)) for large datasets.
- Scalability Issues: An inefficient algorithm like Brute Force Password Cracking (O(2ⁿ)) becomes impractical for secure systems with large inputs.
- Machine Learning & Big Data: Algorithms handling large-scale data (e.g., Google Search indexing) must use efficient methods like hashing (O(1)) or Binary Search (O(log n)) for fast retrieval.
Selecting the right algorithm based on time complexity ensures optimal performance, preventing unnecessary computational overhead.
What is Space Complexity?
Space complexity deals with the amount of memory required in a system for a particular algorithm. Now, before we dive deeper into the concept, we must know what auxiliary space is.
Auxiliary space is the extra space used by an algorithm beyond the space taken by the input values; together, the two give us the space complexity.
Space complexity = Auxiliary Space + Space taken by input values
Note: Space complexity isn't simply the sum of the auxiliary space and the space taken by the input values at one instant; it is the maximum value this sum reaches while the algorithm runs.
To understand space complexity better, let us consider the simple example of printing the Fibonacci series with and without recursion.
1. Fibonacci Series Without Recursion
C++
#include <iostream>
using namespace std;

int main()
{
    int a = 0, b = 1, c, i, num;
    cout << "Enter the number of terms of series: ";
    cin >> num;
    cout << a << " " << b << " ";
    for (i = 2; i < num; i++)
    {
        c = a + b;        // next term is the sum of the previous two
        cout << c << " ";
        a = b;
        b = c;
    }
    return 0;
}

Output:
Enter the number of terms of series: 5
0 1 1 2 3
2. Fibonacci Series With Recursion
C++
#include <iostream>
using namespace std;

int Fibonacci(int n)
{
    if ((n == 1) || (n == 0))
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}

int main()
{
    int num, i = 0;
    cout << "Enter the number of terms of series: ";
    cin >> num;
    while (i < num)
    {
        cout << Fibonacci(i) << " ";
        i++;
    }
    return 0;
}

Output:
Enter the number of terms of series: 5
0 1 1 2 3
Although the outputs are identical in the two code snippets shown above, the recursive version consumes more memory than the iterative one.
The reason for the higher memory usage is the recursion stack: every recursive call adds a new frame to the call stack. A small sketch below makes the stack growth visible.
Wondering how this recursion works?
Don't worry; Coding Ninjas has plenty of blogs, video lectures and practice problems to help you out.
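To see the recursion stack at work, here is a small instrumented sketch; the depth counters are our own additions for illustration, not part of the original program. For Fibonacci(n), the maximum depth reached is about n, which is why the recursive version needs O(n) extra stack space.
C++
#include <iostream>
using namespace std;

int currentDepth = 0, maxDepth = 0; // illustrative counters, not part of the original code

int Fibonacci(int n)
{
    currentDepth++;                 // entering a new stack frame
    if (currentDepth > maxDepth)
        maxDepth = currentDepth;
    int result = (n <= 1) ? n : Fibonacci(n - 1) + Fibonacci(n - 2);
    currentDepth--;                 // leaving the frame
    return result;
}

int main()
{
    cout << "Fibonacci(10) = " << Fibonacci(10) << "\n";      // prints: 55
    cout << "Maximum recursion depth: " << maxDepth << "\n";  // prints: 10, i.e. about n frames
    return 0;
}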
How to Find Space Complexity
The following steps describe how to find the space complexity of an algorithm:
- Find the number of variables of each data type initialised in the algorithm.
- Find the memory required by each data type (this can be done using the sizeof() operator; see the sketch after this list).
- Multiply the memory required by one variable of a data type by the number of variables of that type.
- Add up the memory required across all the data types.
- Ignore the constants in the sum, as explained for time complexity, to obtain the space complexity in Big O notation.
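As a quick check, here is a minimal sketch using the sizeof() operator to inspect type sizes; the exact values printed depend on your compiler and platform.
C++
#include <iostream>
using namespace std;

int main()
{
    // Sizes are platform-dependent; on a typical 64-bit system int is 4 bytes.
    cout << "int:    " << sizeof(int)    << " bytes\n";
    cout << "char:   " << sizeof(char)   << " byte\n";
    cout << "double: " << sizeof(double) << " bytes\n";

    int arr[10];
    cout << "int[10]: " << sizeof(arr)   << " bytes\n"; // 10 x sizeof(int)
    return 0;
}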
Now that we know how to find the space complexity, let us see a few examples to understand it better.
Space Complexity Calculation Examples
C++
#include <iostream>
using namespace std;

int main()
{
    int a, b, c;
    cout << "Enter the value of a and b: ";
    cin >> a >> b;
    c = a + b;
    cout << "The sum is " << c;
    return 0;
}

- In the code shown above, three integer variables are used. The size of an integer variable is usually 2 or 4 bytes, depending on the compiler.
- If we assume it to be 4 bytes in our case, the total memory required for the code is 4 × 3 = 12 bytes, which is a constant. So the space complexity is O(1).
C++
#include <iostream>
using namespace std;

int main()
{
    int n, sum = 0;
    cout << "Enter the number of elements: ";
    cin >> n;
    int arr[n];
    for (int i = 0; i < n; i++)
    {
        cout << "Enter the element in position " << i << ": ";
        cin >> arr[i];
        sum = sum + arr[i];
    }
    cout << sum;
    return 0;
}

- Here, the array holds n elements, and there are three more integer variables (n, sum, and i).
- So the total memory required is (4 × n) + (4 × 3) = 4n + 12 bytes. In asymptotic notation, the space complexity is O(n).
As practice, try more complex examples of space complexity, like showing that merge sort uses O(n) extra space.
How Significant Are Space and Time Complexity?
A valid algorithm executes in a finite amount of time. The time an algorithm requires to solve a particular problem is referred to as its time complexity, and it is a very valuable statistic in algorithm analysis. To estimate it, we take into account the cost of each fundamental instruction and the number of times that instruction is executed.
The amount of memory space an algorithm requires during execution is referred to as its space complexity. It accounts not only for the space needed by the algorithm's variables but also for the space taken by the input values. Constant space complexity often goes hand in hand with faster execution, because the algorithm does not need to allocate or deallocate memory dynamically.
Importance of Time and Space Complexity in Data Structures
Developers of real-world programs are constrained by the physical memory of the platforms they run on. This is where space complexity comes into play: we never want to run a function or process that consumes more memory than the system has available. On the other hand, we don't want our operations to take so long that they bog down and slow down our apps.
Also see: Application of Graph in Data Structure
Frequently Asked Questions
How does space complexity affect time complexity?
Higher memory requirements can make memory allocation slower, which affects the running time. Also, with a high volume of input data, the algorithm tends to take more time just to read the input, increasing the time complexity as well.
Can time and space complexity be different?
Yes, usually, time and space complexity tend to be different. For example, Linear Search can have a worst-case time complexity of O(N), whereas the space complexity remains O(1). This also varies depending on the input and its variable size.
What is more important, time complexity or space complexity?
Both are equally important; time and space both need to be optimized as much as possible, but this can also vary depending on the problem. Some problems might need to be solved faster, no matter how much space is required, and vice versa.
Can space complexity be larger than time complexity?
Yes, this is definitely possible. In the case of an algorithm that has to write a large amount of output while storing variables and data structures, the space complexity will be higher than the time complexity.
Where is time complexity used?
Time complexity is used in algorithm analysis to measure efficiency, helping developers compare and optimize algorithms. It is essential in data structures, competitive programming, software development, and system design to ensure optimal performance and scalability of applications.
Conclusion
Here, we talked about how time complexity deals with the amount of time required to execute a program, while space complexity deals with the memory requirements. We learned to compute both Time Complexity and Space Complexity and also how important it is to keep time and space complexity in mind to write good code.
Recommended Articles:
Kadane's Algorithm
Design and Analysis of Algorithms