Table of contents
1. Introduction
  1.1. Quick Sort
    1.1.1. Best case
    1.1.2. Worst Case
    1.1.3. Average case
  1.2. Merge Sort
  1.3. Heap Sort
  1.4. Radix Sort
  1.5. Bucket Sort
    1.5.1. Best Case
    1.5.2. Worst Case
    1.5.3. Average Case
2. FAQs
3. Key Takeaways
Last Updated: Mar 27, 2024

Time & Space Complexity of Sorting Algorithms - 2 (N*logN)


Introduction

A sorting algorithm is a set of instructions that rearranges the elements of a given array according to a comparison operator, which defines the relative order of any two elements.

Time complexity is the time an algorithm takes to run as a function of the length of the input array, and space complexity is the extra space an algorithm requires as a function of the input array's size. Time and space complexity are two crucial aspects of any algorithm, as together they determine its efficiency.

Further in this article, we will discuss the time and space complexity of different sorting algorithms having O(n log n) complexity, along with examples of every algorithm's best case, worst case, and average case. 

We will see how some algorithms perform better than others in particular instances and vice-versa.

Note: In this article, 'n' is used as the size of the input array wherever applicable.

Recommended topic: Kth Largest Element in an Array

Quick Sort

Quicksort is a divide and conquer algorithm. We pick a pivot element and partition the array around it, i.e., we place the pivot at its correct position in the sorted array, with smaller elements on its left and greater elements on its right. There are different methods to choose the pivot element, and it is always preferred to be done in constant time. The Quick Sort algorithm can be summarised as follows:

  1. Determine the pivot element.
  2. Partition the given array around the pivot element.
  3. Repeat steps 1 and 2 for the subarrays on the left and right sides of the pivot element.

For the partition, we can maintain a pointer to the boundary of the smaller elements. While traversing the array, whenever we find an element smaller than the pivot, we swap it with the element at the pointer position and advance the pointer by one. We can see that the partition operation takes linear time. Now, every time we place a pivot element at its correct position, we divide the array into two parts. If the array is divided into two equal halves in each call, the function call stack has a height of O(log n), and each level of recursion does O(n) work in total (due to the partition operation). So the overall time complexity of the algorithm is O(n log n). The recurrence relation of this algorithm is: T(n) = 2T(n/2) + 𝛳(n).
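As a rough illustration, here is a minimal Python sketch of the partition scheme just described, taking the first element as the pivot (the helper names `partition` and `quick_sort` are illustrative, not from the original article):

```python
def partition(arr, low, high):
    # Take the first element as the pivot and keep a boundary pointer i:
    # arr[low+1 .. i] always holds the elements found to be smaller than the pivot.
    pivot = arr[low]
    i = low
    for j in range(low + 1, high + 1):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot at its correct position in the sorted array.
    arr[low], arr[i] = arr[i], arr[low]
    return i

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)   # O(n) partition step
        quick_sort(arr, low, p - 1)     # recurse on the left part
        quick_sort(arr, p + 1, high)    # recurse on the right part

nums = [4, 2, 1, 3, 5, 6]
quick_sort(nums, 0, len(nums) - 1)
print(nums)  # [1, 2, 3, 4, 5, 6]
```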

We will now discuss the cases where quicksort performs the best and the worst, as well as its performance on average. We will take the pivot element to be the first element of the array.

Best case

Quicksort performs best when the pivot element is the median element, i.e., it divides the array into two equal parts. In this case, the height of the function call stack is log(n), and each level of recursion takes O(n) time in total, so the overall time complexity will be O(n log n). We need O(log n) space for the function call stack. The recurrence relation for the best case can be written as: T(n) = 2T(n/2) + 𝛳(n)

Example: 4, 2, 1, 3, 5, 6

Pivot element: 4

Left subarray: 2, 1, 3

Right subarray: 5, 6

We can see that the sizes of the left and right subarrays differ only by one.

Worst Case

Quicksort performs worst when the pivot element divides the array into two parts where one part has size (n-1) and the other has size zero. This leads to a function call stack of size O(n), and each call takes O(n) time, so the time complexity will be O(n²), and we need O(n) space to store the function call stack. The recurrence relation for the worst case can be written as: T(n) = T(n-1) + 𝛳(n)

Example: 1, 2, 3, 4, 5, 6

Pivot element: 1

Left subarray: NULL

Right Subarray: 2, 3, 4, 5, 6

We can see that the left subarray has size zero, and the right subarray has size (n-1).

Average case

On a random array, quicksort works in O(n log n) time and O(log n) space on average.

Consider an average case where (n/10) of the elements are placed in one group and (9n/10) in the other on every partition.

The recurrence relation of this case can be written as:

T(n) = T(n/10) + T(9n/10) + 𝛳(n)

Even with this uneven split, the larger part shrinks by a constant factor (9/10) at every level, so the recursion depth is O(log n) and each level does O(n) work, keeping the total at O(n log n).

Merge Sort

Merge sort is a divide and conquer algorithm like quicksort. It divides the array into two equal-sized subarrays and calls itself recursively for both. The two sorted subarrays are then merged into one array. The Merge Sort algorithm can be summarised as follows:

  1. Find the middle element to divide the array into two halves.
  2. Call merge sort for the left half.
  3. Call merge sort for the right half.
  4. Merge the sorted left and right halves.

The left and right halves will already be sorted before step 4, so we can merge them in linear time by maintaining one pointer in each subarray and comparing their values. The function call stack of merge sort has size O(log n) in every case, as the array is always divided into two equal-sized subarrays, and the merge operation takes O(n) time. So the overall time complexity will be O(n log n), and we need O(n) extra space for the merge operation. We can write its recurrence relation as

T(n) = 2T(n/2) + 𝛳(n)
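A minimal Python sketch of this scheme (an in-place top-down variant, assumed for illustration rather than taken from the article):

```python
def merge_sort(arr):
    # Base case: an array of zero or one element is already sorted.
    if len(arr) <= 1:
        return
    mid = len(arr) // 2
    left, right = arr[:mid], arr[mid:]   # O(n) extra space for the two halves
    merge_sort(left)
    merge_sort(right)
    # Merge the two sorted halves in linear time with two pointers.
    i = j = k = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # Copy whatever remains in either half.
    while i < len(left):
        arr[k] = left[i]; i += 1; k += 1
    while j < len(right):
        arr[k] = right[j]; j += 1; k += 1
```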

Heap Sort

Heap Sort uses the concept of Binary Heap for sorting a given array. A Binary Heap is a complete binary tree that can be represented as an array. Heap Sort consists of the following steps:

  1. Build a max heap from the given array.
  2. Swap the root of the tree with the last element and reduce the size of the tree by one.
  3. Heapify the root of the tree.
  4. Repeat steps 2 and 3 while the size of the tree is greater than one.

We know that building a max heap takes O(n) time, and the heapify function takes O(log n) time. We call the heapify function every time we move the largest element from the top to the end, i.e., about n times, so the overall time complexity will be O(n log n). Heap Sort needs no extra space because the heap is built within the array itself, so its space complexity is O(1).
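The steps above can be sketched in Python as follows (a minimal illustration, with `heapify` and `heap_sort` as assumed helper names):

```python
def heapify(arr, n, i):
    # Sift arr[i] down so the subtree rooted at i becomes a max heap; O(log n).
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Step 1: build a max heap in O(n) by heapifying internal nodes bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Steps 2-4: move the root (maximum) to the end, shrink the heap, re-heapify.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
```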

You can refer to this article for more information on max-heap and heapify functions.

Radix Sort

In Radix Sort, the array is sorted digit by digit, from the least significant digit to the most significant digit. We can use any stable sorting algorithm to sort by each digit place; it must be stable so that elements with equal digits keep their relative order from the previous pass. Counting sort is a good choice here: it is stable, and since digits fall in the small range 0-9, each pass runs in O(n) time.

Overall, radix sort takes O(nk) time in all cases, because we process all k digit places of all n elements. To sort each radix (i.e., by a particular digit place), we use counting sort, which creates an output array of size n plus a count array covering the digit range, so the space complexity is O(n + b), where b is the base (10 for decimal digits).

Note: k is the maximum number of digits in the elements of an array.

For example- [ 1, 2, 33, 199 ]

Here k=3 because the maximum number of digits is three, in the element '199'.

Let’s observe the working of Radix Sort:

Given unsorted array- 20, 25, 90, 2, 102, 900

We can write this array as: 020, 025, 090, 002, 102, 900

020, 090, 900, 002, 102, 025 -> sorted by the ones digit, O(n) time

900, 002, 102, 020, 025, 090 -> sorted by the tens digit, O(n) time

002, 020, 025, 090, 102, 900 -> sorted by the hundreds digit, O(n) time

We performed an O(n) pass three times, where 3 is the value of k here. So the overall time complexity is O(nk).
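The walk-through above can be reproduced with a short Python sketch (assuming a non-empty array of non-negative integers; function names are illustrative):

```python
def counting_sort_by_digit(arr, exp):
    # Stable counting sort of arr on the digit at place value exp (1, 10, 100, ...).
    output = [0] * len(arr)
    count = [0] * 10                     # digits have the small range 0-9
    for num in arr:
        count[(num // exp) % 10] += 1
    for d in range(1, 10):               # prefix sums give each digit's final region
        count[d] += count[d - 1]
    for num in reversed(arr):            # traverse in reverse to keep the sort stable
        digit = (num // exp) % 10
        count[digit] -= 1
        output[count[digit]] = num
    arr[:] = output

def radix_sort(arr):
    # One O(n) counting-sort pass per digit place; k passes in total.
    exp = 1
    while max(arr) // exp > 0:
        counting_sort_by_digit(arr, exp)
        exp *= 10

nums = [20, 25, 90, 2, 102, 900]
radix_sort(nums)
print(nums)  # [2, 20, 25, 90, 102, 900]
```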

Bucket Sort

In the Bucket Sort algorithm, the elements are divided into n buckets covering consecutive ranges of values, then the individual buckets are sorted, and finally the sorted buckets are concatenated. In the ideal case, each bucket holds a single element, so it is already sorted. If a bucket contains more than one element, we can either recursively call the same bucketSort function on it or use any other sorting algorithm. On average, sorting the separate buckets takes O(n) time and merging them takes O(k) time, where k is the number of buckets, so the overall time complexity is O(n+k); when the number of buckets equals n, this becomes O(n). The space complexity of this algorithm is O(n+k) for storing the elements in the buckets. A sketch follows below; after that, let's see the cases where this algorithm performs the best and the worst.
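Here is a minimal Python sketch, assuming equal-width buckets scaled to the data range (the bucketing rule is one common choice, not the only one):

```python
def bucket_sort(arr):
    # Distribute the elements into n buckets of equal-width consecutive ranges.
    n = len(arr)
    if n == 0:
        return arr
    lo, hi = min(arr), max(arr)
    width = (hi - lo) / n or 1           # avoid a zero width when all values are equal
    buckets = [[] for _ in range(n)]
    for x in arr:
        idx = min(int((x - lo) / width), n - 1)  # clamp the maximum into the last bucket
        buckets[idx].append(x)
    # Sort each bucket individually (any sorting algorithm works) and concatenate.
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))
    return result

print(bucket_sort([45, 46, 47, 48, 49]))  # [45, 46, 47, 48, 49]
```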

Best Case

The best case occurs when the given elements are distributed uniformly across the buckets and the elements in each bucket are already sorted. Since it takes linear time to determine that a bucket is already sorted, we get a time complexity of O(n+k), where O(k) is the cost of handling the buckets and O(n) is the cost of distributing and collecting the elements.

Example: arr[ ]=[ 45, 46, 47, 48, 49 ]

We have 5 buckets B1, B2, B3, B4, B5, each spanning one slice of the data range 45-49:

B1: 45

B2: 46

B3: 47

B4: 48

B5: 49

Here the elements are distributed uniformly over all the buckets.

Worst Case

When the elements lie in a close range, they are likely to accumulate in the same bucket. In this case, the time complexity depends on the sorting algorithm used for the individual buckets; if insertion sort is used, the worst case can take O(n²) time.

Example: arr[ ]=[ 45, 46, 47, 48, 49 ]

We have five buckets B1, B2, B3, B4, B5, this time spanning a wider fixed range (say 0-99, so each bucket covers a width of 20):

B1: NULL

B2: NULL

B3: 45, 46, 47, 48, 49

B4: NULL

B5: NULL

Here all the elements lie in the same bucket.

Average Case

When the elements are distributed randomly over a range, even if they are not perfectly uniform, bucket sort takes linear time on average.

Summarising the time complexities of every algorithm:

Algorithm        Best Case      Average Case   Worst Case
Quick Sort       O(n log n)     O(n log n)     O(n²)
Bubble Sort      O(n)           O(n²)          O(n²)
Merge Sort       O(n log n)     O(n log n)     O(n log n)
Insertion Sort   O(n)           O(n²)          O(n²)
Selection Sort   O(n²)          O(n²)          O(n²)
Heap Sort        O(n log n)     O(n log n)     O(n log n)
Radix Sort       O(nk)          O(nk)          O(nk)
Bucket Sort      O(n+k)         O(n+k)         O(n²)

Read More - Time Complexity of Sorting Algorithms and Euclid GCD Algorithm

FAQs

  1. Which sorting algorithm is used for linked lists?
    Merge sort is generally used to sort linked lists, as it does not require random access to the elements.
     
  2. Which sorting algorithm is used when data is uniformly distributed over a range?
    Bucket Sort as it divides the range into different buckets.
     
  3. What is the time complexity of the heapify function?
    O(log n)
     
  4. What is the time complexity to build a heap?
    O(n). It is often mistaken for O(n log n).
     
  5. Which algorithm is used in the sort() function in C++ STL?
    IntroSort, a hybrid algorithm combining quick sort, heap sort, and insertion sort, is used in the sort() function.

Key Takeaways

In this article, we have learned about the time and space complexity of different sorting algorithms.

To learn more about sorting algorithms like the bubble sort, insertion sort, and selection sort, check out - Time & Space Complexity of Sorting Algorithms (Quadratic time algorithms).

I hope you have learned something. Don't stop here; check out some sorting algorithm problems to further strengthen your concepts.


Happy Learning Ninja :) 
