This article discusses basic sorting algorithms with quadratic, O(N²), time complexity or worse: bubble sort, selection sort and insertion sort. For each algorithm we cover the idea, the pseudocode, a C++ implementation, and the best-, worst- and average-case time complexity.
Bubble Sort
This algorithm sorts the array by continuously swapping adjacent elements. The larger of the two numbers gets swapped to the right. This way, we bubble the larger numbers to the right of the array. During the first pass of the inner loop, the largest number gets bubbled to the right, and we end up with n-1 unsorted elements. We repeat the above process for n-1 elements now. Every pass, we similarly reduce the size of the unsorted array and stop when the size reduces to 1, as a single element by itself is always sorted.
We can make the algorithm slightly better by ending execution early if a pass of the inner loop performs no swaps, which implies that the part we assumed unsorted was already sorted.
Pseudocode
func bubble_sort(a, n):
    for i from 0 to n-2:
        count_swaps = 0
        for j from 0 to n-i-2:
            if a[j] is greater than a[j+1]:
                swap(a[j], a[j+1])
                increase count_swaps by 1
        if count_swaps is 0:
            // the unsorted part is already sorted, so break out of the outer loop
            break
Code in C++
#include <bits/stdc++.h>
using namespace std;

void bubble_sort(int a[], int n){
    for(int i = 0; i < n - 1; i++){
        int count_swaps = 0;
        for(int j = 0; j < n - i - 1; j++){
            if(a[j] > a[j+1]){
                swap(a[j], a[j+1]);
                count_swaps += 1;
            }
        }
        // no swaps in this pass means the array is already sorted
        if(!count_swaps) break;
    }
}

void print_arr(int a[], int n){
    for(int i = 0; i < n; i++){
        cout << a[i] << " ";
    }
    cout << "\n";
}

int main(){
    int a[] = {5, 4, 3, 2, 1, 0};
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Unsorted Array\n";
    print_arr(a, n);
    bubble_sort(a, n);
    cout << "Sorted Array\n";
    print_arr(a, n);
}
The time complexity of the algorithm is O(N²) in the worst case (when the input is sorted in reverse order). Similarly, the average case is O(N²), since the average inversion count is N(N-1)/4. The best case is O(N) (when the array is already sorted), because the early-exit check stops the algorithm after a single pass.
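To see the best-case behaviour concretely, here is a sketch of a hypothetical instrumented variant, `bubble_sort_count` (a name introduced here for illustration), which returns the number of comparisons performed. On an already sorted array of 6 elements the early exit fires after one pass (5 comparisons), while a reverse-sorted array needs the full 5+4+3+2+1 = 15.

```cpp
#include <cassert>
#include <utility>

// Sketch: bubble sort instrumented to count comparisons, so the
// early-exit (best-case O(N)) behaviour is visible.
int bubble_sort_count(int a[], int n){
    int comparisons = 0;
    for(int i = 0; i < n - 1; i++){
        int count_swaps = 0;
        for(int j = 0; j < n - i - 1; j++){
            comparisons++;
            if(a[j] > a[j+1]){
                std::swap(a[j], a[j+1]);
                count_swaps++;
            }
        }
        if(!count_swaps) break;  // no swaps: already sorted, stop early
    }
    return comparisons;
}
```

For a sorted input the function returns n-1 comparisons; for a reverse-sorted input it returns n(n-1)/2.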
Selection Sort
The idea is to find the minimum element in the unsorted part of the array and swap it with that part's leftmost element. The size of the unsorted part then shrinks by one, and we repeat: find the minimum of the remaining unsorted part and swap it with its leftmost element. We continue until the unsorted part shrinks to a single element, which is sorted by itself.
For example,
a[] = [3, 4, 1, 2]
=> [3, 4, 1, 2], the minimum element from index 0 to n-1 is at index 2, i.e. a[2] = 1, so swap(a[0], a[2]). Now a[0] is in its correct position.
=> [1, 4, 3, 2], the minimum element from index 1 to n-1 is at index 3, so swap(a[1], a[3]). Now a[0…1] is in its correct position.
=> [1, 2, 3, 4], the minimum element from index 2 to n-1 is at index 2, so swap(a[2], a[2]). Now a[0…2] is in its correct position.
=> [1, 2, 3, 4], the array is sorted
Pseudocode
func selection_sort(a, n):
    for i from 0 to n-2:
        min = i
        for j from i+1 to n-1:
            if a[j] is less than a[min]:
                min = j
        swap(a[i], a[min])
Code in C++
#include <bits/stdc++.h>
using namespace std;

void selection_sort(int a[], int n){
    for(int i = 0; i < n - 1; i++){
        int mn = i;  // index of the minimum element in the unsorted part
        for(int j = i + 1; j < n; j++){
            if(a[mn] > a[j]){
                mn = j;
            }
        }
        swap(a[mn], a[i]);
    }
}

void print_arr(int a[], int n){
    for(int i = 0; i < n; i++){
        cout << a[i] << " ";
    }
    cout << "\n";
}

int main(){
    int a[] = {5, 4, 3, 2, 1, 0};
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Unsorted Array\n";
    print_arr(a, n);
    selection_sort(a, n);
    cout << "Sorted Array\n";
    print_arr(a, n);
}
The time complexity of the algorithm is O(N²) in the worst, average and best case alike, because the number of iterations of the inner loop is entirely independent of the input array.
Insertion Sort
In this algorithm, we sort the array as we go. A single element by itself is sorted, so we start by treating the subarray containing just the first element as sorted.
Assuming the array from index 0 to index i is sorted, we take the next element, a[i+1] (the unsorted element), and find the first position in the sorted part where it can be placed such that all elements to its left are smaller than or equal to it and all elements to its right are greater than or equal to it. After this step, the array from index 0 to i+1 is sorted. Using the single-element base case, an inductive argument shows that the whole array ends up sorted.
For example,
a[] = [2, 4, 3, 1]
=> [2, 4, 3, 1], the single element 2 is already sorted, so nothing needs to be done.
=> [2, 4, 3, 1], element 4 is larger than everything in the sorted part, so its position stays the same.
=> [2, 3, 4, 1], element 4 is shifted up one position and 3 is placed in the opening created.
=> [1, 2, 3, 4], 1 is the smallest element, so all the elements in the sorted part move up one step and 1 occupies the first position.
=> [1, 2, 3, 4], the array is sorted
Pseudocode
func insertion_sort(a, n):
    for i from 1 to n-1:
        unsorted_element = a[i]
        j = i-1
        while j is greater than -1 and a[j] > unsorted_element:
            assign a[j] to a[j+1]
            decrease j by 1
        a[j+1] = unsorted_element
Code in C++
#include <bits/stdc++.h>
using namespace std;

void insertion_sort(int a[], int n){
    for(int i = 1; i < n; i++){
        int unsorted_element = a[i];
        int j = i - 1;
        // shift every larger element one position up
        while(j >= 0 && a[j] > unsorted_element){
            a[j+1] = a[j];
            j -= 1;
        }
        a[j+1] = unsorted_element;
    }
}

void print_arr(int a[], int n){
    for(int i = 0; i < n; i++){
        cout << a[i] << " ";
    }
    cout << "\n";
}

int main(){
    int a[] = {5, 4, 3, 2, 1, 0};
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Unsorted Array\n";
    print_arr(a, n);
    insertion_sort(a, n);
    cout << "Sorted Array\n";
    print_arr(a, n);
}
The time complexity of the algorithm is O(N²) in the worst case (when the input is sorted in reverse order), similarly O(N²) in the average case, and O(N) in the best case (when the array is already sorted).
Take a look at this video for a better approach and proper code implementation of bubble sort, selection sort, and insertion sort with their time complexity and space complexity.
A Table Summary of Quadratic-Time Sorting Algorithms

| Sorting Algorithm | Worst Case | Average Case | Best Case | Short Description |
| --- | --- | --- | --- | --- |
| Bubble Sort | O(N²) | O(N²) | O(N) | Bubble larger elements to the right, reducing the size of the unsorted part until the whole array is sorted. |
| Selection Sort | O(N²) | O(N²) | O(N²) | Repeatedly place the minimum element of the unsorted part at its leftmost position, shrinking the unsorted part until the whole array is sorted. |
| Insertion Sort | O(N²) | O(N²) | O(N) | Grow the sorted part by shifting up every element greater than the new element and placing the new element in the opening created. Repeat until the whole array is sorted. |
FAQs
What are some algorithms that have better than quadratic time complexity? Some well-known algorithms that perform better than quadratic are Heap Sort, Merge Sort and Quick Sort (on average). These run in O(N log N) time, a significant improvement over the quadratic algorithms discussed here.
What are inversions in an array? Inversions are the count of all unique pairs (i, j) with i < j but a[i] > a[j]. This measures how far the array is from being sorted: an inversion count of 0 means a sorted array, and N(N-1)/2 means a reverse-sorted array.
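As a sketch, the definition above translates directly into a brute-force counter (the name `count_inversions` is introduced here for illustration; an O(N log N) merge-sort based count also exists):

```cpp
#include <cassert>

// Brute-force inversion count: counts pairs (i, j), i < j, with
// a[i] > a[j]. Runs in O(N^2) time.
int count_inversions(const int a[], int n){
    int inversions = 0;
    for(int i = 0; i < n; i++)
        for(int j = i + 1; j < n; j++)
            if(a[i] > a[j])
                inversions++;
    return inversions;
}
```

A sorted array yields 0, and a reverse-sorted array of length N yields N(N-1)/2.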
How can we relate inversion count to bubble sort or insertion sort? Every swap in bubble sort decreases the inversion count by 1; the number of times the inner-loop condition is true is exactly the inversion count. In insertion sort, the number of positions the unsorted element shifts is the amount by which the inversion count decreases. This also explains why the best case of both algorithms occurs when the array is already sorted, i.e. its inversion count is 0.
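This relationship can be checked with a hypothetical variant, `bubble_sort_swaps` (a name introduced here for illustration), that returns its total swap count; for any input, the result equals the array's inversion count, since each adjacent swap removes exactly one inversion. For example, [3, 4, 1, 2] has 4 inversions, and bubble sort performs exactly 4 swaps on it.

```cpp
#include <cassert>
#include <utility>

// Sketch: bubble sort that returns its total number of swaps.
// This equals the inversion count of the input, because each
// adjacent swap removes exactly one inversion.
int bubble_sort_swaps(int a[], int n){
    int swaps = 0;
    for(int i = 0; i < n - 1; i++)
        for(int j = 0; j < n - i - 1; j++)
            if(a[j] > a[j+1]){
                std::swap(a[j], a[j+1]);
                swaps++;
            }
    return swaps;
}
```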
What is short-circuiting of expressions in C++? This is a guaranteed rule of C++ expression evaluation (not merely a compiler optimization): in a && b, the right operand b is evaluated only if a is true, and in a || b, only if a is false. For example, let j = 3; then evaluating j < 2 && j > 0 never evaluates j > 0, because j < 2 is already false. We used short-circuiting in the insertion sort code so that a[j] > unsorted_element is not evaluated when j is negative, ensuring we never access a negative array index.
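The guard used in the insertion sort loop can be isolated into a tiny sketch (the helper `safe_greater` is a name introduced here for illustration):

```cpp
#include <cassert>

// Sketch: thanks to && short-circuiting, a[j] is never read when
// j is negative -- the same guard the insertion sort loop relies on.
bool safe_greater(const int a[], int j, int x){
    return j >= 0 && a[j] > x;  // a[j] is not evaluated if j < 0
}
```

Calling it with j = -1 is safe: the right-hand side of && is never reached, so no out-of-bounds access occurs.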
Among the three discussed algorithms, which usually performs best? Selection sort usually performs better than bubble sort, as it needs only O(N) swaps, whereas bubble sort performs O(N²) swaps on average. Insertion sort is the most efficient when the array is close to sorted. However, insertion sort can take O(N²) writes to the array, so selection sort is preferable when write operations are expensive.
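The O(N) write bound of selection sort can be made visible with a hypothetical instrumented variant, `selection_sort_writes` (a name introduced here for illustration), which counts array writes: each of the at most n-1 swaps costs two writes, regardless of the input order.

```cpp
#include <cassert>
#include <utility>

// Sketch: selection sort that counts writes to the array. At most
// n-1 swaps occur, each costing two writes, so writes are O(N).
int selection_sort_writes(int a[], int n){
    int writes = 0;
    for(int i = 0; i < n - 1; i++){
        int mn = i;
        for(int j = i + 1; j < n; j++)
            if(a[j] < a[mn])
                mn = j;
        if(mn != i){             // skip the write entirely if already in place
            std::swap(a[mn], a[i]);
            writes += 2;
        }
    }
    return writes;
}
```

Even on a reverse-sorted array, the write count stays bounded by 2(n-1), while insertion sort would perform O(N²) writes on the same input.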
What are stable sorting algorithms? A stable sorting algorithm keeps identical elements in the same relative order in the output as in the input. Bubble Sort and Insertion Sort are stable sorting algorithms, while Selection Sort (as implemented above, with long-range swaps) is not.
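Stability can be demonstrated with a sketch of insertion sort over (key, tag) pairs that compares keys only (the helper `insertion_sort_pairs` and the tag scheme are introduced here for illustration): because equal keys are never moved past each other, tags of equal keys keep their input order.

```cpp
#include <cassert>
#include <utility>

// Sketch: insertion sort on (key, tag) pairs, comparing keys only.
// The strict > comparison never moves an element past an equal key,
// so equal keys keep their original order -- the sort is stable.
void insertion_sort_pairs(std::pair<int,int> a[], int n){
    for(int i = 1; i < n; i++){
        std::pair<int,int> x = a[i];
        int j = i - 1;
        while(j >= 0 && a[j].first > x.first){
            a[j+1] = a[j];
            j--;
        }
        a[j+1] = x;
    }
}
```

Sorting {(2,0), (1,1), (2,2), (1,3)} by key gives keys 1, 1, 2, 2 with tags 1, 3, 0, 2: within each group of equal keys, the input order of the tags is preserved.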
Key Takeaways
This article covers bubble sort, selection sort and insertion sort. It also discusses inversions in an array, stable sorting, and the basics of time complexity analysis to better understand these algorithms' behaviour. Read about Time Complexity Analysis and Count Inversions to understand the article better.
Learn more about the C++ STL library from here if you have trouble understanding some part of the code. Visit the link here for carefully crafted courses on campus placement and interview preparation on Coding Ninjas.