Partition Algorithm in Quick Sort
The partition algorithm is what makes quick sort effective. Once a pivot is chosen, the partition algorithm rearranges the elements of the array so that all elements less than the pivot come before it, and all elements greater than the pivot come after it. This step is crucial because it ensures that the pivot is in its final position in the sorted array.
Here's a simple breakdown of how the partitioning works:
- Start with two pointers: one at the beginning of the array segment (excluding the pivot) and one at the end.
- The pointer at the beginning moves right until it finds an element greater than the pivot; similarly, the pointer at the end moves left until it finds an element less than the pivot.
- If the pointers have not yet crossed each other, swap the elements at these pointers, and then continue moving the pointers.
- Once the pointers cross, swap the pivot with the last element found by the left pointer. This puts the pivot in its correct position.
This method effectively splits the array into two parts, each of which can then be sorted independently in a similar manner. The process is repeated recursively, gradually leading to a fully sorted array.
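The two-pointer scheme described above can be sketched in Python as follows. This is a minimal illustration, assuming the last element of the segment is used as the pivot; the function and variable names are my own, not from a standard library.

```python
def partition(arr, low, high):
    """Two-pointer partition of arr[low..high], using arr[high] as the pivot."""
    pivot = arr[high]
    i = low        # scans right, looking for elements greater than the pivot
    j = high - 1   # scans left, looking for elements smaller than the pivot
    while True:
        while i <= j and arr[i] <= pivot:
            i += 1
        while i <= j and arr[j] >= pivot:
            j -= 1
        if i >= j:                             # pointers have crossed (or met)
            break
        arr[i], arr[j] = arr[j], arr[i]        # swap the out-of-place pair
    arr[i], arr[high] = arr[high], arr[i]      # move the pivot to its final slot
    return i                                   # the pivot's final index
```

After the call, every element left of the returned index is at most the pivot and every element right of it is at least the pivot, so the two sides can be sorted independently.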
Illustration of Quick Sort
To better understand how quick sort works, let’s look at an example step-by-step. Imagine we have an array of numbers that we want to sort in ascending order: [10, 80, 30, 90, 40, 50, 70].
- Choosing a Pivot: We select the last element of the array as the pivot, which is 70.
- Partitioning the Array: Using the partition algorithm:
  - The first pointer starts at the beginning, and the second pointer starts just before the pivot.
  - Move the first pointer right until an element larger than the pivot is found (80).
  - Move the second pointer left until an element smaller than the pivot is found (50).
  - Swap these elements. The array now looks like: [10, 50, 30, 90, 40, 80, 70].
  - Continue this process: the first pointer stops at 90 and the second at 40; swapping them gives [10, 50, 30, 40, 90, 80, 70]. The pointers have now crossed.
  - Swap the pivot with the first element greater than it (90). The array becomes: [10, 50, 30, 40, 70, 80, 90], with the pivot 70 in its final position.
- Recursive Sorting: Now apply quick sort to the sub-arrays on either side of the pivot's final position:
  - Left sub-array: [10, 50, 30, 40]
  - Right sub-array: [80, 90]
By recursively applying these steps, each part of the array is sorted until the entire array is ordered.
Code Implementation of Quick Sort
Here’s a simple Python code for quick sort:
Python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr.pop()  # Taking the last element as pivot (note: pop() mutates arr)
        greater = []
        lesser = []
        for x in arr:
            if x > pivot:
                greater.append(x)
            else:
                lesser.append(x)
        return quick_sort(lesser) + [pivot] + quick_sort(greater)

# Example usage
example_array = [10, 80, 30, 90, 40, 50, 70]
sorted_array = quick_sort(example_array)
print("Sorted Array:", sorted_array)

Output
Sorted Array: [10, 30, 40, 50, 70, 80, 90]
Explanation of the Code
- Function Definition: quick_sort(arr) defines the sorting function, which takes an array arr as input.
- Base Case: If the array has one or no elements, it's already sorted, so we return it.
- Pivot Selection: The last element of the array is chosen as the pivot and removed from the array (via arr.pop()) to simplify partitioning.
- Partitioning: We iterate through the remaining elements, placing those less than or equal to the pivot into lesser and those greater into greater.
- Recursive Calls: The function calls itself on the lesser and greater arrays, continuing until the base case is met for all sub-arrays.
- Concatenation: Finally, the sorted lesser array, the pivot, and the sorted greater array are concatenated and returned.
Complexity Analysis of Quick Sort
Understanding the time complexity of quick sort is crucial for assessing its efficiency. The performance of quick sort can vary significantly depending on the choice of the pivot and the arrangement of the input data.
- Best Case Scenario: When the pivot divides the array into two nearly equal halves, we achieve the most efficient scenario. In this case, the time complexity is O(n log n), where n is the number of elements in the array: each level of partitioning requires linear time, and because the array is roughly halved at every level, the depth of the recursion is logarithmic in n.
- Worst Case Scenario: The worst case occurs when the pivot is always the smallest or largest element in the array, causing one sub-array to have n − 1 elements and the other to have none. Here, the time complexity degrades to O(n²): each level of recursion still processes nearly all the elements, but there are now about n levels, leading to a quadratic number of operations.
- Average Case Scenario: For a randomly ordered array, quick sort tends to perform very efficiently, with a time complexity of O(n log n), much like the best case. This average efficiency is one reason why quick sort is favored in many practical applications.
- Space Complexity: Quick sort is a recursive algorithm, and in the worst case it requires O(n) space on the call stack. This can be reduced to O(log n) in practical implementations by always recursing into the smaller partition first and handling the larger one iteratively.
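The gap between the average and worst cases can be made concrete by counting comparisons. The counting variant below is a rough experimental sketch of my own (same pivot rule as the implementation above), not a standard benchmark.

```python
import random

def quick_sort_counting(arr):
    """Out-of-place quick sort (last element as pivot) that returns the
    sorted list together with the number of pivot comparisons made."""
    if len(arr) <= 1:
        return list(arr), 0
    pivot, rest = arr[-1], arr[:-1]
    lesser, greater = [], []
    for x in rest:                  # one pivot comparison per element
        if x <= pivot:
            lesser.append(x)
        else:
            greater.append(x)
    sorted_lesser, c1 = quick_sort_counting(lesser)
    sorted_greater, c2 = quick_sort_counting(greater)
    return sorted_lesser + [pivot] + sorted_greater, len(rest) + c1 + c2

n = 300
random_input = random.sample(range(n), n)
_, c_random = quick_sort_counting(random_input)
# Already-sorted input makes every pivot the maximum, so one sub-array
# always holds all remaining elements: exactly n(n-1)/2 comparisons.
_, c_sorted = quick_sort_counting(sorted(random_input))
print("random:", c_random, "sorted:", c_sorted)
```

On random input the count stays proportional to n log n, while on sorted input it is exactly n(n − 1)/2 = 44850 for n = 300, which is the quadratic blow-up described above.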
Advantages of Quick Sort
- Efficient on Large Lists: Quick sort is remarkably efficient for sorting large datasets. Its average and best-case time complexity of O(n log n) makes it faster than simple sorting algorithms like insertion sort and selection sort, which have average complexities of O(n²).
- Space Efficient: Despite being a recursive algorithm, quick sort can be implemented to use minimal additional space. In-place versions of quick sort only require a small amount of extra storage space for the stack, making it highly space-efficient.
- Adaptable to Various Data Structures: Quick sort can be easily adapted to sort not only arrays but also lists and other data structures. It can handle arrays of any type and size, making it versatile across different programming needs.
- Locality of Reference: Quick sort has good cache locality, which can be a significant advantage on modern computer architectures. It makes efficient use of data that has been loaded into the cache from memory, improving overall sorting speed.
- Easily Parallelizable: The partitioning step of quick sort produces independent sub-arrays, so different parts of the array can be sorted concurrently, taking advantage of the multi-threading capabilities of modern processors to achieve even faster sorting times.
- Flexible with Pivot Choices: The pivot can be chosen freely (the first element, the last, the middle, or a random element), which allows optimizations based on the characteristics of the data being sorted. This adaptability can lead to performance improvements in specific scenarios.
Disadvantages of Quick Sort
- Worst Case Performance: When pivot selection repeatedly produces the most unbalanced partitions (for instance, when the smallest or largest element is always picked as the pivot), quick sort suffers from a time complexity of O(n²). This makes it less efficient than other O(n log n) sorting algorithms, like merge sort, under those circumstances.
- Sensitive to Pivot Selection: The efficiency of quick sort depends heavily on the method used to choose the pivot. Poor pivot selection can degrade its performance significantly, leading to the worst-case time complexity described above.
- Recursive Overheads: Quick sort is a recursive algorithm, which can incur significant overhead from recursive calls, especially in languages and environments where recursion is not well optimized. Deep recursion can also cause stack overflow.
- Not Stable: A stable sort keeps records with equal keys in their original order. Quick sort does not guarantee this, meaning that equal elements might not maintain their original order after sorting, which can be problematic in certain applications.
- Inconsistent Performance: The running time of quick sort can vary unpredictably depending on the initial order of elements in the array. This inconsistency can be a drawback when predictable performance is necessary, such as in real-time applications.
- Complexity in Parallel Implementation: While quick sort can be parallelized, doing so effectively involves coordination that can introduce overhead. The need to balance load across multiple processors can complicate the implementation and might not always lead to proportional performance gains.
These disadvantages highlight that while quick sort is a powerful sorting tool, it may not always be the best choice depending on the specific requirements and constraints of the application.
Frequently Asked Questions
Why is quick sort faster than other sorting algorithms?
Quick sort is generally faster because it uses a divide-and-conquer strategy to sort different parts of the array independently. This allows it to efficiently manage large datasets with an average complexity of O(n log n).
Can quick sort be used for linked lists?
Yes, quick sort can be adapted for linked lists. The algorithm requires some modifications, such as using pointers for elements instead of array indices, but it maintains its efficiency and performance characteristics.
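One such adaptation can be sketched by partitioning the nodes into chains and splicing them back together. The Node class and helper functions below are hypothetical, written only to illustrate the idea.

```python
class Node:
    """A minimal singly linked list node."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def concat(a, b):
    """Append chain b to the end of chain a and return the combined head."""
    if a is None:
        return b
    tail = a
    while tail.next:
        tail = tail.next
    tail.next = b
    return a

def quick_sort_list(head):
    """Quick sort for a singly linked list: split the nodes into
    lesser / equal / greater chains around the head's value, sort the
    outer chains recursively, then splice the three chains together."""
    if head is None or head.next is None:
        return head
    pivot = head.val
    lesser = equal = greater = None
    node = head
    while node:
        nxt = node.next
        if node.val < pivot:
            node.next, lesser = lesser, node     # push onto the lesser chain
        elif node.val == pivot:
            node.next, equal = equal, node       # push onto the equal chain
        else:
            node.next, greater = greater, node   # push onto the greater chain
        node = nxt
    return concat(quick_sort_list(lesser),
                  concat(equal, quick_sort_list(greater)))

# Helpers to convert between Python lists and linked lists
def from_list(vals):
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out
```

Because only `next` pointers are rewired, no extra arrays are allocated, though the relative order of equal elements is not preserved.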
How can you prevent the worst-case time complexity in quick sort?
To avoid the worst-case scenario of O(n²), it's advisable to use techniques like choosing a random pivot or the median-of-three method, which selects the median of the first, middle, and last elements as the pivot. These strategies help ensure more balanced partitions.
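Both strategies can be sketched as small helpers for a partition scheme that expects the pivot at the end of the segment. The function names are my own, for illustration only.

```python
import random

def median_of_three(arr, low, high):
    """Move the median of arr[low], arr[mid], arr[high] to arr[high],
    so a last-element partition scheme can use it as the pivot."""
    mid = (low + high) // 2
    # sort the three (value, index) candidates and take the middle one
    _, m = sorted([(arr[low], low), (arr[mid], mid), (arr[high], high)])[1]
    arr[m], arr[high] = arr[high], arr[m]
    return arr[high]

def random_pivot(arr, low, high):
    """Alternatively, move a uniformly random element to arr[high]."""
    r = random.randrange(low, high + 1)
    arr[r], arr[high] = arr[high], arr[r]
    return arr[high]
```

With median-of-three, an already-sorted segment yields its middle element as the pivot, turning the worst case described above into a balanced split.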
Conclusion
In this article, we have learned about quick sort, an efficient sorting algorithm known for its O(n log n) average time complexity. We walked through the key steps involved in quick sort, including pivot selection, partitioning, and recursive sorting. Despite its potential disadvantages, such as sensitivity to pivot selection and worst-case performance, quick sort remains a preferred choice due to its overall efficiency, especially in handling large datasets.