Quicksort

Quicksort is a well-known sorting algorithm developed by C. A. R. Hoare that, on average, needs Θ(n log n) comparisons to sort n items, but requires Θ(n²) comparisons in the worst case.

Quicksort's inner loop is usually easy to implement very efficiently on most computer architectures, which makes it significantly faster in practice, on average, than other Θ(n log n) algorithms that can sort in place or nearly so. (Recursively implemented quicksort is not, as is sometimes claimed, an in-place algorithm: it requires Θ(log n) stack space for recursion on average, and Θ(n) in the worst case.)

Performance and algorithm details

Because of its good average performance and simple implementation, Quicksort is one of the most popular sorting algorithms in use. It is an unstable sort, in that it does not preserve any existing ordering among elements of equal value. Quicksort's worst-case performance is Θ(n²), much worse than that of some other sorting algorithms such as heapsort or merge sort. However, if pivots are chosen randomly, bad sequences of pivots are unlikely; the worst case occurs with probability only 1/n!.
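The instability is easy to see with a toy in-place implementation (a sketch for illustration only, not one of the samples below): sorting (key, tag) pairs by key can reorder pairs whose keys are equal.

```python
# Minimal in-place quicksort (first-element pivot, Lomuto-style partition),
# used here only to illustrate instability; not a production sort.
def quicksort(a, key=lambda x: x, left=0, right=None):
    if right is None:
        right = len(a) - 1
    if right > left:
        pivot = key(a[left])
        store = left
        for i in range(left + 1, right + 1):
            if key(a[i]) < pivot:
                store += 1
                a[store], a[i] = a[i], a[store]
        a[left], a[store] = a[store], a[left]
        quicksort(a, key, left, store - 1)
        quicksort(a, key, store + 1, right)

items = [(2, 'a'), (1, 'x'), (2, 'b'), (1, 'y')]
quicksort(items, key=lambda p: p[0])
# The keys come out sorted, but the original relative order of
# pairs with equal keys is not guaranteed to survive.
```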

The Quicksort algorithm uses a recursive divide and conquer strategy to sort a list. The steps are:

  • Pick a pivot element from the list.
  • Reorder the list so that all elements less than the pivot precede all elements greater than the pivot. This means that the pivot is in its final place; the algorithm puts at least one element in its final place on each pass over the list. This step is commonly referred to as "partitioning".
  • Recursively sort the sub-list of elements less than the pivot and the sub-list of elements greater than the pivot. If one of the sub-lists is empty or contains one element, it can be ignored.

In pseudocode, the complete algorithm in its simplest form is:

  function partition(a, left, right, pivotIndex)
      pivotValue := a[pivotIndex]
      swap(a[pivotIndex], a[right]) // Move pivot to end
      storeIndex := left
      for i from left to right-1
          if a[i] <= pivotValue
              swap(a[storeIndex], a[i])
              storeIndex := storeIndex + 1
      swap(a[right], a[storeIndex]) // Move pivot to its final place
      return storeIndex
  
  function quicksort(a, left, right)
      if right > left
          select a pivot value a[pivotIndex]
          pivotNewIndex := partition(a, left, right, pivotIndex)
          quicksort(a, left, pivotNewIndex-1)
          quicksort(a, pivotNewIndex+1, right)

The inner loop which performs the partition is often very amenable to optimization as all comparisons are being done with a single pivot element. This is one reason why Quicksort is often the fastest sorting algorithm, at least on average over all inputs.

The most crucial problem of Quicksort is the choice of pivot element. A naïve implementation, like some of the ones below, will be terribly inefficient for certain inputs. For example, if the pivot always turns out to be the smallest element in the list, then Quicksort degenerates to selection sort with Θ(n²) running time. A secondary problem is the recursion depth: in this degenerate case it becomes linear, and the stack requires Θ(n) extra space.
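The degeneration is easy to measure by counting comparisons. The following sketch uses a deliberately naive first-element-pivot quicksort (not one of the article's samples) to show the quadratic cost on sorted input:

```python
def count_comparisons(data):
    """Comparisons made by a quicksort that always picks the first
    element as pivot (a deliberately naive sketch)."""
    count = 0
    def qsort(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        count += len(rest)  # each remaining element is compared to the pivot
        return (qsort([x for x in rest if x < pivot])
                + [pivot]
                + qsort([x for x in rest if x >= pivot]))
    qsort(data)
    return count

# On already-sorted input every partition is maximally unbalanced:
# n items cost (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.
```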

Choosing a better pivot

The worst-case behavior of quicksort is not merely a theoretical problem. When quicksort is used in web services, for example, it is possible for an attacker to deliberately exploit the worst-case performance by choosing data which will cause a slow running time or maximize the chance of running out of stack space. See competitive analysis for more discussion of this issue.

Sorted or partially sorted data is quite common in practice and the naïve implementation which selects the first element as the pivot does poorly with such data. To avoid this problem the middle element can be used. This works well in practice but attacks can cause worst-case performance.

A better optimization is to select the median of the first, middle and last elements as the pivot. Adding two randomly selected elements resists chosen-data attacks, more so if a cryptographically strong random number generator is used to reduce the chance of an attacker predicting the "random" elements. The fixed elements reduce the chance of bad luck causing a poor pivot selection for partially sorted data when not under attack. These steps add overhead, so it may be worth skipping them once the partitions grow small and the penalty for poor pivot selection drops.
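A sketch of such a pivot selector (the helper name is hypothetical; the two random samples are the optional hardening step described above):

```python
import random

def choose_pivot_index(a, left, right):
    """Pick a pivot index for a[left..right] (inclusive) as the median
    of the first, middle and last elements plus two random samples."""
    mid = (left + right) // 2
    candidates = [left, mid, right]
    # Two random indexes resist chosen-data attacks; a cryptographically
    # strong generator (e.g. the secrets module) further reduces the
    # chance of an attacker predicting them.
    candidates += [random.randint(left, right) for _ in range(2)]
    candidates.sort(key=lambda i: a[i])
    return candidates[len(candidates) // 2]
```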

Finding the true median value and using it as the pivot can be done if the number of elements is large enough to make it necessary, but this is seldom done in practice.

Other optimizations

Another optimization is to switch to a different sorting algorithm once the list becomes small, perhaps ten or fewer elements. Selection sort might be inefficient for large data sets, but it is often faster than Quicksort on small lists.
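A sketch of the hybrid, using insertion sort below the cutoff (the cutoff value and the Hoare-style partition are illustrative choices, not prescribed by the text):

```python
CUTOFF = 10  # illustrative; real libraries tune this empirically

def insertion_sort(a, left, right):
    """Sort a[left..right] (inclusive) by insertion."""
    for i in range(left + 1, right + 1):
        v, j = a[i], i
        while j > left and a[j - 1] > v:
            a[j] = a[j - 1]
            j -= 1
        a[j] = v

def hybrid_quicksort(a, left=0, right=None):
    if right is None:
        right = len(a) - 1
    if right - left < CUTOFF:          # small range: simple sort
        insertion_sort(a, left, right)
        return
    pivot = a[(left + right) // 2]
    i, j = left, right
    while i <= j:                      # Hoare-style partition around pivot
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1
    hybrid_quicksort(a, left, j)
    hybrid_quicksort(a, i, right)
```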

One widely used implementation of quicksort, that in the 1997 Microsoft C library, used a cutoff of 8 elements before switching to insertion sort, asserting that testing had shown that to be a good choice. It used the middle element for the partition value, asserting that testing had shown that the median of three algorithm did not, in general, increase performance.

In data sets which contain many equal elements, quicksort can degenerate to its worst-case time complexity while sorting the 'bottom tier' of partitions. A good variation in such cases is to test separately for equal elements and store them in a 'fat pivot' in the center of the partition. A C implementation of this variation is shown below.

Sedgewick (1978) suggested an enhancement to the use of simple sorts for small numbers of elements, which reduced the number of instructions required by postponing the simple sorts until the quicksort had finished, then running an insertion sort over the whole array.

LaMarca and Ladner (1997) consider "The Influence of Caches on the Performance of Sorting", a very significant issue in microprocessor systems with multi-level caches and high cache miss times. They conclude that while the Sedgewick optimization decreases the number of instructions, it also decreases locality of cache references and worsens performance compared to running the simple sort as soon as the need for it is encountered. However, the effect was not dramatic, and they suggested it was starting to become more significant with more than 4 million 64-bit floating-point elements. This work is cited by Musser (below). Their work showed greater improvements for other sorting types.

Because recursion requires additional memory, Quicksort has been implemented in a non-recursive, iterative form. This has the advantage of predictable memory use regardless of input, and the disadvantage of considerably greater code complexity. Those considering iterative implementations of Quicksort would do well to also consider Introsort or especially Heapsort.

A simple alternative for reducing Quicksort's memory consumption uses true recursion only on the smaller of the two sublists and tail recursion on the larger. This limits the additional storage of Quicksort to O(log n). The procedure quicksort in the preceding pseudocode would be rewritten as

  function quicksort(a, left, right)
      while right > left
          select a pivot value a[pivotIndex]
          pivotNewIndex := partition(a, left, right, pivotIndex)
          if (pivotNewIndex-1) - left < right - (pivotNewIndex+1)
              quicksort(a, left, pivotNewIndex-1)
              left  := pivotNewIndex+1
          else
              quicksort(a, pivotNewIndex+1, right)
              right := pivotNewIndex-1

Introsort optimization

An optimization of quicksort which is becoming widely used is introspective sort, often called introsort (Musser 1997). This starts with quicksort and switches to heapsort when the recursion depth exceeds a preset value. It sidesteps the overhead of increasingly complex pivot selection techniques while ensuring O(n log n) worst-case performance. Musser reported that on a median-of-3 killer sequence of 100,000 elements, running time was 1/200th that of median-of-3 quicksort. Musser also considered the effect of Sedgewick's delayed small sorting on caches, reporting that it could double the number of cache misses, but that its performance with double-ended queues was significantly better, and that it should be retained for template libraries, in part because the gain in other cases from doing the sorts immediately was not great.
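A sketch of the idea in Python (heapsort via the standard heapq module stands in for an in-place heapsort; the 2·⌈log₂ n⌉ depth limit follows Musser's suggestion):

```python
import heapq

def heapsort_slice(a, left, right):
    """Heapsort a[left..right] (inclusive) using the heapq module."""
    xs = a[left:right + 1]
    heapq.heapify(xs)
    a[left:right + 1] = [heapq.heappop(xs) for _ in range(len(xs))]

def introsort(a, left=0, right=None, depth=None):
    if right is None:
        right = len(a) - 1
        depth = 2 * max(1, right - left + 1).bit_length()
    if right <= left:
        return
    if depth == 0:
        # Depth limit hit: fall back to heapsort for O(n log n) worst case.
        heapsort_slice(a, left, right)
        return
    pivot = a[(left + right) // 2]
    i, j = left, right
    while i <= j:                      # Hoare-style partition
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1
    introsort(a, left, j, depth - 1)
    introsort(a, i, right, depth - 1)
```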

The June 2000 SGI C++ Standard Template Library stl_algo.h implementation of unstable sort uses the Musser introsort approach, with the recursion depth at which to switch to heapsort passed as a parameter, median-of-3 pivot selection, and the Sedgewick final insertion-sort pass. The element threshold for switching to the simple insertion sort was 16.

The C++ STL implementations generally outperform the equivalent generic C implementation significantly, often running several times as fast, because templates allow comparisons to be inlined while C's generic sort must call the comparison function through a pointer. This advantage could be compensated for by using a custom version of the sort function, at the cost of losing the advantage of a totally generic library function.

Competitive sorting algorithms

The most direct competitor of Quicksort is heapsort. Heapsort is typically somewhat slower than Quicksort, but the worst-case running time is always O(n log n). Quicksort is usually faster, though there remains the chance of worst case performance except in the introsort variant. If it's known in advance that heapsort is going to be necessary, using it directly will be faster than waiting for introsort to switch to it. Heapsort also has the important advantage of using only constant additional space (heapsort is in-place), whereas even the best variant of Quicksort uses O(log n) space.

Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, Quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order.

Relationship to selection

A simple selection algorithm, which finds the kth smallest of a list of elements, works much like quicksort, except that instead of recursing on both sublists, it recurses only on the sublist which contains the desired element. This small change lowers the average complexity to linear, i.e. O(n), time. A variation on this algorithm brings the worst-case time down to O(n) (see selection algorithm for more information).
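The selection variant (often called quickselect) can be sketched in the style of the list-based samples below:

```python
def quickselect(a, k):
    """Return the k-th smallest element of a (k = 0 for the minimum).
    Average O(n): recurse only into the sublist holding the answer."""
    pivot = a[len(a) // 2]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k < len(less):
        return quickselect(less, k)
    if k < len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))
```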

Conversely, once we know a worst-case O(n) selection algorithm is available, we can use it to find the ideal pivot (the median) at every step of Quicksort, producing a variant with worst-case O(n log n) running time. This variant is considerably slower on average, however.

Sample implementations

Sample quicksort implementations in various languages, sorted by number of non-comment lines of code. Samples are written in a non-contrived style, characteristic of the respective languages.

J

   sort =: ]`(($:@: ((}.<:{.)#}.)) 
              ,{., 
             ($:@: ((}.> {.)#}.)))  @. (*@#)

Joy


 DEFINE sort == [small][]
                [uncons [>] split]
                [[swap] dip cons concat] binrec .

Miranda

   sort []           = []  
   sort (pivot:rest) = sort [ y | y <- rest; y <= pivot ]  
                        ++ [pivot] ++
                       sort [ y | y <- rest; y >  pivot ]

NGL

  sort ()          == id
  sort pivot,,rest == ( self : rest[rest <= pivot] )
                       , pivot , 
                      ( self : rest[rest >  pivot] )

Haskell

The following Haskell code is almost self explanatory but can suffer from inefficiencies because it crawls through the list "rest" twice, once for each list comprehension. A smart implementation can perform optimizations to prevent this inefficiency, but these are not required by the language.

  sort :: (Ord a)   => [a] -> [a]
  
  sort []           = []
  sort (pivot:rest) = sort [y | y <- rest, y < pivot]
                      ++ [pivot] ++ 
                      sort [y | y <- rest, y >= pivot]

Erlang

The following Erlang code sorts lists of items of any type via a user-provided Cmp comparison function:

%% sort(Array,Cmp)

  sort ([],_)             -> [];
  sort ([Pivot|Rest],Cmp) -> sort([ Y || Y <- Rest, Cmp(Y,Pivot) ], Cmp)
                              ++ [Pivot] ++
                             sort([ Y || Y <- Rest, Cmp(Pivot,Y) ], Cmp).

OCaml

  val sort : 'a list -> 'a list = <fun>

# let rec sort array = match array with
     []              -> []
     | pivot::rest   -> let left,right = List.partition (function x -> x < pivot) rest
                        in (sort left) @ pivot::(sort right);;

Common Lisp

(defun partition (fun array)
  (list (remove-if-not fun array) (remove-if fun array)))
 
(defun sort (array)
  (if (null array) nil
    (let ((part (partition (lambda (x) (< x (car array))) (cdr array))))
      (append (sort (car part)) (cons (car array) (sort (cadr part)))))))

Ruby

def sort(array)
  return [] if array.empty?
  pivot = array[0]
  left, right = array[1..-1].partition { |y| y <= pivot }.map { |a| sort(a) }
  left + [ pivot ] + right
end

C++

#include <algorithm>
#include <iterator>
#include <functional>

template <typename T>
void sort(T begin, T end) {
    if (begin != end) {
        typedef typename std::iterator_traits<T>::value_type value_type;
        T middle = std::partition(begin, end,
                                  std::bind2nd(std::less<value_type>(), *begin));
        sort(begin, middle);
        sort(std::max(begin + 1, middle), end); // skip the pivot position to guarantee progress
    }
}

Python

The following Python implementation uses a more efficient partitioning strategy:

def partition(array, begin, end, cmp):
    while begin < end:
        while begin < end:
            if cmp(array[begin], array[end]):
                array[begin], array[end] = array[end], array[begin]
                break
            end -= 1
        while begin < end:
            if cmp(array[begin], array[end]):
                array[begin], array[end] = array[end], array[begin]
                break
            begin += 1
    return begin

def sort(array, cmp=lambda x, y: x > y, begin=None, end=None):
    if begin is None: begin = 0
    if end   is None: end   = len(array)
    if begin < end:
        i = partition(array, begin, end-1, cmp)
        sort(array, cmp, begin, i)
        sort(array, cmp, i+1, end)

AppleScript

This is a straightforward implementation. It is certainly possible to come up with a more efficient one, but it will probably not be as clear as this one:

 on sort( array, left, right )
     set i to left
     set j to right
     set v to item ( ( left + right ) div 2 ) of array -- pivot
     repeat while ( j > i )
         repeat while ( item i of array < v )
             set i to i + 1
         end repeat
         repeat while ( item j of array > v )
             set j to j - 1
         end repeat
         if ( not i > j ) then
             tell array to set { item i, item j } to { item j, item i } -- swap
             set i to i + 1
             set j to j - 1
         end if
     end repeat 
     if ( left  < j ) then sort( array, left, j  )
     if ( right > i ) then sort( array, i, right )
 end sort

C

Both implementations below are limited to arrays of integers and assume this small swap helper:

void swap(int *a, int *b) {
   int t = *a;
   *a = *b;
   *b = t;
}

void sort(int array[], int begin, int end) {
   if (end - begin > 1) {
      int pivot = array[begin];
      int l = begin + 1;
      int r = end;
      while (l < r) {
         if (array[l] <= pivot) {
            l++;
         } else {
            r--;
            swap(&array[l], &array[r]);
         }
      }
      l--;
      swap(&array[begin], &array[l]);
      sort(array, begin, l);
      sort(array, r, end);
   }
}

The following implementation uses a 'fat pivot' to group elements equal to the pivot:

void sort(int array[], int begin, int end) {
   int pivot = array[begin];
   int i = begin + 1, j = end, k = end;
   int t;

   while (i < j) {
      if (array[i] < pivot) {
         i++;
      } else if (array[i] > pivot) {
         /* rotate array[i] into the 'greater' block at the right end */
         j--; k--;
         t = array[i];
         array[i] = array[j];
         array[j] = array[k];
         array[k] = t;
      } else {
         /* equal to the pivot: move into the 'fat pivot' block */
         j--;
         swap(&array[i], &array[j]);
      }
   }
   i--;
   swap(&array[begin], &array[i]);
   if (i - begin > 1)
      sort(array, begin, i);
   if (end - k > 1)
      sort(array, k, end);
}

Java

The following Java implementation uses a randomly selected pivot. Analogously to the Erlang solution above, a user-supplied Comparator determines the partial ordering of array elements:

import java.util.Comparator;
import java.util.Random;

public class Quicksort {
    public static final Random RND = new Random();      
    private void swap(Object[] array, int i, int j) {
        Object tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
    private int partition(Object[] array, int begin, int end, Comparator cmp) {
        int index = begin + RND.nextInt(end - begin + 1);
        Object pivot = array[index];
        swap(array, index, end);        
        for (int i = index = begin; i < end; ++ i) {
            if (cmp.compare(array[i], pivot) <= 0) {
                swap(array, index++, i);
        }   }
        swap(array, index, end);        
        return (index);
    }
    private void qsort(Object[] array, int begin, int end, Comparator cmp) {
        if (end > begin) {
            int index = partition(array, begin, end, cmp);
            qsort(array, begin, index - 1, cmp);
            qsort(array, index + 1,  end,  cmp);
    }   }
    public void sort(Object[] array, Comparator cmp) {
        qsort(array, 0, array.length - 1, cmp);
}   }

C#

The following C# implementation uses a random pivot and is limited to integer arrays. For other value types, replace int[] with the corresponding array type, e.g. decimal[]; for object[], create a delegate for your custom compare function and pass it as an extra parameter to both methods:

class Quicksort {
        private void swap(int[] Array, int Left, int Right) {
                int temp = Array[Left];
                Array[Left] = Array[Right];
                Array[Right] = temp;
        }

        public void sort(int[] Array, int Left, int Right) {
                int LHold = Left;
                int RHold = Right;
                Random ObjRan = new Random();
                int    Pivot  = ObjRan.Next(Left, Right + 1); // Next's upper bound is exclusive
                swap(Array,Pivot,Left);
                Pivot = Left;
                Left++;

                while (Right >= Left) {
                        if (Array[Left] >= Array[Pivot] && Array[Right] < Array[Pivot])
                                swap(Array, Left, Right);
                        else if (Array[Left] >= Array[Pivot])
                                Right--;
                        else if (Array[Right] < Array[Pivot])
                                Left++;
                        else {
                                Right--;
                                Left++;
                }       }       
                swap(Array, Pivot, Right);
                Pivot = Right;  
                if (Pivot - 1 > LHold)
                        sort(Array, LHold,   Pivot-1);
                if (RHold > Pivot+1)
                        sort(Array, Pivot+1, RHold);
}       }

The following example uses delegates. Pass the array of objects to the class constructor; to compare other types of objects, write your own compare function in place of CmpInt:

class QuickSort {
        private delegate int CmpOp(object Left, object Right);
        private void swap(object[] Array, int Left, int Right) {
                object tempObj = Array[Left];
                Array[Left]    = Array[Right];
                Array[Right]   = tempObj;
        }
        private int CmpInt(object Left, object Right) {
                if ((int) Left < (int) Right)
                        return -1;
                else
                        return 1;
        }
        public QuickSort(object[] Array) {
                CmpOp Cmp = new CmpOp(CmpInt);
                Sort(Array, 0, Array.Length-1, Cmp);
        }
        private void Sort(object[] Array, int Left, int Right, CmpOp Cmp) {
                int LHold = Left;
                int RHold = Right;
                Random ObjRan = new Random();
                int Pivot = ObjRan.Next(Left, Right + 1);
                swap(Array, Pivot, Left);
                Pivot = Left;
                Left++;

                while (Right >= Left) {
                        if (Cmp(Array[Left], Array[Pivot]) != -1 && Cmp(Array[Right], Array[Pivot]) == -1)
                                swap(Array, Left, Right);
                        else if (Cmp(Array[Left], Array[Pivot]) != -1)
                                Right--;
                        else if (Cmp(Array[Right], Array[Pivot]) == -1)
                                Left++;
                        else {
                                Right--;
                                Left++;
                }       }
                swap(Array, Pivot, Right);
                Pivot = Right;

                if (Pivot - 1 > LHold)
                        Sort(Array, LHold,  Pivot-1, Cmp);
                if (RHold > Pivot+1)
                        Sort(Array, Pivot+1, RHold, Cmp);
}       }

ARM assembly language

This ARM RISC assembly language implementation for sorting an array of 32-bit integers demonstrates how well quicksort takes advantage of the register model and capabilities of a typical machine instruction set (note that this particular implementation does not meet standard calling conventions and may use more than O(log n) space):

  qsort:  @ Takes three parameters:
        @   a:     Pointer to base of array a to be sorted (arrives in r0)
        @   left:  First of the range of indexes to sort (arrives in r1)
        @   right: One past last of range of indexes to sort (arrives in r2)
        @ This function destroys: r1, r2, r3, r4, r5, r7
        stmfd   sp!, {r4, r6, lr}     @ Save r4, r6 and the return address
        mov     r6, r2                @ r6 <- right
  qsort_tailcall_entry:
        sub     r7, r6, r1            @ If right - left <= 1 (already sorted),
        cmp     r7, #1
        ldmlefd sp!, {r1, r6, pc}     @ Return, moving r4->r1, restoring r6
        ldr     r7, [r0, r1, asl #2]  @ r7 <- a[left], gets pivot element
        add     r2, r1, #1            @ l <- left + 1
        mov     r4, r6                @ r <- right
  partition_loop:
        ldr     r3, [r0, r2, asl #2]  @ r3 <- a[l]
        cmp     r3, r7                @ If a[l] <= pivot_element,
        addle   r2, r2, #1            @ ... increment l, and
        ble     partition_test        @ ... continue to next iteration.
        sub     r4, r4, #1            @ Otherwise, decrement r,
        ldr     r5, [r0, r4, asl #2]  @ ... and swap a[l] and a[r].
        str     r5, [r0, r2, asl #2]
        str     r3, [r0, r4, asl #2]
  partition_test:
        cmp     r2, r4                @ If l < r,
        blt     partition_loop        @ ... continue iterating.
  partition_finish:
        sub     r2, r2, #1            @ Decrement l
        ldr     r3, [r0, r2, asl #2]  @ Swap a[l] and pivot
        str     r3, [r0, r1, asl #2]
        str     r7, [r0, r2, asl #2]
        bl      qsort                 @ Call self recursively on left part,
                                      @  with args a (r0), left (r1), r (r2),
                                      @  also preserves r6 and
                                      @  moves r4 (l) to 2nd arg register (r1)
        b       qsort_tailcall_entry  @ Tail-call self on right part,
                                      @  with args a (r0), l (r1), right (r6)

The call produces 3 words of stack per recursive call and is able to take advantage of its knowledge of its own behavior. A more efficient implementation would sort small ranges by a more efficient method. If an implementation obeying standard calling conventions were needed, a simple wrapper could be written for the initial call to the above function that saves the appropriate registers.

References

  • Hoare, C. A. R. "Partition: Algorithm 63," "Quicksort: Algorithm 64," and "Find: Algorithm 65." Comm. ACM 4, 321-322, 1961
  • R. Sedgewick. Implementing quicksort programs, Communications of the ACM, 21(10):847-857, 1978.
  • David Musser. Introspective Sorting and Selection Algorithms, Software Practice and Experience vol 27, number 8, pages 983-993, 1997
  • A. LaMarca and R. E. Ladner. "The Influence of Caches on the Performance of Sorting." Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997. pp. 370-379.


Last updated: 10-24-2004 05:10:45