Sorting algorithm

In computer science and mathematics, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output.

Classification

Sorting algorithms used in computer science are often classified by:

  • computational complexity (worst, average and best behaviour) in terms of the size of the list (n). Typically, good behaviour is O(n log n) and bad behaviour is Ω(n²). Ideal behaviour for a sort is O(n). Sort algorithms that only use an abstract key comparison operation always need at least Ω(n log n) comparisons on average;
  • memory usage (and use of other computer resources)
  • stability: stable sorting algorithms maintain the relative order of records with equal keys. That is, a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.

When equal elements are indistinguishable, such as with integers, stability is not an issue. However, assume that the following pairs of numbers are to be sorted by their first coordinate:

(4, 1)  (3, 1)  (3, 7)  (5, 6)

In this case, two different results are possible, one which maintains the relative order of records with equal keys, and one which does not:

(3, 1)  (3, 7)  (4, 1)  (5, 6)   (order maintained)
(3, 7)  (3, 1)  (4, 1)  (5, 6)   (order changed)

Unstable sorting algorithms may change the relative order of records with equal keys; stable sorting algorithms never do so. Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using their positions in the original list as a tie-breaker. Remembering this order, however, often incurs an additional space penalty.
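As a concrete illustration, here is a minimal Python sketch of this key-extension technique (the helper name stable_sort is illustrative; Python's built-in sort is in fact already stable, so the decoration below is shown only to demonstrate the general method):

    def stable_sort(records, key):
        # Decorate each record with its original index; ties on key(r)
        # are then broken by i, so equal-keyed records keep their order.
        decorated = [(key(r), i, r) for i, r in enumerate(records)]
        decorated.sort()
        return [r for _, _, r in decorated]

    pairs = [(4, 1), (3, 1), (3, 7), (5, 6)]
    print(stable_sort(pairs, key=lambda p: p[0]))
    # [(3, 1), (3, 7), (4, 1), (5, 6)]  (order maintained)

Note that the extra index stored per record is exactly the space penalty mentioned above.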

List of sorting algorithms

In the lists below, n is the number of records to be sorted and k is the number of distinct key values (for radix sort, the length of the keys).

Stable

  • Bubble sort — O(n²)
  • Cocktail sort — O(n²)
  • Insertion sort — O(n²)
  • Bucket sort — O(n); requires O(k) extra memory
  • Counting sort — O(n + k); requires O(n + k) extra memory
  • Merge sort — O(n log n); requires O(n) extra memory
  • Radix sort — O(n · k); requires O(n) extra memory

Unstable

  • Selection sort — O(n²)
  • Shell sort — between O(n log n) and O(n²), depending on the gap sequence
  • Heapsort — O(n log n)
  • Quicksort — O(n log n) average, O(n²) worst case

Impractical sort algorithms

  • Bogosort — O(n × n!) expected time, unbounded worst case (see the sketch after this list).
  • Stupid sort — O(n³); recursive version requires O(n²) extra memory
  • Bead Sort — O(n) or O(√n), but requires specialized hardware
  • Pancake sorting — O(n), but requires specialized hardware
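
As a concrete illustration of the first entry above, a minimal bogosort sketch in Python simply shuffles the list until it happens to be in order:

    import random

    def is_sorted(a):
        return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

    def bogosort(a):
        # Shuffle until sorted: O(n × n!) expected, unbounded worst case.
        while not is_sorted(a):
            random.shuffle(a)
        return a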

Summaries of the popular sorting algorithms

Bubble sort

Bubble sort is the most straightforward and simplistic method of sorting data that could actually be considered for real-world use. The algorithm starts at the beginning of the data set, compares the first two elements, and swaps them if the first is greater than the second. It continues in this way through each pair of adjacent elements to the end of the data set, then begins another pass, repeating until no swaps occur on a complete pass. This algorithm is highly inefficient, however, and is rarely used except in education (e.g., beginning programming classes). A slightly better variant, generally called cocktail sort, alternates the direction of its passes.
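
A minimal Python sketch of the pass-until-no-swaps loop described above:

    def bubble_sort(a):
        # Repeatedly sweep the list, swapping adjacent out-of-order
        # pairs, until a complete pass makes no swaps.
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            n -= 1  # the largest remaining element has bubbled to the end
        return a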

Insertion sort

Insertion sort is similar to bubble sort, but is more efficient as it somewhat reduces the number of comparisons per pass. On each pass, the next element is compared against the already-sorted elements before it, moving backwards until an element no greater than it is found, and it is inserted at that position. Although this algorithm is more efficient than bubble sort, it is still inefficient compared to many other sorting algorithms, since it, like bubble sort, moves elements only one position at a time. However, insertion sort is a good choice for small lists (about 30 elements or fewer) and for nearly sorted lists. These observations can be combined to create a variant of insertion sort that works efficiently for larger lists. This variant is called shell sort (see below).
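
A minimal Python sketch of the insertion step; note that elements shift only one position at a time, which is the inefficiency noted above:

    def insertion_sort(a):
        # Grow a sorted prefix one element at a time.
        for i in range(1, len(a)):
            value = a[i]
            j = i - 1
            # Walk back through the sorted prefix until an element
            # no greater than value is found.
            while j >= 0 and a[j] > value:
                a[j + 1] = a[j]  # shift right by one position
                j -= 1
            a[j + 1] = value
        return a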

Shell sort

Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array (in reality, the array is an appropriately indexed one-dimensional array) and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with fewer than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.
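
A minimal Python sketch, viewing shell sort as gapped insertion sort (equivalent to sorting the "columns" mentioned above); the gap sequence used here is Ciura's, one common choice among many:

    def shell_sort(a, gaps=(701, 301, 132, 57, 23, 10, 4, 1)):
        # For each gap, insertion-sort the interleaved subsequences of
        # elements gap apart; the final gap of 1 is a plain insertion
        # sort over an almost-sorted list.
        for gap in gaps:
            for i in range(gap, len(a)):
                value = a[i]
                j = i
                while j >= gap and a[j - gap] > value:
                    a[j] = a[j - gap]  # move elements several slots at once
                    j -= gap
                a[j] = value
        return a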

See in-place algorithm for a list of sorting algorithms that can be written to work in-place.

Merge sort

Merge sort takes advantage of the ease of merging already-sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on, until at last two lists are merged into the final sorted list.
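
A minimal Python sketch; the paragraph above describes the bottom-up view, while the recursive top-down version below is equivalent:

    def merge_sort(a):
        # Split in half, sort each half recursively, then merge.
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:  # <= keeps the sort stable
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]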

Heapsort

Heapsort is a member of the family of selection sorts. This family of algorithms works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list. Straight selection sort runs in O(n²) time, but heapsort accomplishes its task efficiently by using a data structure called a heap, a binary tree in which each parent is larger than either of its children (a max-heap). Once the data list has been made into a heap, the root node is guaranteed to be the largest element. It is removed and placed at the end of the list, then the remaining list is "heapified" again.
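
A minimal Python sketch using an in-array max-heap with a sift-down step:

    def heapsort(a):
        def sift_down(root, end):
            # Push a[root] down until the max-heap property holds.
            while 2 * root + 1 <= end:
                child = 2 * root + 1
                if child + 1 <= end and a[child] < a[child + 1]:
                    child += 1  # pick the larger child
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return
        n = len(a)
        for start in range(n // 2 - 1, -1, -1):  # build the heap
            sift_down(start, n - 1)
        for end in range(n - 1, 0, -1):
            a[0], a[end] = a[end], a[0]  # move the largest to the end
            sift_down(0, end - 1)        # re-heapify the rest
        return a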

Radix sort

Some radix sort algorithms are counterintuitive, but they can be surprisingly efficient. If we take the list to be sorted as a list of binary strings, we can sort them on the least significant bit, preserving their relative order. This "bitwise" sort must be stable; otherwise the algorithm will not work. Then we sort on the next bit, and so on from right to left, and the list will end up sorted. This is most efficient on a binary computer (which nearly all computers are). If we had a ternary computer, we would regard the list as a list of base-3 strings and proceed in the same way. Most often, the bucket sort algorithm is used to accomplish the bitwise sorting.
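
A minimal Python sketch of this right-to-left approach, assuming non-negative integers that fit in 32 bits; the two-bucket partition on each bit is stable, as required:

    def radix_sort_lsd(a, bits=32):
        # Distribute on one bit at a time, least significant first.
        for b in range(bits):
            zeros = [x for x in a if not (x >> b) & 1]  # bit b is 0
            ones = [x for x in a if (x >> b) & 1]       # bit b is 1
            a = zeros + ones  # stable: relative order preserved
        return a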

Radix sort can also be accomplished from left to right, but this makes the algorithm recursive. On a binary (radix 2) computer, we would sort on the leftmost bit, then recursively sort the sublist with 0 as the leftmost bit and the sublist with 1 as the leftmost bit, and so on.
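
A minimal recursive Python sketch of this left-to-right variant, under the same assumptions as above:

    def radix_sort_msd(a, bit=31):
        # Partition on the current bit, then sort each sublist on
        # the next bit down.
        if len(a) <= 1 or bit < 0:
            return a
        zeros = [x for x in a if not (x >> bit) & 1]
        ones = [x for x in a if (x >> bit) & 1]
        return radix_sort_msd(zeros, bit - 1) + radix_sort_msd(ones, bit - 1)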

Graphical representations

Microsoft's "Quick" programming languages (such as QuickBASIC and QuickPascal) have a file named "sortdemo" (with extension BAS and PAS for QB and QP, respectively) in the examples folder that provides a graphical representation of several of the sort procedures described here, as well as performance ratings of each.

Also, a program by Robb Cutler called The Sorter for classic Mac OS performs a similar function. It illustrates Quick Sort, Merge Sort, Heap Sort, Shell Sort, Insertion Sort, Bubble Sort, Shaker Sort, Bin Sort, and Selection Sort.

The contents of this article are licensed from Wikipedia.org under the GNU Free Documentation License.