{{Short description|In-place comparison sorting algorithm}}
{{Infobox Algorithm
|name={{PAGENAMEBASE}}|class=[[Sorting algorithm]]
|caption=Selection sort animation
|data=[[Array data structure|Array]]
|time=<math>O(n^2)</math> comparisons, <math>O(n)</math> swaps
|best-time=<math>O(n^2)</math> comparisons, <math>O(1)</math> swap
|average-time=<math>O(n^2)</math> comparisons, <math>O(n)</math> swaps
|space=<math>O(1)</math> auxiliary
|optimal=No
|stable=No
}}
In [[computer science]], '''selection sort''' is an [[in-place algorithm|in-place]] [[comparison sort|comparison]] [[sorting algorithm]]. It has a [[Big O notation|O]](''n''<sup>2</sup>) [[time complexity]], which makes it inefficient on large lists, and generally performs worse than the similar [[insertion sort]]. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where [[auxiliary memory]] is limited.
The algorithm divides the input list into two parts: a sorted sublist of items which is built up from left to right at the front (left) of the list and a sublist of the remaining unsorted items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.
Here is an example of this sort algorithm sorting five elements:

{| class="wikitable"
! Sorted sublist !! Unsorted sublist !! Least element in unsorted list
|-
| ()
| style="text-align:right;" | (11, 25, 12, 22, 64)
| 11
|-
| (11)
| style="text-align:right;" | (25, 12, 22, 64)
| 12
|-
| (11, 12)
| style="text-align:right;" | (25, 22, 64)
| 22
|-
| (11, 12, 22)
| style="text-align:right;" | (25, 64)
| 25
|-
| (11, 12, 22, 25)
| style="text-align:right;" | (64)
| 64
|-
| (11, 12, 22, 25, 64)
| style="text-align:right;" | ()
|
|}
== Implementations ==
{{unreferenced section|date=May 2019}}
Below is an implementation in [[C (programming language)|C]].
<syntaxhighlight lang="c" style="overflow:auto; width:auto;" line="1">
/* Sort a[0..n-1] in ascending order; swap() exchanges the two ints it is given. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int jMin = i;                  /* index of the smallest element in a[i..n-1] */
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[jMin])
                jMin = j;
        if (jMin != i)
            swap(&a[i], &a[jMin]);
    }
}
</syntaxhighlight>
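The routine above leaves <code>swap</code> undefined. A minimal driver such as the following can be used to exercise it, assuming the <code>swap</code> helper is defined (or declared) before <code>selection_sort</code> in the same file; the helper and the example values are illustrative choices for this sketch, not part of the implementation above.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Illustrative helper assumed by selection_sort: exchanges two ints. */
static void swap(int *x, int *y)
{
    int t = *x;
    *x = *y;
    *y = t;
}

int main(void)
{
    int a[] = {11, 25, 12, 22, 64};   /* the list used in the example table */
    int n = (int)(sizeof a / sizeof a[0]);
    selection_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);          /* prints: 11 12 22 25 64 */
    printf("\n");
    return 0;
}
</syntaxhighlight>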
== Complexity ==
Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops depend on the data in the array. Selecting the minimum requires scanning <math>n</math> elements (taking <math>n-1</math> comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining <math>n-1</math> elements (taking <math>n-2</math> comparisons) and so on. Therefore, the total number of comparisons is
: <math>(n-1)+(n-2)+\dots+1 =
\sum_{i=1}^{n-1}i</math>
By [[arithmetic progression]],
: <math>\sum_{i=1}^{n-1}i=
\frac{(n-1)+1}{2}(n-1)=
\frac{1}{2}n(n-1)=
\frac{1}{2}(n^2-n)</math>
which is of complexity <math>O(n^2)</math> in terms of number of comparisons.
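For example, sorting the five-element list from the table above always takes <math>4+3+2+1 = \tfrac{1}{2}\cdot 5\cdot(5-1) = 10</math> comparisons, whatever the initial order of the elements.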
== Comparison to other sorting algorithms ==
Among quadratic sorting algorithms (sorting algorithms with a simple average-case of [[Big O notation#Family of Bachmann–Landau notations|Θ(''n''<sup>2</sup>)]]), selection sort almost always outperforms [[bubble sort]] and [[gnome sort]]. [[Insertion sort]] is very similar in that after the ''k''th iteration, the first <math>k</math> elements in the array are in sorted order. Insertion sort's advantage is that it only scans as many elements as it needs in order to place the <math>k+1</math>st element, while selection sort must scan all remaining elements to find the <math>k+1</math>st element.
Simple calculation shows that insertion sort will therefore usually perform about half as many comparisons as selection sort, although it can perform just as many or far fewer depending on the order the array was in prior to sorting. It can be seen as an advantage for some [[real-time computing|real-time]] applications that selection sort will perform identically regardless of the order of the array, while insertion sort's running time can vary considerably. However, this is more often an advantage for insertion sort in that it runs much more efficiently if the array is already sorted or "close to sorted."
While selection sort is preferable to insertion sort in terms of number of writes (<math>n-1</math> swaps versus up to <math>n(n-1)/2</math> swaps, with each swap being two writes), this is roughly twice the theoretical minimum achieved by [[cycle sort]], which performs at most ''n'' writes. This can be important if writes are significantly more expensive than reads, such as with [[EEPROM]] or [[Flash memory]], where every write lessens the lifespan of the memory.
Selection sort can be implemented without unpredictable branches for the benefit of CPU [[branch predictor]]s, by finding the ___location of the minimum with branch-free code and then performing the swap unconditionally.
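A rough sketch of this idea is shown below (the function name and coding style are illustrative, not taken from a particular reference implementation): the update of the minimum's index is written as a conditional expression, which compilers commonly lower to a conditional-move instruction rather than a data-dependent branch, and the final swap is performed whether or not it is needed.

<syntaxhighlight lang="c">
/* Sketch of a branch-reduced selection sort: the inner loop updates jMin with
   a conditional expression (often compiled to a conditional move instead of a
   branch), and the swap at the end of each pass is unconditional. */
void selection_sort_branchfree(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int jMin = i;
        for (int j = i + 1; j < n; j++)
            jMin = (a[j] < a[jMin]) ? j : jMin;

        int t = a[i];        /* unconditional swap; a harmless self-copy when jMin == i */
        a[i] = a[jMin];
        a[jMin] = t;
    }
}
</syntaxhighlight>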
Finally, selection sort is greatly outperformed on larger arrays by <math>\Theta(n \log n)</math> [[divide-and-conquer algorithm|divide-and-conquer]] algorithms such as [[merge sort|mergesort]]. However, insertion sort or selection sort are both typically faster for small arrays (i.e. fewer than 10–20 elements). A useful optimization in practice for the recursive algorithms is to switch to insertion sort or selection sort for "small enough" sublists.
== Variants ==
[[Heapsort]] has been described as "nothing but an implementation of selection sort using the right [[data structure]]."<ref>{{cite book |first=Steven |last=Skiena |author-link=Steven Skiena |title=The Algorithm Design Manual |edition=3rd |publisher=Springer |year=2008 |page=116 |chapter=Searching and Sorting |isbn=978-3-030-54255-9 |quote=The name typically given to this algorithm, ''heapsort'', obscures the fact that the algorithm is nothing but an implementation of selection sort using the right data structure. |doi=10.1007/978-3-030-54256-6_4}}<!--DOI for chapter--></ref> It greatly improves the basic algorithm by using an [[implicit data structure|implicit]] [[heap (data structure)|heap]] data structure to speed up finding and removing the lowest datum: if implemented correctly, the heap allows finding the next lowest element in <math>\Theta(\log n)</math> time instead of <math>\Theta(n)</math> for the inner loop of normal selection sort, reducing the total running time to <math>\Theta(n \log n)</math>.
A bidirectional variant of selection sort (called '''double selection sort''' or sometimes '''cocktail sort''' due to its similarity to [[cocktail shaker sort]]) finds ''both'' the minimum and maximum values in the list in every pass. This requires three comparisons per two items (a pair of elements is compared, then the greater is compared to the maximum and the lesser is compared to the minimum) rather than regular selection sort's one comparison per item, but requires only half as many passes, a net 25% savings.
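A minimal sketch of this bidirectional variant follows; for brevity it finds the minimum and maximum with two comparisons per element rather than the pair-wise scheme (three comparisons per two elements) described above, and the function name and corner-case handling are this sketch's own choices rather than a reference implementation.

<syntaxhighlight lang="c">
/* Sketch of double selection sort: each pass finds both the minimum and the
   maximum of a[lo..hi] and moves them to the two ends, halving the number of
   passes compared with ordinary selection sort. */
void double_selection_sort(int a[], int n)
{
    for (int lo = 0, hi = n - 1; lo < hi; lo++, hi--)
    {
        int iMin = lo, iMax = lo;
        for (int j = lo + 1; j <= hi; j++)
        {
            if (a[j] < a[iMin]) iMin = j;
            if (a[j] > a[iMax]) iMax = j;
        }
        int t = a[lo]; a[lo] = a[iMin]; a[iMin] = t;   /* minimum to the left end */
        if (iMax == lo)                                /* the maximum was just moved */
            iMax = iMin;
        t = a[hi]; a[hi] = a[iMax]; a[iMax] = t;       /* maximum to the right end */
    }
}
</syntaxhighlight>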
last := length(A) - 1;
{ The first iteration is written to look very similar to
  the subsequent ones, but without swaps. }
nextMax := A[last];
for i := last - 1 downto 0 do
    if A[i] > nextMax then
        nextMax := A[i];
while (last > 0) and (A[last] = nextMax) do
    last := last - 1;

while last > 0 do begin
{ Each main loop searches for the new nextMax while
swapping items equal to prevMax into place. }
prevMax := nextMax;
nextMax := A[last];
{{sorting}}
[[Category:Comparison sorts]]
[[Category:Articles with example C code]]