{{Short description|Sorting algorithm}}
{{refimprove|date=October 2017}}
{{Infobox Algorithm
|data=[[Array data structure|Array]]
|time=<math>O(n^2)</math>
|best-time=<math>O(n\log n)</math>
|average-time=<math>O(n\log n)</math>
|space=<math>O(n)</math>
|optimal=?
}}
'''Library sort''', or '''gapped insertion sort''', is a [[sorting algorithm]] that uses an [[insertion sort]], but with gaps in the array to accelerate subsequent insertions. The name comes from an analogy:
<blockquote>Suppose a librarian were to store their books alphabetically on a long shelf, starting with the As at the left end, and continuing to the right along the shelf with no spaces between the books until the end of the Zs. If the librarian acquired a new book that belongs to the B section, once they find the correct space in the B section, they will have to move every book over, from the middle of the Bs all the way down to the Zs in order to make room for the new book. This is an insertion sort. However, if they were to leave a space after every letter, as long as there was still space after B, they would only have to move a few books to make room for the new one. This is the basic principle of the Library Sort.</blockquote>
 
The algorithm was proposed by [[Michael A. Bender]], [[Martín Farach-Colton]], and [[Miguel Mosteiro]] in 2004<ref>{{cite arXiv |eprint=cs/0407003 |title=Insertion Sort is O(n log n) |date=1 July 2004 |last1=Bender |first1=Michael A. |last2=Farach-Colton |first2=Martín |authorlink2=Martin Farach-Colton |last3=Mosteiro |first3=Miguel A.}}</ref> and was published in 2006.<ref name="definition">{{cite journal | journal=Theory of Computing Systems | volume=39 | issue=3 | pages=391–397 | date=June 2006 | last1=Bender | first1=Michael A. | last2=Farach-Colton | first2=Martín | authorlink2 = Martin Farach-Colton | last3=Mosteiro | first3=Miguel A. | title=Insertion Sort is O(n log n) | doi=10.1007/s00224-005-1237-z | url=http://csis.pace.edu/~mmosteiro/pub/paperToCS06.pdf | arxiv=cs/0407003 | s2cid=14701669 | access-date=2017-09-07 | archive-url=https://web.archive.org/web/20170908070035/http://csis.pace.edu/~mmosteiro/pub/paperToCS06.pdf | archive-date=2017-09-08 | url-status=dead }}</ref>
 
Like the insertion sort it is based on, library sort is a [[stable sort|stable]] [[comparison sort]] and can be run as an [[online algorithm]]; however, it was shown to have a high probability of running in O(n log n) time (comparable to [[quicksort]]), rather than an insertion sort's O(n<sup>2</sup>). The mechanism used for this improvement is very similar to that of a [[skip list]]. The paper gives neither a full implementation nor exact algorithms for important parts such as insertion and rebalancing, so further information would be needed to discuss how the efficiency of library sort compares to that of other sorting methods in practice.
 
Compared to basic insertion sort, the drawback of library sort is that it requires extra space for the gaps. The amount and distribution of that space would depend on the implementation. In the paper the size of the needed array is ''(1 + ε)n'',<ref name="definition" /> but with no further recommendations on how to choose ε. Moreover, it is neither adaptive nor stable. In order to warrant the high-probability time bounds, it must randomly permute the input, which changes the relative order of equal elements and shuffles any presorted input. Also, the algorithm uses binary search to find the insertion point for each element, which does not take advantage of presorted input.
 
Another drawback is that it cannot be run as an [[online algorithm]], because it is not possible to randomly shuffle the input. If used without this shuffling, it could easily degenerate into quadratic behaviour.
 
One weakness of [[insertion sort]] is that it may require a high number of swap operations and be costly if memory write is expensive. Library sort may improve that somewhat in the insertion step, as fewer elements need to move to make room, but also adds an extra cost in the rebalancing step. In addition, locality of reference will be poor compared to [[mergesort]], as each insertion from a random data set may access memory that is no longer in cache, especially with large data sets.
 
==Implementation==
 
# '''Binary Search''': Finding the position of insertion by applying binary search within the already inserted elements. If the search probes an empty space, it moves linearly toward the left or right side of the array until it reaches an element it can compare against (see the sketch below).
# '''Insertion''': Inserting the element in the position found and shifting the following elements right by one position until an empty space is hit. This is done in logarithmic time, with high probability.
# '''Re-Balancing''': Inserting spaces between each pair of elements in the array. The cost of a rebalancing pass is linear in the number of elements already inserted; since that number doubles with each round, the total cost of rebalancing over all rounds is also linear.
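
For illustration, one search-and-insert step can be sketched in Python as follows, with gaps represented by <code>None</code>. The function name and calling convention are chosen for this example only and are not taken from the paper:

<syntaxhighlight lang="python">
def insert_one(gapped, value, limit):
    """One library-sort insertion step on a gapped array.

    `gapped` holds already-sorted elements interleaved with gaps (None);
    occupied slots appear only at indices below `limit`, and at least one
    gap is assumed to remain at or after the insertion point.
    """
    # Step 1: binary search; when a probe lands on a gap, walk linearly
    # toward the lower end of the window until an element (or the window
    # boundary) is reached.
    lo, hi = 0, limit
    while lo < hi:
        mid = (lo + hi) // 2
        probe = mid
        while probe > lo and gapped[probe] is None:
            probe -= 1
        if gapped[probe] is None or gapped[probe] < value:
            lo = mid + 1
        else:
            hi = probe
    # Step 2: place the value at the position found and shift followers
    # right by one slot each, until a gap absorbs the displaced element.
    pos, carried = lo, value
    while carried is not None:
        gapped[pos], carried = carried, gapped[pos]
        pos += 1
    return lo
</syntaxhighlight>

For example, on the gapped array <code>[1, None, 5, None, 9, None]</code>, inserting <code>4</code> finds index 2 and only moves the <code>5</code> into the adjacent gap, rather than shifting every later element as a plain insertion sort would.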
 
===Pseudocode===
 '''procedure''' rebalance(A, begin, end) '''is'''
     r ← end
     w ← end × 2
     '''while''' r ≥ begin '''do'''
         A[w+1] ← gap
         A[w] ← A[r]
         A[w-1] ← gap
         r ← r − 1
         w ← w − 2
 
 '''procedure''' sort(A) '''is'''
     n ← length(A)
     S ← new array of n gaps
     '''for''' i ← 1 to floor(log2(n) + 1) '''do'''
         rebalance(S, 1, 2^(i − 1))
         '''for''' j ← 2^(i − 1) to 2^i '''do'''
             ins ← binarysearch(A[j], S, 2^i)
             insert A[j] at S[ins]
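
The pseudocode above leaves out several details, such as how the supporting array is sized, how the very first element is placed, and how a final round that is not a full power of two is handled. The following Python sketch fills in those details in one possible way; it follows the same doubling-and-rebalancing scheme but is an illustration rather than the authors' reference implementation, and all helper names are invented for this example:

<syntaxhighlight lang="python">
import random

GAP = None  # sentinel marking an empty slot


def _find_and_insert(s, value, limit):
    """Binary-search s[0:limit] (gaps allowed), then shift right into the
    nearest gap.  Returns the new exclusive bound of the occupied area."""
    lo, hi = 0, limit
    while lo < hi:
        mid = (lo + hi) // 2
        probe = mid
        while probe > lo and s[probe] is GAP:
            probe -= 1                       # step off gaps, toward lo
        if s[probe] is GAP or s[probe] < value:
            lo = mid + 1
        else:
            hi = probe
    pos, carried = lo, value
    while carried is not GAP:                # shift until a gap absorbs it
        s[pos], carried = carried, s[pos]
        pos += 1
    return max(limit, pos)


def _rebalance(s):
    """Respread all occupied slots so every element is followed by one gap."""
    packed = [x for x in s if x is not GAP]
    for i in range(len(s)):
        s[i] = packed[i // 2] if i % 2 == 0 and i // 2 < len(packed) else GAP
    return 2 * len(packed)                   # new exclusive occupied bound


def library_sort(items):
    """Gapped insertion sort in doubling rounds, as in the pseudocode above."""
    a = list(items)
    n = len(a)
    if n <= 1:
        return a
    random.shuffle(a)                 # the O(n log n) analysis assumes random order
    s = [GAP] * (2 * n)               # supporting array: one gap per element
    s[0] = a[0]
    inserted = 1
    while inserted < n:
        limit = _rebalance(s)                    # fresh gaps for this round
        batch = min(inserted, n - inserted)      # insert as many as already present
        for j in range(inserted, inserted + batch):
            limit = _find_and_insert(s, a[j], limit)
        inserted += batch
    return [x for x in s if x is not GAP]


print(library_sort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
</syntaxhighlight>

In this sketch the supporting array has 2n slots (ε = 1 in the terms used above), each rebalancing pass leaves exactly one gap after every element, and the input is shuffled first, as required for the high-probability running-time analysis.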
 
==References==
{{Reflist}}

{{sorting}}
 
[[Category:Sorting algorithms]]
[[Category:Comparison sorts]]
[[Category:Stable sorts]]