Quicksort: Difference between revisions

 
==== Parallelization ====
Quicksort's divide-and-conquer formulation makes it amenable to [[parallel algorithm|parallelization]] using [[task parallelism]]. The partitioning step is accomplished through the use of a [[prefix sum|parallel prefix sum]] algorithm to compute an index for each array element in its section of the partitioned array.<ref>Umut A. Acar, Guy E Blelloch, Margaret Reid-Miller, and Kanat Tangwongsan, [https://www.cs.cmu.edu/afs/cs/academic/class/15210-s13/www/lectures/lecture19.pdf Quicksort and Sorting Lower Bounds], ''Parallel and Sequential Data Structures and Algorithms''. 2013.</ref><ref>{{Cite journal | title=Quicksort Partition via Prefix Scan | year=2012 | journal=Dr. Dobb's | last=Breshears | first=Clay | url=http://www.drdobbs.com/parallel/quicksort-partition-via-prefix-scan/240003109}}</ref> Given an array of size {{mvar|n}}, the partitioning step performs {{math|''O''(''n'')}} work in {{math|''O''(log ''n'')}} time and requires {{math|''O''(''n'')}} additional scratch space. After the array has been partitioned, the two partitions can be sorted recursively in parallel. Assuming an ideal choice of pivots, parallel quicksort sorts an array of size {{mvar|n}} in {{math|''O''(''n'' log ''n'')}} work in {{math|''O''(log<sup>2</sup> ''n'')}} time using {{math|''O''(''n'')}} additional space.
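The prefix-sum partitioning step described above can be sketched as follows (a sequential Python simulation of the data-parallel formulation; the function and variable names are illustrative, not from the cited sources). Each element's destination index is derived from two exclusive prefix sums over flag vectors, so in an actual parallel setting every scatter write could proceed independently:

```python
from itertools import accumulate

def partition_via_prefix_sum(a, pivot):
    """Partition `a` into (< pivot, >= pivot) using O(n) scratch space.

    Returns the partitioned array and the size of the "< pivot" part.
    """
    less = [1 if x < pivot else 0 for x in a]        # flag vector
    # Exclusive prefix sum: destination of each "< pivot" element.
    left_idx = [0] + list(accumulate(less))[:-1]
    n_less = sum(less)
    # Exclusive prefix sum over the complementary flags for the right part.
    geq = [1 - f for f in less]
    right_idx = [0] + list(accumulate(geq))[:-1]

    out = [None] * len(a)                            # O(n) scratch array
    for i, x in enumerate(a):                        # a parallel-for in spirit
        if less[i]:
            out[left_idx[i]] = x
        else:
            out[n_less + right_idx[i]] = x
    return out, n_less
```

The flag computation and scatter loop are embarrassingly parallel, and the prefix sums themselves can be computed in {{math|''O''(log ''n'')}} parallel time, which is where the partitioning step's depth bound comes from.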
 
Quicksort has some disadvantages when compared to alternative sorting algorithms, like [[merge sort]], which complicate its efficient parallelization. The depth of quicksort's divide-and-conquer tree directly impacts the algorithm's scalability, and this depth is highly dependent on the algorithm's choice of pivot. Additionally, it is difficult to parallelize the partitioning step efficiently in-place. The use of scratch space simplifies the partitioning step, but increases the algorithm's memory footprint and constant overheads.
 
=== Generalization ===
[[Richard J. Cole|Richard Cole]] and David C. Kandathil, in 2004, discovered a one-parameter family of sorting algorithms, called partition sorts, which on average (with all input orderings equally likely) perform at most <math>n\log n + {O}(n)</math> comparisons (close to the information theoretic lower bound) and <math>{\Theta}(n\log n)</math> operations; at worst they perform <math>{\Theta}(n\log^2 n)</math> comparisons (and also operations); these are in-place, requiring only <math>{O}(\log n)</math> additional space. Practical efficiency and smaller variance in performance were demonstrated against optimized quicksorts (of [[Robert Sedgewick (computer scientist)|Sedgewick]] and [[Jon Bentley (computer scientist)|Bentley]]-[[Douglas McIlroy|McIlroy]]).<ref>Richard Cole, David C. Kandathil: [http://www.cs.nyu.edu/cole/papers/part-sort.pdf "The average case analysis of Partition sorts"], European Symposium on Algorithms, 14–17 September 2004, Bergen, Norway. Published: ''Lecture Notes in Computer Science'' 3221, Springer Verlag, pp. 240–251.</ref>
 
== See also ==