== Parallelization ==
Compared to alternative sorting algorithms such as [[merge sort]], quicksort has some disadvantages that complicate its efficient parallelization. The depth of quicksort's divide-and-conquer tree directly impacts the algorithm's scalability, and this depth is highly dependent on the algorithm's choice of pivot. Additionally, it is difficult to parallelize the partitioning step efficiently in-place. The use of scratch space simplifies the partitioning step, but increases the algorithm's memory footprint and constant overheads.
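The simplest approach sidesteps the partitioning problem entirely: partition sequentially, then sort the two halves concurrently. The following is a minimal illustrative sketch of that fork-join strategy (it is not any cited algorithm); the function name <code>parallel_quicksort</code> and the depth cutoff are assumptions made for the example.

<syntaxhighlight lang="cpp">
#include <future>
#include <utility>
#include <vector>

// Illustrative fork-join quicksort: partition sequentially (Lomuto scheme),
// then sort the two disjoint halves concurrently until a depth cutoff.
// The sequential partition remains the scalability bottleneck described above.
void parallel_quicksort(std::vector<int>& a, int lo, int hi, int depth) {
    if (lo >= hi) return;
    int pivot = a[hi];                      // last element as pivot
    int i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);                 // pivot is now in its final position i

    if (depth > 0) {
        // Sort the left half on another thread while this thread sorts the right half.
        auto left = std::async(std::launch::async, parallel_quicksort,
                               std::ref(a), lo, i - 1, depth - 1);
        parallel_quicksort(a, i + 1, hi, depth - 1);
        left.wait();
    } else {
        parallel_quicksort(a, lo, i - 1, 0);
        parallel_quicksort(a, i + 1, hi, 0);
    }
}
</syntaxhighlight>

A caller might invoke this as <code>parallel_quicksort(v, 0, (int)v.size() - 1, 3)</code>, bounding the number of spawned tasks to roughly 2<sup>3</sup>; because each recursive call works on a disjoint index range, the two halves can be sorted without synchronization beyond the final <code>wait()</code>.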
Other, more sophisticated parallel sorting algorithms can achieve even better time bounds.<ref>{{cite book | first1=Russ | last1=Miller | first2=Laurence | last2=Boxer | title=Algorithms sequential & parallel: a unified approach | url=https://books.google.com/books?id=dZoZAQAAIAAJ| year=2000 | publisher=Prentice Hall | isbn=978-0-13-086373-7}}</ref> For example, in 1991 [[David M. W. Powers]] described a parallelized quicksort (and a related [[radix sort]]) that can operate in {{math|''O''(log ''n'')}} time on a [[Parallel random-access machine#Read/write conflicts|CRCW]] (concurrent read and concurrent write) [[Parallel Random Access Machine|PRAM]] (parallel random-access machine) with {{mvar|n}} processors by performing partitioning implicitly.<ref>{{cite conference | first=David M. W. | last=Powers | title=Parallelized Quicksort and Radixsort with Optimal Speedup |conference=Proc. Int'l Conf. on Parallel Computing Technologies | year=1991 |citeseerx=10.1.1.57.9071}}</ref>
== Formal analysis ==