In [[computer science]], '''non-blocking synchronization''' ensures that [[thread (software engineering)|thread]]s competing for a shared [[resource (computer science)|resource]] do not have their [[execution (computers)|execution]] indefinitely postponed by [[mutual exclusion]]. Literature up to the turn of the century used "non-blocking" synonymously with ''lock-free'': an algorithm with guaranteed system-wide progress. Since 2003,<ref name="obs-free">M. Herlihy, V. Luchangco and M. Moir. [http://www.cs.brown.edu/people/mph/HerlihyLM03/main.pdf "Obstruction-Free Synchronization: Double-Ended Queues as an Example."] 23rd International Conference on Distributed Computing Systems, 2003, p. 522.</ref> however, the term has been weakened to mean only the prevention of progress-blocking interactions with a [[Computer multitasking|preemptive scheduler]].
 
In modern usage, therefore, an algorithm is ''non-blocking'' if the suspension of one or more threads cannot stop the potential progress of the remaining threads. Such algorithms avoid the need for a [[critical section]]: they often allow multiple processes to make progress on a problem without ever blocking one another, and for some operations they provide an alternative to [[locking mechanism]]s.
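For illustration, the following simplified C++ sketch shows a non-blocking (lock-free) stack push built on [[compare-and-swap]], in the style of the classic Treiber stack; it is an example written for this article rather than code from any cited source, and pop and safe memory reclamation are omitted.

<syntaxhighlight lang="cpp">
#include <atomic>

// A node of a simple singly linked stack.
struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

// Non-blocking push: no thread ever waits on a lock. A failed
// compare-and-swap means another thread's push just succeeded, so the
// system as a whole makes progress even if this thread is suspended
// at any point.
void push(int value) {
    Node* n = new Node{value, head.load(std::memory_order_relaxed)};
    // On failure, compare_exchange_weak stores the freshly observed head
    // into n->next, so the loop simply retries against the new head.
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
    }
}
</syntaxhighlight>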
 
 
== Motivation ==
== Wait-freedom ==
Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with [[Resource starvation|starvation]]-freedom. An algorithm is wait-free if every operation has a bound on the number of steps it will take before completing.
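As a minimal illustration, a shared counter whose increment is a single hardware fetch-and-add completes in a bounded number of steps regardless of what other threads do, and is therefore wait-free. The C++ sketch below is illustrative and assumes the platform implements <code>fetch_add</code> as one hardware instruction (such as x86's LOCK XADD) rather than emulating it with a retry loop.

<syntaxhighlight lang="cpp">
#include <atomic>

std::atomic<long> counter{0};

// Wait-free increment: exactly one atomic fetch-and-add per call, so the
// number of steps is bounded no matter how other threads behave. The
// bound relies on the hardware providing fetch-and-add directly; on
// platforms that emulate it with a compare-and-swap loop it is lost.
long increment() {
    return counter.fetch_add(1, std::memory_order_relaxed) + 1;
}
</syntaxhighlight>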
 
It was shown in the 1980s that all algorithms can be implemented wait-free, and many transformations from serial code, called ''universal constructions'', have been demonstrated. However, the resulting performance does not in general match even naive blocking designs. It has also been shown<ref name="cond-sync">F. Fich, D. Hendler, N. Shavit. [http://www.cs.tau.ac.il/~afek/Handler-conditionals.pdf "On the inherent weakness of conditional synchronization primitives."] 23rd Annual ACM Symposium on Principles of Distributed Computing, 2004, pp. 80-87.</ref> that the widely available [[Linearizability|atomic]] ''conditional'' primitives, [[compare-and-swap]] and [[LL/SC]], cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads. Wait-free algorithms are therefore rare, both in research and in practice.
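To see why the conditional primitives lead naturally to lock-free rather than wait-free code, consider this illustrative C++ sketch of an increment built from compare-and-swap: some thread's CAS always succeeds, but there is no bound on how many times a particular thread may fail and retry.

<syntaxhighlight lang="cpp">
#include <atomic>

std::atomic<long> counter{0};

// Lock-free but not wait-free: if the CAS fails, some other thread has
// just succeeded, so the system makes progress; however, an unlucky
// thread can keep losing the race indefinitely, so there is no
// per-thread bound on steps (no starvation-freedom).
void increment() {
    long observed = counter.load(std::memory_order_relaxed);
    while (!counter.compare_exchange_weak(observed, observed + 1,
                                          std::memory_order_relaxed)) {
        // observed now holds the value that caused the failure; retry.
    }
}
</syntaxhighlight>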
 
== Lock-freedom ==
== Obstruction-freedom ==
Obstruction-freedom demands only that any partially completed operation can be aborted and its changes rolled back. Dropping concurrent assistance often results in much simpler algorithms that are easier to validate. Preventing the system from continually [[livelock|live-locking]] is the task of a ''contention manager''.
 
Recent research<ref name="polka">W. Scherer and M. Scott. [http://www.cs.rochester.edu/u/scott/papers/2005_PODC_CM.pdf "Advanced Contention Management for Dynamic Software Transactional Memory."] 24th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, 2005, pp. 240-248.</ref> has yielded a promising practical contention manager, whimsically named ''Polka'', which combines [[exponential backoff]] with "priority accumulation". As an operation progresses, it gains "priority"; when it is obstructed by an operation with higher priority, it backs off, with backoff intervals increasing exponentially. Each backoff increases the operation's priority; only when its priority exceeds that of its obstructor will it abort the obstructor. Aborted operations retain their former priority, giving their next attempt a greater chance of success.
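As a rough illustration of the policy described above (not the paper's exact algorithm), a Polka-style contention manager can be pictured as two hooks: one called as the operation makes progress, and one called on conflict, which either aborts the obstructor or backs off for a randomized, exponentially growing interval. All names in this C++ sketch are invented for the example.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <atomic>
#include <chrono>
#include <random>
#include <thread>

// Sketch of a Polka-style contention manager: priority accumulation plus
// randomized exponential backoff. Class and method names are illustrative.
class ContentionManager {
    std::atomic<long> priority{0};  // grows as the operation makes progress
    int backoffs = 0;               // consecutive backoffs against obstructors

public:
    // Called whenever the operation completes a unit of work.
    void on_progress() { priority.fetch_add(1, std::memory_order_relaxed); }

    long current_priority() const {
        return priority.load(std::memory_order_relaxed);
    }

    // Called when this operation is obstructed by another. Returns true
    // if the obstructing operation should be aborted, false if this
    // operation backed off and should retry.
    bool on_conflict(const ContentionManager& obstructor) {
        if (current_priority() > obstructor.current_priority())
            return true;  // higher priority than the obstructor: abort it

        // Randomized backoff whose expected length doubles each time,
        // after which this operation gains one unit of priority.
        ++backoffs;
        static thread_local std::mt19937 rng{std::random_device{}()};
        long limit = 1L << std::min(backoffs, 16);
        std::uniform_int_distribution<long> wait(0, limit - 1);
        std::this_thread::sleep_for(std::chrono::microseconds(wait(rng)));
        on_progress();
        return false;
    }
};
</syntaxhighlight>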
 
Polka achieves good throughput in benchmarks because it minimizes both wasted effort, by prioritizing long transactions, and contention on the memory interconnect, by using exponential backoff. Other parallel algorithms, such as lock-free ones, can apply the same techniques to achieve greater throughput in the common case.
== See also ==
* [[Concurrency control]]
* [[Linearizability]]
 
==References==
<references/>
 
==External links==
*[http://citeseer.ist.psu.edu/cis?q=Fast+and+Practical+Non-Blocking+Concurrent+Synchronization Citations from CiteSeer]
*Discussion "[http://groups.google.com/groups?group=comp.programming.threads&threadm=c2s1qn%24mrj%247%40newsserv.zdv.uni-tuebingen.de Communication between Threads, without blocking]"
 
 
[[Category:Synchronization]]