Non-blocking algorithm

{{Short description|Class of algorithms}}
{{distinguish|Non-blocking I/O}}
 
In [[computer science]], an [[algorithm]] is called '''non-blocking''' if failure or [[Scheduling (computing)|suspension]] of any [[thread (computing)|thread]] cannot cause failure or suspension of another thread;<ref>{{cite book|last1=Goetz|first1=Brian|last2=Peierls|first2=Tim|last3=Bloch|first3=Joshua|last4=Bowbeer|first4=Joseph|last5=Holmes|first5=David|last6=Lea|first6=Doug|title=Java concurrency in practice|date=2006|publisher=Addison-Wesley|___location=Upper Saddle River, NJ|isbn=9780321349606|page=[https://archive.org/details/javaconcurrencyi00goet/page/41 41]|url-access=registration|url=https://archive.org/details/javaconcurrencyi00goet/page/41}}</ref> for some operations, these algorithms provide a useful alternative to traditional [[lock (computer science)|blocking implementations]]. A non-blocking algorithm is '''lock-free''' if there is guaranteed system-wide [[Resource starvation|progress]], and '''wait-free''' if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.<ref name=obs-free>{{cite conference|last1=Herlihy|first1=M.|last2=Luchangco|first2=V.|last3=Moir|first3=M.|title=Obstruction-Free Synchronization: Double-Ended Queues as an Example|conference=23rd [[International Conference on Distributed Computing Systems]]|year=2003|pages=522|url=http://www.cs.brown.edu/people/mph/HerlihyLM03/main.pdf}}</ref>
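
The distinction between lock-free and wait-free progress can be illustrated with a minimal Java sketch (an illustration, not taken from the cited sources): the increment below is retried with compare-and-set (CAS) until it succeeds, so a CAS can only fail because some other thread's CAS succeeded and the system as a whole always makes progress (lock-free); an individual thread, however, may in principle retry indefinitely, so the loop is not wait-free.

<syntaxhighlight lang="java">
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a lock-free counter built on a compare-and-set (CAS) retry loop.
// The CAS only fails when another thread's CAS succeeded, so some thread always
// completes its increment (system-wide progress). A single thread can keep losing
// the race and retrying, so per-thread progress is not guaranteed (not wait-free).
class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;   // our update took effect
            }
            // CAS failed: another thread updated the counter first; retry.
        }
    }
}
</syntaxhighlight>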
 
The word "non-blocking" was traditionally used to describe [[telecommunications network]]s that could route a connection through a set of relays "without having to re-arrange existing calls", see [[Clos network]]. Also, if the telephone exchange "is not defective, it can always make the connection", see [[nonblocking minimal spanning switch]].
 
== Motivation ==
Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as [[deadlock]], [[livelock]], and [[priority inversion]]. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for [[parallel computing|parallelism]], and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.
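
A deadlock of the kind mentioned above can arise from nothing more than inconsistent lock ordering. The following Java fragment is a hedged, hypothetical sketch (the class and method names are illustrative, not from the cited sources): if one thread calls <code>transfer(a, b)</code> while another calls <code>transfer(b, a)</code>, each thread can acquire its first lock and then wait forever for the other's.

<syntaxhighlight lang="java">
// Hypothetical sketch of a lock-ordering deadlock with blocking synchronization.
// Thread 1: transfer(a, b) locks a, then waits for b.
// Thread 2: transfer(b, a) locks b, then waits for a.
// Neither thread can ever proceed; a non-blocking design cannot exhibit this failure.
class Account {
    private int balance;

    Account(int balance) { this.balance = balance; }

    static void transfer(Account from, Account to, int amount) {
        synchronized (from) {            // first lock, order depends on the caller
            synchronized (to) {          // second lock, may be held by another thread
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
</syntaxhighlight>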
 
Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use in [[interrupt handler]]s: even though the [[Pre-emptive multitasking|preempted]] thread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock, but this can be rectified easily by masking the interrupt request during the critical section.<ref name="monit">{{cite journal | doi = 10.1145/358818.358824 | url = http://research.microsoft.com/lampson/23-ProcessesInMesa/Abstract.html | title = Experience with Processes and Monitors in Mesa | author = Butler W. Lampson | author-link = Butler W. Lampson | author2 = David D. Redell | author2-link = David D. Redell | journal = Communications of the ACM | volume = 23 | issue = 2 | pages = 105–117 | date = February 1980 | citeseerx = 10.1.1.142.5765 }}</ref>
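
Why a suspended thread does not impede others can be seen in a minimal sketch of the well-known Treiber stack, a lock-free stack whose every update is a single CAS on the head pointer (the rendering below is illustrative, not taken from the cited sources). A thread suspended between reading the head and attempting its CAS holds no lock and leaves the stack in a consistent state, so any other thread can still complete its push or pop.

<syntaxhighlight lang="java">
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a Treiber stack: a lock-free stack whose only shared mutable state is
// an atomically updated head reference. A thread suspended mid-operation holds no
// lock, so the remaining threads can keep pushing and popping without waiting for it.
class TreiberStack<T> {
    private static final class Node<E> {
        final E item;
        Node<E> next;
        Node(E item) { this.item = item; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    void push(T item) {
        Node<T> newHead = new Node<>(item);
        Node<T> oldHead;
        do {
            oldHead = head.get();
            newHead.next = oldHead;
        } while (!head.compareAndSet(oldHead, newHead));   // retry if head changed
    }

    T pop() {
        Node<T> oldHead;
        Node<T> newHead;
        do {
            oldHead = head.get();
            if (oldHead == null) {
                return null;                                // stack is empty
            }
            newHead = oldHead.next;
        } while (!head.compareAndSet(oldHead, newHead));    // retry if head changed
        return oldHead.item;
    }
}
</syntaxhighlight>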
 
A lock-free data structure can be used to improve performance: because access to the shared data does not have to be serialized through a lock, more of the work can proceed in parallel, which is particularly beneficial on multi-core processors.
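
As a usage sketch, the standard Java class <code>java.util.concurrent.ConcurrentLinkedQueue</code>, which its documentation describes as based on the Michael–Scott non-blocking queue algorithm, lets several threads enqueue concurrently without any of them blocking on a lock (the thread counts and element counts below are arbitrary choices for illustration).

<syntaxhighlight lang="java">
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Usage sketch: two producer threads share one non-blocking queue.
// offer() never blocks; a contended thread retries a CAS instead of waiting on a lock.
public class NonBlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        Queue<Integer> queue = new ConcurrentLinkedQueue<>();

        Runnable producer = () -> {
            for (int i = 0; i < 1000; i++) {
                queue.offer(i);          // lock-free enqueue
            }
        };

        Thread t1 = new Thread(producer);
        Thread t2 = new Thread(producer);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("Elements enqueued: " + queue.size());
    }
}
</syntaxhighlight>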