{{short description|Executing several computations during overlapping time periods}}
{{for multi|the American computer company|Concurrent Computer Corporation|a more theoretical discussion|Concurrency (computer science)}}
{{more citations needed|date=February 2014}}
{{Programming paradigms}}
 
'''Concurrent computing''' is a form of [[computing]] in which several [[computation]]s are executed ''[[Concurrency (computer science)|concurrently]]''—during overlapping time periods—instead of ''sequentially''—with one completing before the next starts.
 
This is a property of a system—whether a [[computer program|program]], [[computer]], or a [[computer network|network]]—where there is a separate execution point or "thread of control" for each process. A ''concurrent system'' is one where a computation can advance without waiting for all other computations to complete.<ref>''Operating System Concepts'' 9th edition, Abraham Silberschatz. "Chapter 4: Threads"</ref>
 
Concurrent computing is a form of [[modular programming]]. In its [[programming paradigm|paradigm]] an overall computation is [[decomposition (computer science)|factored]] into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include [[Edsger Dijkstra]], [[Per Brinch Hansen]], and [[C.A.R. Hoare]].<ref>{{Cite book |url=https://link.springer.com/book/10.1007/978-1-4757-3472-0 |title=The Origin of Concurrent Programming |year=2002 |language=en |doi=10.1007/978-1-4757-3472-0|isbn=978-1-4419-2986-0 |s2cid=44909506 |editor-last1=Hansen |editor-first1=Per Brinch }}</ref>
 
==Introduction==
The concept of concurrent computing is frequently confused with the related but distinct concept of [[parallel computing]],<ref name=waza/><ref>{{cite web
 |title=Parallelism vs. Concurrency
 |work=Haskell Wiki
}}</ref> although both can be described as "multiple processes executing ''during the same period of time''". In parallel computing, execution occurs at the same physical instant: for example, on separate [[central processing unit|processors]] of a [[multi-processor]] machine, with the goal of speeding up computations—parallel computing is impossible on a ([[Multi-core processor|one-core]]) single processor, as only one computation can occur at any instant (during any single clock cycle).{{efn|This is discounting parallelism internal to a processor core, such as pipelining or vectorized instructions. A one-core, one-processor ''machine'' may be capable of some parallelism, such as with a [[coprocessor]], but the processor alone is not.}} By contrast, concurrent computing consists of process ''lifetimes'' overlapping, but execution does not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.<ref>{{cite book |last=Schneider |first=Fred B. |url=https://archive.org/details/onconcurrentprog0000schn |title=On Concurrent Programming |date=1997-05-06 |publisher=Springer |isbn=9780387949420 |url-access=registration}}</ref>{{rp|1}}
 
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via [[time-sharing]] slices: only one process runs at a time, and if it does not complete during its time slice, it is ''paused'', another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.{{citation needed|date=December 2016}}
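
A minimal sketch of such interleaving is shown below, using a cooperative round-robin loop over two invented tasks, so that only one step executes at any instant while both tasks are in progress at the same time (the task bodies and the hand-written "scheduler" are purely illustrative, not a standard API):

<syntaxhighlight lang="cpp">
#include <cstdio>
#include <functional>
#include <vector>

int main() {
    // Each "process" performs one small step per call and reports whether it has finished.
    auto make_counter = [](const char* name, int limit) {
        int i = 0;
        return std::function<bool()>([=]() mutable {
            std::printf("%s: step %d\n", name, i);
            return ++i >= limit;   // true once this process is done
        });
    };

    std::vector<std::function<bool()>> processes = {
        make_counter("A", 3), make_counter("B", 3)
    };

    // Round-robin "scheduler": only one process executes at any instant,
    // yet both are part-way through execution at the same time.
    while (!processes.empty()) {
        for (auto it = processes.begin(); it != processes.end();) {
            it = (*it)() ? processes.erase(it) : it + 1;
        }
    }
}
</syntaxhighlight>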
 
Concurrent computations ''may'' be executed in parallel,<ref name=waza/><ref name="benari2006">{{cite book|last=Ben-Ari|first=Mordechai|title=Principles of Concurrent and Distributed Programming|publisher=Addison-Wesley|year=2006|edition=2nd|isbn=978-0-321-31283-9}}</ref> for example, by assigning each process to a separate processor or processor core, or [[Distributed computing|distributing]] a computation across a network. In general, however, the languages, tools, and techniques for parallel programming might not be suitable for concurrent programming, and vice versa.{{citation needed|date=December 2016}}
 
The exact timing of when tasks in a concurrent system are executed depends on the [[Scheduling (computing)|scheduling]], and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:{{citation needed|date=December 2016}}
 
* T1 may be executed and finished before T2 or ''vice versa'' (serial ''and'' sequential)
 
===Coordinating access to shared resources===
The main challenge in designing concurrent programs is [[concurrency control]]: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.<ref name=benari2006/> Potential problems include [[Race condition#Software|race conditions]], [[Deadlock (computer science)|deadlock]]s, and [[resource starvation]]. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource <code>balance</code>:
 
<syntaxhighlight lang="cpp" line highlight="3,5">
bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
</syntaxhighlight>

Suppose <code>balance = 500</code>, and two concurrent threads make the calls <code>withdraw(300)</code> and <code>withdraw(350)</code>. If line 3 in both operations executes before line 5, both will find that <code>balance >= withdrawal</code> evaluates to <code>true</code>, and both will proceed to subtract a withdrawal amount. Since the total withdrawn then exceeds the original balance, the shared resource is left in an inconsistent state. Problems of this kind are addressed with [[concurrency control]] mechanisms such as locking, or with [[non-blocking algorithm]]s.
===Advantages===
{{Unreferenced section|date=December 2006}}
The advantages of concurrent computing include:
 
* Increased program throughput—parallel execution of a concurrent algorithm allows the number of tasks completed in a given time to increase proportionally to the number of processors according to [[Gustafson's law]].<ref>{{Cite book |last=Padua |first=David |title=Encyclopedia of Parallel Computing |publisher=Springer New York, NY |year=2011 |isbn=978-0-387-09765-7 |publication-date=September 8, 2011 |pages=819–825 |language=en}}</ref>
* High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would otherwise be spent waiting to be used for another task (see the sketch after this list).<ref>{{Citation |title=Asynchronous I/O |date=2024-12-20 |work=Wikipedia |url=https://en.wikipedia.org/wiki/Asynchronous_I/O |access-date=2024-12-27 |language=en}}</ref>
* More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes, for example [[Multiversion concurrency control|multiversion concurrency control]] (MVCC).
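
As an illustration of the responsiveness point, the following minimal sketch (the task names and sleep durations are invented for the example) uses <code>std::async</code> to overlap two simulated input/output waits, so the elapsed time is close to the longer wait rather than the sum of both:

<syntaxhighlight lang="cpp">
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

// Stand-in for an input/output operation: the calling thread just waits.
int slow_io(const char* name, int millis) {
    std::this_thread::sleep_for(std::chrono::milliseconds(millis));
    std::printf("%s finished after %d ms\n", name, millis);
    return millis;
}

int main() {
    auto start = std::chrono::steady_clock::now();

    // Launch both "I/O" operations concurrently; their waiting times overlap.
    auto a = std::async(std::launch::async, slow_io, "read", 300);
    auto b = std::async(std::launch::async, slow_io, "write", 200);
    int total = a.get() + b.get();

    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
    // Elapsed time is roughly max(300, 200) ms, not 300 + 200 ms.
    std::printf("did %d ms of waiting in about %lld ms\n", total, (long long)elapsed);
}
</syntaxhighlight>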
 
==Models==
Several models for understanding and analyzing concurrent computing systems have been developed.

Introduced in 1962, [[Petri net]]s were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and [[Dataflow architecture]]s were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, [[process calculi]] such as the [[Calculus of Communicating Systems]] (CCS) and [[Communicating Sequential Processes]] (CSP) were developed to permit algebraic reasoning about systems composed of interacting components; later calculi include the [[join-calculus]] and the [[ambient calculus]]. The [[pi calculus|π-calculus]] added the capability for reasoning about dynamic topologies. The [[actor model]] (and the related [[object-capability model]] for security) treats a computation as a collection of independent actors that interact only by exchanging messages. [[Input/output automaton|Input/output automata]] were introduced in 1987.

Logics such as Lamport's [[Temporal logic of actions|TLA+]], and mathematical models such as [[Trace theory|traces]] and [[Actor model theory|Actor event diagrams]], have also been developed to describe the behavior of concurrent systems.

[[Software transactional memory]] (STM) borrows from [[Database management system|database theory]] the concept of [[Atomic commit|atomic transactions]] and applies them to memory accesses.

===Consistency models===
{{main|Consistency model}}
Concurrent programming languages and multiprocessor programs must have a [[consistency model]] (also known as a memory model). The consistency model defines rules for how operations on [[Computer data storage|computer memory]] occur and how results are produced.
 
One of the first consistency models was [[Leslie Lamport]]'s [[sequential consistency]] model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".<ref>{{cite journal|last=Lamport|first=Leslie|title=How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs|journal=IEEE Transactions on Computers|date=1 September 1979|volume=C-28|issue=9|pages=690–691|doi=10.1109/TC.1979.1675439|s2cid=5679366}}</ref>
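
The guarantee can be illustrated with the classic store-buffering litmus test, sketched below with two hypothetical threads (the example is illustrative and not taken from Lamport's paper). With sequentially consistent atomic operations, the outcome in which both threads read 0 is impossible; with relaxed atomic operations, or plain non-atomic variables, it would be permitted:

<syntaxhighlight lang="cpp">
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> x{0}, y{0};   // default atomic operations are sequentially consistent
int r1 = 0, r2 = 0;

int main() {
    std::thread t1([] { x.store(1); r1 = y.load(); });
    std::thread t2([] { y.store(1); r2 = x.load(); });
    t1.join();
    t2.join();

    // Under sequential consistency some single interleaving of the four
    // operations must exist, so at least one load observes the other
    // thread's store: r1 == 0 && r2 == 0 cannot happen. With
    // std::memory_order_relaxed (or ordinary variables) both loads
    // could return 0.
    assert(r1 == 1 || r2 == 1);
}
</syntaxhighlight>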
 
{{see also|Relaxed sequential}}
 
==Implementation==
;Shared memory communication: Concurrent components communicate by altering the contents of [[shared memory (interprocess communication)|shared memory]] locations (exemplified by [[Java (programming language)|Java]] and [[C Sharp (programming language)|C#]]). This style of concurrent programming usually requires some form of locking (e.g., [[Mutual exclusion|mutexes]], [[Semaphore (programming)|semaphores]], or [[Monitor (synchronization)|monitors]]) to coordinate between threads. A program that properly implements any of these is said to be [[Thread safety|thread-safe]].
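
For instance, the withdrawal example above becomes thread-safe under this model if every access to the shared balance is guarded by a mutex, as in the following minimal sketch (the single global lock is just one possible design):

<syntaxhighlight lang="cpp">
#include <mutex>
#include <thread>

int balance = 500;
std::mutex balance_mutex;   // protects every access to the shared balance

bool withdraw(int withdrawal) {
    std::lock_guard<std::mutex> guard(balance_mutex);  // released automatically
    if (balance >= withdrawal) {
        balance -= withdrawal;
        return true;
    }
    return false;
}

int main() {
    // Even if both withdrawals run concurrently, the balance never goes negative.
    std::thread t1(withdraw, 300);
    std::thread t2(withdraw, 350);
    t1.join();
    t2.join();
}
</syntaxhighlight>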
 
;Message passing communication: Concurrent components communicate by [[message passing|exchanging messages]] (exemplified by [[Open MPI|MPI]], [[Go (programming language)|Go]], [[Scala (programming language)|Scala]], [[Erlang (programming language)|Erlang]] and [[occam (programming language)|occam]]). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming.{{Citation needed|date=May 2013}} A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the [[actor model]], and various [[process calculi]]. Message passing can be efficiently implemented via [[symmetric multiprocessing]], with or without shared memory [[cache coherence]].
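
Message passing can be sketched in C++ by hiding all shared state inside a small channel abstraction through which components exchange messages (a toy unbounded channel assembled from standard-library parts; real systems would use MPI, Erlang mailboxes, Go channels, or similar):

<syntaxhighlight lang="cpp">
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// A toy channel: the only shared state is hidden inside the channel itself.
template <typename T>
class Channel {
public:
    void send(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();
    }
    T receive() {                       // blocks until a message arrives
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    Channel<int> requests;
    // Producer and consumer communicate only by exchanging messages.
    std::thread producer([&] { for (int i = 1; i <= 3; ++i) requests.send(i); });
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i)
            std::printf("received %d\n", requests.receive());
    });
    producer.join();
    consumer.join();
}
</syntaxhighlight>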
 
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
Concurrent computing developed out of earlier work on railroads and [[telegraphy]] in the 19th and early 20th centuries, and some terms, such as semaphores, date to this period. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via [[time-division multiplexing]] (1870s).
 
The academic study of concurrent algorithms started in the 1960s, with {{Harvtxt|Dijkstra|1965}} credited as the first paper in this field, identifying and solving [[mutual exclusion]].<ref>{{Cite report |url=http://www.podc.org/influential/2002.html |title=PODC Influential Paper Award: 2002 |work=ACM Symposium on Principles of Distributed Computing |access-date=2009-08-24}}</ref>
 
==Prevalence==
 
=={{anchor|Concurrent programming languages|Languages supporting concurrent programming}}Languages supporting concurrent programming==
<!-- This section is linked from [[occam (programming language)]] and [[COPL]] -->
[[List of concurrent programming languages|Concurrent programming languages]] are programming languages that use language constructs for [[concurrency (computer science)|concurrency]]. These constructs may involve [[Thread (computer science)|multi-threading]], support for [[distributed computing]], [[message passing programming|message passing]], [[sharing|shared resources]] (including [[Parallel Random Access Machine|shared memory]]) or [[futures and promises]]. Such languages are sometimes described as ''concurrency-oriented languages'' or ''concurrency-oriented programming languages'' (COPL).<ref name="armstrong2003">{{cite web |last1=Armstrong |first1=Joe |year=2003 |title=Making reliable distributed systems in the presence of software errors |url=http://www.diva-portal.org/smash/get/diva2:9492/FULLTEXT01.pdf |archive-url=https://web.archive.org/web/20160415213739/http://www.diva-portal.org/smash/get/diva2:9492/FULLTEXT01.pdf |archive-date=2016-04-15}}</ref>

Today, the most commonly used programming languages that have specific constructs for concurrency are [[Java (programming language)|Java]] and [[C Sharp (programming language)|C#]]. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by [[Monitor (synchronization)|monitors]] (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, [[Erlang (programming language)|Erlang]] was probably the most widely used in industry as of 2010.{{Citation needed|date=August 2010}}
 
Many concurrent programming languages have been developed as research languages (e.g., [[Pict (programming language)|Pict]]) rather than as languages for production use. However, languages such as [[Erlang (programming language)|Erlang]], [[Limbo (programming language)|Limbo]], and [[occam (programming language)|occam]] have seen industrial use at various times over the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities:
 
* [[Ada (programming language)|Ada]]—general purpose, with native support for message passing and monitor based concurrency
* [[Axum (programming language)|Axum]]—___domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax
* [[BMDFM]]—Binary Modular DataFlow Machine
* [[C++]]—thread and coroutine support libraries<ref>{{Cite web |title=Standard library header <thread> (C++11) |url=https://en.cppreference.com/w/cpp/header/thread |access-date=2024-10-03 |website=en.cppreference.com}}</ref><ref>{{Cite web |title=Standard library header <coroutine> (C++20) |url=https://en.cppreference.com/w/cpp/header/coroutine |access-date=2024-10-03 |website=en.cppreference.com}}</ref>
* [[Cω]] (C omega)—for research, extends C#, uses asynchronous communication
* [[C Sharp (programming language)|C#]]—supports concurrent computing using {{Mono|lock}} and {{Mono|yield}}; since version 5.0, the {{Mono|async}} and {{Mono|await}} keywords
* [[Clojure]]—modern, [[functional programming|functional]] dialect of [[Lisp (programming language)|Lisp]] on the [[Java (software platform)|Java]] platform
* [[Concurrent Clean]]—functional programming, similar to [[Haskell (programming language)|Haskell]]
* [[Concurrent Collections]] (CnC)—achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
* [[Concurrent Haskell]]—lazy, pure functional language operating concurrent processes on shared memory
* [[Eiffel (programming language)|Eiffel]]—through its [[SCOOP (software)|SCOOP]] mechanism based on the concepts of Design by Contract
* [[Elixir (programming language)|Elixir]]—dynamic, functional, meta-programming-aware language running on the Erlang VM
* [[Erlang (programming language)|Erlang]]—uses synchronous or asynchronous message passing with no shared memory
* [[FAUST (programming language)|FAUST]]—real-time functional, for signal processing, compiler provides automatic parallelization via [[OpenMP]] or a specific [[Cilk#Work-stealing|work-stealing]] scheduler
* [[Fortran]]—[[Coarray Fortran|coarrays]] and ''do concurrent'' are part of Fortran 2008 standard
* [[Go (programming language)|Go]]—for system programming, with a concurrent programming model based on [[Communicating sequential processes|CSP]]
* [[Haskell programming language|Haskell]]—concurrent and parallel functional programming language<ref>{{cite book |last=Marlow |first=Simon |year=2013 |title=Parallel and Concurrent Programming in Haskell: Techniques for Multicore and Multithreaded Programming |publisher=O'Reilly Media |isbn=9781449335946}}</ref>
* [[Hume (programming language)|Hume]]—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channel patterns and message passing
* [[Io (programming language)|Io]]—actor-based concurrency
* [[Java (programming language)|Java]]—thread class or Runnable interface
* [[Julia (programming language)|Julia]]—"concurrent programming primitives: Tasks, async-wait, Channels."<!--parallel programming primitives: adding physical processes, remote call, spawn, @parallel macro and pmap
Parallelism provided in library land such as MPI.jl--><ref>{{Cite web |title=Concurrent and Parallel programming in Julia — JuliaCon India 2015 — HasGeek Talkfunnel |url=https://juliacon.talkfunnel.com/2015/21-concurrent-and-parallel-programming-in-julia |archive-url=https://web.archive.org/web/20161018061906/https://juliacon.talkfunnel.com/2015/21-concurrent-and-parallel-programming-in-julia |archive-date=2016-10-18 |website=juliacon.talkfunnel.com}}</ref>
* [[JavaScript]]—via [[web worker]]s, in a browser environment, [[Futures and promises|promises]], and [[Callback (computer programming)|callbacks]].
* [[JoCaml]]—concurrent and distributed channel based, extension of [[OCaml]], implements the [[join-calculus]] of processes
* [[LabVIEW]]—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language
* [[Limbo (programming language)|Limbo]]—relative of [[Alef (programming language)|Alef]], for system programming in [[Inferno (operating system)|Inferno]]
* [[Locomotive BASIC]]—Amstrad variant of BASIC; contains EVERY and AFTER commands for concurrent subroutines
* [[MultiLisp]]—[[Scheme (programming language)|Scheme]] variant extended to support parallelism
* [[Modula-2]]—for system programming, by N. Wirth as a successor to Pascal with native support for coroutines
* [[occam (programming language)|occam]]—influenced heavily by [[communicating sequential processes]] (CSP)
** [[occam-π]]—a modern variant of [[occam (programming language)|occam]], which incorporates ideas from Milner's [[π-calculus]]
* [[Object REXX|ooRexx]]—object-based, message exchange for communication and synchronization
* [[Orc (programming language)|Orc]]—heavily concurrent, nondeterministic, based on [[Kleene algebra]]
* [[Oz (programming language)|Oz-Mozart]]—multiparadigm, supports shared-state and message-passing concurrency, and futures
* [[ParaSail (programming language)|ParaSail]]—object-oriented, parallel, free of pointers and race conditions
* [[PHP]]—multithreading support with parallel extension implementing message passing inspired by [[Go (programming language)|Go]]<ref>{{Cite web |title=PHP: parallel - Manual |url=https://www.php.net/manual/en/book.parallel.php |access-date=2024-10-03 |website=www.php.net |language=en}}</ref>
* [[Pict (programming language)|Pict]]—essentially an executable implementation of Milner's [[π-calculus]]
* [[Python (programming language)|Python]]—uses thread-based and process-based parallelism in the standard library;<ref>[https://docs.python.org/3/library/concurrency.html Documentation » The Python Standard Library » Concurrent Execution]</ref> microthread concurrency is also available through the [[Stackless Python]] variant
* [[Raku (programming language)|Raku]]—includes classes for threads, promises and channels by default<ref>{{Cite web |url=https://docs.perl6.org/language/concurrency |title=Concurrency |website=docs.perl6.org |language=en |access-date=2017-12-24}}</ref>
* [[Reia (programming language)|Reia]]—uses asynchronous message passing between shared-nothing objects
* [[Red (programming language)|Red/System]]—for system programming, based on [[Rebol]]
* [[Rust (programming language)|Rust]]—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory.<ref name="bblum2012">{{cite web |url=http://winningraceconditions.blogspot.com/2012/09/rust-4-typesafe-shared-mutable-state.html |title=Typesafe Shared Mutable State |last1=Blum |first1=Ben |year=2012 |access-date=2012-11-14}}</ref>
* [[Scala (programming language)|Scala]]—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way
* [[SequenceL]]—general purpose functional, main design objectives are ease of programming, code clarity and readability, and automatic parallelization for performance on multicore hardware; provably free of [[race condition]]s
* [[SR language|SR]]—for research
* [[SuperPascal]]—concurrent, for teaching, built on [[Concurrent Pascal]] and [[Joyce (programming language)|Joyce]] by [[Per Brinch Hansen]]
* [[Swift (programming language)|Swift]]—built-in support for writing asynchronous and parallel code in a structured way<ref>{{cite web|url=https://docs.swift.org/swift-book/LanguageGuide/Concurrency.html|title=Concurrency|language=en|access-date=2022-12-15|year=2022}}</ref>
* [[Unicon (programming language)|Unicon]]—for research
* [[TNSDL]]—for developing telecommunication exchanges, uses asynchronous message passing
* [[Flow-based programming]]
* [[Java ConcurrentMap]]
* [[List of important publications in concurrent, parallel, and distributed computing]]
* [[Ptolemy Project]]
* {{Section link|Race condition|Computing}}
* [[Structured concurrency]]
* [[Transaction processing]]
 
==Sources==
{{refbegin}}
* {{Cite book |last1=Patterson |first1=David A. |last2=Hennessy |first2=John L. |year=2013 |title=Computer Organization and Design: The Hardware/Software Interface |edition=5 |series=The Morgan Kaufmann Series in Computer Architecture and Design |publisher=Morgan Kaufmann |isbn=978-0-12407886-4}}
{{refend}}
 
==Further reading==
* {{Cite journal |last=Dijkstra |first=E. W. |author-link=Edsger W. Dijkstra |title=Solution of a problem in concurrent programming control |journal=[[Communications of the ACM]] |volume=8 |issue=9 |pages=569 |year=1965 |doi=10.1145/365559.365617 |s2cid=19357737 |doi-access=free}}
* {{cite book |last=Herlihy |first=Maurice |title=The Art of Multiprocessor Programming |year=2008 |publisher=Morgan Kaufmann |isbn=978-0123705914}}
* {{cite book |last=Downey |first=Allen B. |title=The Little Book of Semaphores |year=2005 |url=http://www.greenteapress.com/semaphores/downey08semaphores.pdf |publisher=Green Tea Press |isbn=978-1-4414-1868-5 |access-date=2009-11-21 |archive-url=https://web.archive.org/web/20160304031330/http://www.greenteapress.com/semaphores/downey08semaphores.pdf |archive-date=2016-03-04 |url-status=dead}}
* {{cite book |last=Filman |first=Robert E. |author2=Daniel P. Friedman |title=Coordinated Computing: Tools and Techniques for Distributed Software |publisher=McGraw-Hill |___location=New York |isbn=978-0-07-022439-1 |page=[https://archive.org/details/coordinatedcompu0000film/page/370 370] |year=1984 |url=https://archive.org/details/coordinatedcompu0000film/page/370 }}
* {{cite book |last=Leppäjärvi |first=Jouni |title=A pragmatic, historically oriented survey on the universality of synchronization primitives |year=2008 | url=http://www.enseignement.polytechnique.fr/informatique/INF431/X09-2010-2011/AmphiTHC/SynchronizationPrimitives.pdf |publisher=University of Oulu |access-date=2012-09-13 |archive-date=2017-08-30 |archive-url=https://web.archive.org/web/20170830062719/http://www.enseignement.polytechnique.fr/informatique/INF431/X09-2010-2011/AmphiTHC/SynchronizationPrimitives.pdf |url-status=dead }}
* {{cite book |last=Taubenfeld |first=Gadi |title=Synchronization Algorithms and Concurrent Programming |url=http://www.faculty.idc.ac.il/gadi/book.htm |publisher=Pearson / Prentice Hall |isbn=978-0-13-197259-9 |year=2006 |page=433}}
 
*[https://web.archive.org/web/20060128114620/http://vl.fmnet.info/concurrent/ Concurrent Systems Virtual Library]
 
{{Edsger Dijkstra}}
{{Concurrent computing}}
{{Programming paradigms navbox}}
{{Types of programming languages}}
 
[[Category:Concurrent computing| ]]
[[Category:Operating system technology]]
[[Category:Edsger W. Dijkstra]]
[[Category:Dutch inventions]]