{{short description|Concept in computer science, referring to processes or data}}
{{distinguish|Data synchronization}}
{{Use American English|date=October 2020}}
{{More citations needed|date=November 2014}}
 
In [[computer science]], '''synchronization''' is the task of coordinating multiple [[Process (computer science)|processes]] to join up or [[Handshake (computing)|handshake]] at a certain point, in order to reach an agreement or commit to a certain sequence of action.
 
==Motivation==
The need for synchronization does not arise merely in multi-processor systems but for any kind of concurrent processes, even in single-processor systems. Some of the main needs for synchronization are mentioned below:
 
''[[Fork–join model|Forks and Joins]]:'' When a job arrives at a fork point, it is split into ''N'' sub-jobs which are then serviced by ''N'' tasks. After being serviced, each sub-job waits until all other sub-jobs are done processing. Then, they are joined again and leave the system. Thus, parallel programming requires synchronization, as all the parallel processes wait for several other processes to finish.
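A minimal sketch of this fork–join pattern, written with plain Java threads (the sub-job body is a placeholder):

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;

public class ForkJoinSketch {
    public static void main(String[] args) throws InterruptedException {
        int n = 4;                        // the job is forked into n sub-jobs
        List<Thread> subJobs = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int id = i;
            Thread t = new Thread(() ->
                System.out.println("sub-job " + id + " done")); // placeholder work
            subJobs.add(t);
            t.start();                    // fork
        }
        for (Thread t : subJobs) {
            t.join();                     // join: wait for every sub-job to finish
        }
        System.out.println("all sub-jobs joined; job leaves the system");
    }
}
</syntaxhighlight>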
 
''[[Producer–consumer problem|Producer–consumer:]]'' In a producer–consumer relationship, the consumer process is dependent on the producer process until the necessary data has been produced.
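One way to realize this dependency in Java is a bounded <code>java.util.concurrent.BlockingQueue</code>, whose <code>take()</code> blocks the consumer until the producer has produced (the loop bounds are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerSketch {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    buffer.put(i);            // blocks while the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    int item = buffer.take(); // blocks until data has been produced
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
</syntaxhighlight>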
 
''Exclusive use resources:'' When multiple processes are dependent on a resource and they need to access it at the same time, the operating system needs to ensure that only one process accesses it at a given point in time. This reduces concurrency.
 
=={{Anchor|TSync}}Requirements==
[[File:Multiple Processes Accessing the shared resource.png|thumb|'''Figure 1''': Three processes accessing a shared resource ([[critical section]]) simultaneously.]]
 
Thread synchronization is defined as a mechanism which ensures that two or more concurrent [[process (computer science)|processes]] or [[thread (computer science)|threads]] do not simultaneously execute some particular program segment known as a [[critical section]]. Processes' access to a critical section is controlled by using synchronization techniques. When one thread starts executing the [[critical section]] (a serialized segment of the program), the other threads should wait until the first thread finishes. If proper synchronization techniques<ref>{{cite conference|title=More than you ever wanted to know about synchronization: Synchrobench, measuring the impact of the synchronization on concurrent algorithms|author=Gramoli, V.|conference=Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming|pages=1–10|year=2015|publisher=ACM|url=http://sydney.edu.au/engineering/it/~gramoli/doc/pubs/gramoli-synchrobench.pdf}}</ref> are not applied, it may cause a [[race condition#Software|race condition]] where the values of variables may be unpredictable and vary depending on the timings of [[context switch]]es of the processes or threads.
 
For example, suppose that there are three processes, namely 1, 2, and 3. All three of them are concurrently executing, and they need to share a common resource (critical section) as shown in Figure 1. Synchronization should be used here to avoid any conflicts for accessing this shared resource. Hence, when Processes 1 and 2 both try to access that resource, it should be assigned to only one process at a time. If it is assigned to Process 1, the other process (Process 2) needs to wait until Process 1 frees that resource (as shown in Figure 2).
 
[[File:Shared Resource access in synchronization environment.png|thumb|'''Figure 2''': A process accessing a shared resource if available, based on some synchronization technique.]]
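A minimal Java sketch of such exclusive access, using a <code>ReentrantLock</code> to guard the shared resource (the class and field names are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.locks.ReentrantLock;

public class SharedResource {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;       // the shared resource (critical section data)

    public void update() {
        lock.lock();             // the resource is assigned to one thread at a time
        try {
            value++;             // critical section
        } finally {
            lock.unlock();       // free the resource so a waiting thread may proceed
        }
    }
}
</syntaxhighlight>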
 
Another synchronization requirement which needs to be considered is the order in which particular processes or threads should be executed. For example, one cannot board a plane before buying a ticket. Similarly, one cannot check e-mails before validating the appropriate credentials (for example, user name and password). In the same way, an ATM will not provide any service until it receives a correct PIN.
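Such ordering constraints can be enforced with a one-shot signal; a minimal Java sketch using <code>CountDownLatch</code> (the ticket/boarding names are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.CountDownLatch;

public class OrderingSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch ticketBought = new CountDownLatch(1);

        Thread boarding = new Thread(() -> {
            try {
                ticketBought.await();        // one cannot board before buying a ticket
                System.out.println("boarding the plane");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        boarding.start();
        System.out.println("buying the ticket");
        ticketBought.countDown();            // signal that the prerequisite is done
        boarding.join();
    }
}
</syntaxhighlight>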
 
Other than mutual exclusion, synchronization also deals with the following:
* [[Deadlock (computer science)|deadlock]], which occurs when many processes are waiting for a shared resource (critical section) which is being held by some other process. In this case, the processes just keep waiting and execute no further;
* [[Resource starvation|starvation]], which occurs when a process is waiting to enter the critical section, but other processes monopolize the critical section, and the first process is forced to wait indefinitely;
* [[priority inversion]], which occurs when a high-priority process is in the critical section, and it is interrupted by a medium-priority process. This violation of priority rules can happen under certain circumstances and may lead to serious consequences in real-time systems;
* [[busy waiting]], which occurs when a process frequently polls to determine if it has access to a critical section. This frequent polling robs processing time from other processes.
 
==Minimization==
One of the challenges for exascale algorithm design is to minimize or reduce synchronization. Synchronization takes more time than computation, especially in distributed computing. Reducing synchronization has drawn attention from computer scientists for decades, and it has recently become an increasingly significant problem as the gap between the improvement of computing speed and communication latency widens. Experiments have shown that (global) communications due to synchronization on distributed computers take a dominant share of the time in a sparse iterative solver.<ref>{{cite journal|title=Minimizing synchronizations in sparse iterative solvers for distributed supercomputers |author=Shengxin, Zhu and Tongxiang Gu and Xingping Liu|journal=Computers & Mathematics with Applications|volume=67|issue=1|pages=199–209|year=2014|doi=10.1016/j.camwa.2013.11.008|doi-access=free|hdl=10754/668399|hdl-access=free}}</ref> This problem is receiving increasing attention after the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG),<ref>{{cite web|url=http://hpcg-benchmark.org/|title=HPCG Benchmark}}</ref> for ranking the top 500 supercomputers.
 
==Problems==
===Classic problems of synchronization===
The following are some classic problems of synchronization:
* [[Producer–consumer problem|The Producer–Consumer Problem]] (also called The Bounded Buffer Problem);
* [[Readers–writers problem|The Readers–Writers Problem]];
* [[Dining philosophers problem|The Dining Philosophers Problem]];
* [[Sleeping barber problem|The Sleeping Barber Problem]].
These problems are used to test nearly every newly proposed synchronization scheme or primitive.
 
===Overhead===
Synchronization overheads can significantly impact performance in [[parallel computing]] environments, where merging data from multiple processes can incur costs substantially higher—often by two or more orders of magnitude—than processing the same data on a single thread, primarily due to the additional overhead of [[inter-process communication]] and synchronization mechanisms.<ref>{{Cite book |title=Operating System Concepts |isbn=978-0470128725 |last1=Silberschatz |first1=Abraham |last2=Galvin |first2=Peter B. |last3=Gagne |first3=Greg |date=29 July 2008 |publisher=Wiley }}</ref><ref>{{Cite book |title=Computer Organization and Design MIPS Edition: The Hardware/Software Interface (The Morgan Kaufmann Series in Computer Architecture and Design) |date=2013 |publisher=Morgan Kaufmann |isbn=978-0124077263}}</ref><ref>{{Cite book |title=Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers |date=2005 |publisher=Pearson |isbn=978-0131405639}}</ref>
 
==Hardware synchronization==
Many systems provide hardware support for [[critical section]] code.
 
On a single processor or [[uniprocessor system]], disabling [[interrupt]]s is a way to execute currently running code without [[Preemption (computing)|preemption]], but this approach is very inefficient on [[Multiprocessing|multiprocessor]] systems.<ref name="Wiley2014">{{cite book|last1=Silberschatz|first1=Abraham|last2=Gagne|first2=Greg|last3=Galvin|first3=Peter Baer|title=Operating System Concepts|date=July 11, 2008|publisher=John Wiley & Sons.|isbn=978-0-470-12872-5|edition=Eighth|chapter=Chapter 6: Process Synchronization}}</ref>
"The key ability we require to implement synchronization in a multiprocessor is a set of [[Hardware primitive|hardware primitives]] with the ability to atomically read and modify a memory ___location. Without such a capability, the cost of building basic synchronization primitives will be too high and will increase as the processor count increases. There are a number of alternative formulations of the basic hardware primitives, all of which provide the ability to atomically read and modify a ___location, together with some way to tell if the read and write were performed atomically. These hardware primitives are the basic building blocks that are used to build a wide variety of user-level synchronization operations, including things such as [[Lock (computer science)|locks]] and [[Barrier (computer science)|barriers]]. In general, architects do not expect users to employ the basic hardware primitives, but instead expect that the primitives will be used by system programmers to build a synchronization library, a process that is often complex and tricky."<ref name="Morgan2011">{{cite book|last1=Hennessy|first1=John L.|last2=Patterson|first2=David A.|title=Computer Architecture: A Quantitative Approach|date=September 30, 2011|publisher=Morgan Kaufmann|isbn=978-0-123-83872-8|edition=Fifth|chapter=Chapter 5: Thread-Level Parallelism}}</ref> Many modern pieces of hardware providesprovide specialsuch atomic hardware instructions, bytwo eithercommon examples being: [[test-and-set]], thewhich operates on a single memory word, orand [[compare-and-swap]], which swaps the contents of two memory words.
 
==Support in programming languages==
In [[Java (programming language)|Java]], one way to prevent thread interference and memory consistency errors is by prefixing a method signature with the '''synchronized''' keyword, in which case the lock of the declaring object is used to enforce synchronization. A second way is to wrap a block of code in a ''synchronized(someObject){...}'' section, which offers finer-grain control. This forces any thread to acquire the lock of ''someObject'' before it can execute the contained block. The lock is automatically released when the thread which acquired the lock leaves this block or enters a waiting state within the block. Any variable updates, made by a thread in a synchronized block, become visible to other threads when they similarly acquire the lock and execute the block. For either implementation, any object may be used to provide a lock because all Java objects have an ''intrinsic lock'' or ''monitor lock'' associated with them when instantiated.<ref>{{cite web|title=Intrinsic Locks and Synchronization|url=https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html|website=The Java Tutorials|publisher=Oracle|access-date=10 November 2023}}</ref>
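A brief sketch of both forms (the <code>Counter</code> class is hypothetical):

<syntaxhighlight lang="java">
public class Counter {
    private final Object lock = new Object();
    private int count = 0;

    // Method-level form: the intrinsic lock of the declaring object
    // ("this") is acquired for the duration of the whole method.
    public synchronized void increment() {
        count++;
    }

    // Block-level form: finer-grain control, using the intrinsic lock
    // of an explicitly chosen object.
    public void incrementWithBlock() {
        synchronized (lock) {
            count++;
        }
    }
}
</syntaxhighlight>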
 
Java ''synchronized'' blocks, in addition to enabling mutual exclusion and memory consistency, enable signaling—i.e., sending events from threads which have acquired the lock and are executing the code block to those which are waiting for the lock within the block. Java ''synchronized'' sections, therefore, combine the functionality of [[Lock (computer science)|mutexes]] and [[Event (synchronization primitive)|events]]. Such a construct is known as a [[Monitor (synchronization)|synchronization monitor]].
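Signaling within a monitor is expressed with <code>wait()</code> and <code>notifyAll()</code>; a minimal sketch (the <code>Mailbox</code> class is hypothetical):

<syntaxhighlight lang="java">
public class Mailbox {
    private String message;       // shared state guarded by the monitor

    public synchronized void put(String m) {
        message = m;
        notifyAll();              // signal threads waiting on this monitor
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) {
            wait();               // release the lock and wait for a signal
        }
        String m = message;
        message = null;
        return m;
    }
}
</syntaxhighlight>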
 
The [[.NET Framework]] also uses synchronization primitives.<ref>{{cite web|title=Overview of synchronization primitives|url=https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives|website=Microsoft Learn|date=September 2022|publisher=Microsoft|access-date=10 November 2023}}</ref> "Synchronization is designed to be cooperative, demanding that every thread or process follow the synchronization mechanism before accessing protected resources (critical section) for consistent results." Locking, signaling, lightweight synchronization types, spinwait and interlocked operations are some of the mechanisms related to synchronization in .NET.<ref>{{cite web|title=Synchronization Primitives in .NET framework|last=Rouse|first=Margaret|url=https://www.techopedia.com/definition/13390/synchronization-dot-net|website=Techopedia|date=19 August 2011|access-date=10 November 2023}}</ref>
 
Many programming languages support synchronization, and entire specialized [[Synchronous programming language|languages]] have been written for [[Embedded software|embedded application]] development where strictly deterministic synchronization is paramount.
 
==Implementation==
 
===Spinlocks===
{{Main article|Spinlock}}
Another effective way of implementing synchronization is by using spinlocks. Before accessing any shared resource or piece of code, every processor checks a flag. If the flag is reset, then the processor sets the flag and continues executing the thread. But, if the flag is set (locked), the threads keep spinning in a loop, checking whether the flag is set or not. Spinlocks are effective only if the flag is reset after a small number of cycles; otherwise, they can lead to performance issues, since spinning wastes many processor cycles.<ref>{{Cite book|title=Embedded Software Development with ECos|last=Massa|first=Anthony|publisher=Pearson Education Inc|year=2003|isbn=0-13-035473-2}}</ref>
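A minimal spinlock sketch in Java, built on an atomic flag (the class name is illustrative; <code>Thread.onSpinWait()</code> requires Java 9 or later):

<syntaxhighlight lang="java">
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean flag = new AtomicBoolean(false);

    public void lock() {
        // Keep spinning until the flag is observed reset and we manage to set it.
        while (!flag.compareAndSet(false, true)) {
            Thread.onSpinWait();  // hint to the processor that this is a busy-wait
        }
    }

    public void unlock() {
        flag.set(false);          // reset the flag so a spinning thread may enter
    }
}
</syntaxhighlight>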
 
===Barriers===
{{Main article|Barrier (computer science)}}
Barriers are simple to implement and provide good responsiveness. They are based on the concept of implementing wait cycles to provide synchronization. Consider three threads running simultaneously, starting from barrier 1. After time t, thread 1 reaches barrier 2, but it still has to wait for threads 2 and 3 to reach barrier 2 as it does not have the correct data. Once all the threads reach barrier 2, they all start again. After time t, thread 1 reaches barrier 3, but it will have to wait for threads 2 and 3 and the correct data again.
 
Thus, in barrier synchronization of multiple threads there will always be a few threads that end up waiting for other threads, as in the above example where thread 1 keeps waiting for threads 2 and 3. This results in severe degradation of the process performance.<ref name=":0">{{cite book | doi=10.1109/HPCC.2014.148 | chapter=A Speculative Mechanism for Barrier Synchronization | title=2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS) | date=2014 | last1=Meng | first1=Jinglei | last2=Chen | first2=Tianzhou | last3=Pan | first3=Ping | last4=Yao | first4=Jun | last5=Wu | first5=Minghui | pages=858–865 | isbn=978-1-4799-6123-8 }}</ref>
 
The barrier synchronization wait function for the ''i''th thread can be represented as:

<math>(W_\text{barrier})_i = f((T_\text{barrier})_i, (R_\text{thread})_i)</math>

where <math>(W_\text{barrier})_i</math> is the wait time for a thread, <math>(T_\text{barrier})_i</math> is the number of threads that have arrived, and <math>(R_\text{thread})_i</math> is the arrival rate of threads.<ref>{{Cite book |doi=10.1109/ICIEV.2012.6317471 |isbn=978-1-4673-1154-0|chapter=Process synchronization in multiprocessor and multi-core processor|title=2012 International Conference on Informatics, Electronics & Vision (ICIEV)|pages=554–559|year=2012|last1=Rahman|first1=Mohammed Mahmudur|s2cid=8134329 }}</ref>
 
Experiments show that 34% of the total execution time is spent in waiting for other slower threads.<ref name=":0" />
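A minimal Java sketch of barrier synchronization using <code>CyclicBarrier</code> (the thread bodies are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierSketch {
    public static void main(String[] args) {
        final int parties = 3;
        CyclicBarrier barrier = new CyclicBarrier(parties,
            () -> System.out.println("all threads reached the barrier"));

        for (int i = 1; i <= parties; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("thread " + id + " working");
                    barrier.await();   // wait until every thread has arrived
                    System.out.println("thread " + id + " continues");
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
</syntaxhighlight>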
 
===Semaphores===
{{Main article|Semaphore (programming)}}
Semaphores are signalling mechanisms which can allow one or more threads/processors to access a section. A semaphore has a counter with a certain fixed initial value, and each time a thread wishes to access the section, it decrements the counter. Similarly, when the thread leaves the section, the counter is incremented. If the counter is zero, the thread cannot access the section and gets blocked if it chooses to wait.
Some semaphores allow only one thread or process into the code section. Such semaphores are called binary semaphores and are very similar to a mutex. Here, if the value of the semaphore is 1, the thread is allowed to access, and if the value is 0, the access is denied.<ref>{{Cite book|title=Real-Time Concepts for Embedded Systems|last=Li, Yao|first=Qing, Carolyn|publisher=CMP Books|year=2003|isbn=978-1578201242}}</ref>
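A counting-semaphore sketch using <code>java.util.concurrent.Semaphore</code>; a permit count of 1 would make it behave as a binary semaphore (the thread bodies are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.Semaphore;

public class SemaphoreSketch {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(2);   // at most 2 threads in the section

        for (int i = 1; i <= 4; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    semaphore.acquire();          // decrements the counter; blocks at zero
                    try {
                        System.out.println("thread " + id + " in section");
                        Thread.sleep(100);        // simulate work in the section
                    } finally {
                        semaphore.release();      // increments the counter on leaving
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
</syntaxhighlight>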
 
== Distributed transaction ==
In [[Event-driven architecture|event-driven architectures]], synchronous transactions can be achieved by using a [[Request–response|request–response]] paradigm, which can be implemented in two ways (a minimal sketch follows the list below):<ref name=":02">{{Cite book |last=Richards |first=Mark |title=Fundamentals of Software Architecture: An Engineering Approach |date=2020 |publisher=O'Reilly Media |isbn=978-1492043454}}</ref>
 
* Creating two separate [[Message queue|queues]]: one for requests and the other for replies. The event producer must wait until it receives the response.
* Creating one dedicated ephemeral [[Message queue|queue]] for each request.
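A minimal sketch of the first variant, with two in-memory queues standing in for broker-managed request and reply queues (all names are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RequestResponseSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> replies  = new LinkedBlockingQueue<>();

        Thread consumerService = new Thread(() -> {
            try {
                String request = requests.take();    // receive a request
                replies.put("reply to " + request);  // publish the response
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumerService.start();

        requests.put("request-1");                   // the producer sends a request...
        String reply = replies.take();               // ...and waits for the response
        System.out.println(reply);
        consumerService.join();
    }
}
</syntaxhighlight>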
 
==Mathematical foundations==
Synchronization was originally a process-based concept whereby a lock could be obtained on an object. Its primary usage was in databases. There are two types of (file) [[File locking|lock]]: read-only and read–write. Read-only locks may be obtained by many processes or threads, whereas read–write locks are exclusive, as they may only be used by a single process/thread at a time.
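A minimal Java sketch of the two lock types, using <code>ReentrantReadWriteLock</code> (class and field names are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedRecord {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String data = "";

    public String read() {
        lock.readLock().lock();      // many readers may hold the read lock at once
        try {
            return data;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(String newData) {
        lock.writeLock().lock();     // the write lock is exclusive
        try {
            data = newData;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
</syntaxhighlight>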
 
An abstract mathematical foundation for synchronization primitives is given by the [[history monoid]]. There are also many higher-level theoretical devices, such as [[process calculi]] and [[Petri net]]s, which can be built on top of the history monoid.
 
==Examples==
The following are some synchronization examples with respect to different platforms.<ref name="Wiley2012">{{cite book|last1=Silberschatz|first1=Abraham|last2=Gagne|first2=Greg|last3=Galvin|first3=Peter Baer|title=Operating System Concepts|date=December 7, 2012|publisher=John Wiley & Sons.|isbn=978-1-118-06333-0|edition=Ninth|chapter=Chapter 5: Process Synchronization}}</ref>
 
===In Windows===
[[Windows]] provides:
* [[interrupt|interrupt masks]], which protect access to global resources (critical section) on uniprocessor systems;
* [[dynamic dispatch]]ers{{citation needed|date=June 2022}}, which act like [[mutual exclusion|mutex]]es, [[Semaphore (programming)|semaphores]], [[Event (computing)|event]]s, and [[timer]]s.
 
===In Linux===
[[Linux]] provides:
* [[semaphore (programming)|semaphores]];
Enabling and disabling of kernel preemption replaced spinlocks on uniprocessor systems. Prior to kernel version 2.6, [[Linux]] disabled interrupts to implement short critical sections. Since version 2.6, Linux is fully preemptive.
 
===In Solaris===
[[Solaris (operating system)|Solaris]] provides:
* [[Semaphore (programming)|semaphores]];
* [[condition variable]]s;
* adaptive [[Lock (computer science)|mutexes]], binary semaphores that are implemented differently depending upon the conditions;<ref>{{cite web|url=https://docs.oracle.com/cd/E19253-01/817-6223/chp-lockstat-2/index.html|title=Adaptive Lock Probes|website=Oracle Docs}}</ref>
* readers–writer locks;
* [[turnstiles]], queues of threads waiting on an acquired lock.<ref>{{cite web|url=http://sunsite.uakom.sk/sunworldonline/swol-08-1999/swol-08-insidesolaris.html|title=Turnstiles and priority inheritance - SunWorld - August 1999|first=Jim|last=Mauro|website=sunsite.uakom.sk}}</ref>
 
===In Pthreads===
[[Pthreads]] is a platform-independent [[API]] that provides:
* mutexes;
* [[condition variable]]s;
* readers–writer locks;
* spinlocks;
* [[barrier (computer science)|barrier]]s.
 
==Data synchronization==
{{Main article|Data synchronization}}
[[File:Data Synchronization.png|thumb|'''Figure 3: '''Changes from both server and client(s) are synchronized.]]
 
A distinctly different (but related) concept is that of [[data synchronization]]. This refers to the need to update and keep multiple copies of a set of data coherent with one another or to maintain [[data integrity]], as shown in Figure 3.<ref>{{Cite journal |last1=Nakatani |first1=Kazuo |last2=Chuang |first2=Ta-Tao |last3=Zhou |first3=Duanning |date=2006 |title=Data Synchronization Technology: Standards, Business Values and Implications |url=http://dx.doi.org/10.17705/1cais.01744 |journal=Communications of the Association for Information Systems |volume=17 |doi=10.17705/1cais.01744 |issn=1529-3181}}</ref> For example, database replication is used to keep multiple copies of data synchronized with database servers that store data in different locations.
 
Examples include:
* [[File synchronization]], such as syncing a hand-held MP3 player to a desktop computer;
* [[Cluster file system]]s, which are [[file system]]s that maintain data or indexes in a coherent fashion across a whole [[computing cluster]];
* [[Cache coherency]], maintaining multiple copies of data in sync across multiple [[cache (computing)|cache]]s;
* [[RAID]], where data is written in a redundant fashion across multiple disks, so that the loss of any one disk does not lead to a loss of data;
* [[Database replication]], where copies of data on a [[database]] are kept in sync, despite possible large geographical separation;
* [[Journaling file system|Journaling]], a technique used by many modern file systems to make sure that file metadata are updated on a disk in a coherent, consistent manner.
 
===Challenges in data synchronization===
Some of the challenges which a user may face in data synchronization are:
* data formats complexity;
* real-timeliness;
* data security;
* data quality;
* performance.
 
====Data formats complexity====
Data formats tend to grow more complex with time as the organization grows and evolves. As a result, it is no longer enough to build simple interfaces between the two applications (source and target); the data must also be transformed while passing it to the target application. [[Extract, transform, load|ETL]] (extract, transform, load) tools can be helpful at this stage for managing data format complexities.
 
====Real-timeliness====
In real-time systems, customers want to see the current status of their order in an e-shop, the current status of a parcel delivery (real-time parcel tracking), the current balance on their account, etc. This shows the need for a real-time system, which is updated as well to enable a smooth manufacturing process in real time, e.g., ordering material when the enterprise is running out of stock, or synchronizing customer orders with the manufacturing process. In real life, there are many examples where real-time processing provides a successful competitive advantage.
 
====Data security====
There are no fixed rules and policies to enforce data security; it may vary depending on the system being used. Even though security is maintained correctly in the source system which captures the data, the security and information access privileges must be enforced on the target systems as well to prevent any potential misuse of the information. This is a serious issue, particularly when it comes to handling secret, confidential and personal information. Because of the sensitivity and confidentiality, data transfer and all in-between information must be encrypted.
 
====Data quality====
Data quality is another serious constraint. For better management and to maintain good quality of data, the common practice is to store the data at one ___location and share it with different people and different systems and/or applications from different locations. It helps in preventing inconsistencies in the data.
 
====Performance====
There are five different phases involved in the data synchronization process:
* [[data extraction]] from the source (or master, or main) system;
* [[data transfer]];
* [[data transformation]];
* data load to the target system;
* data update.
 
Each of these steps is critical. In case of large amounts of data, the synchronization process needs to be carefully planned and executed to avoid any negative impact on performance.
 
==See also==
* [[Futures and promises]], synchronization mechanisms in pure functional paradigms
* [[Memory barrier]]
 
==References==
{{reflist}}
* {{cite book | last=Schneider | first=Fred B. | title=On concurrent programming | publisher=Springer-Verlag New York, Inc.| date=1997 | isbn=978-0-387-94942-0}}
 
[[Category:Computer-mediated communication]]
[[Category:Synchronization|Computer science]]
[[Category:Edsger W. Dijkstra]]