Synchronization (computer science)

{{More citations needed|date=November 2014}}
 
In [[computer science]], '''synchronization''' is the task of coordinating multiple [[Process (computer science)|processes]] to join up or [[Handshake (computing)|handshake]] at a certain point, in order to reach an agreement or commit to a certain sequence of actions.
 
==Motivation==
''[[Producer–consumer problem|Producer-Consumer:]]'' In a producer-consumer relationship, the consumer process is dependent on the producer process until the necessary data has been produced.
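As a minimal sketch of this dependency in Java (the buffer capacity and item values here are illustrative), a bounded <code>java.util.concurrent.BlockingQueue</code> makes the consumer block in <code>take()</code> until the producer has supplied data:

<syntaxhighlight lang="java">
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        // Bounded buffer shared by the producer and consumer threads.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    buffer.put(i); // blocks while the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    // blocks until the producer has produced an item
                    System.out.println("Consumed " + buffer.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
</syntaxhighlight>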
 
''Exclusive use resources:'' When multiple processes are dependent on a resource and they need to access it at the same time, the operating system needs to ensure that only one process accesses it at a given point in time. This reduces concurrency.
 
=={{Anchor|TSync}}Requirements==
[[File:Multiple Processes Accessing the shared resource.png|thumb|'''Figure 1''': Three processes accessing a shared resource ([[critical section]]) simultaneously.]]
 
Thread synchronization is defined as a mechanism which ensures that two or more concurrent [[process (computer science)|processes]] or [[thread (computer science)|threads]] do not simultaneously execute some particular program segment known as a [[critical section]]. Processes' access to a critical section is controlled by using synchronization techniques. When one thread starts executing the [[critical section]] (a serialized segment of the program), the other threads should wait until the first thread finishes. If proper synchronization techniques<ref>{{cite conference|title=More than you ever wanted to know about synchronization: Synchrobench, measuring the impact of the synchronization on concurrent algorithms|author=Gramoli, V.|conference=Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming|pages=1–10|year=2015|publisher=ACM|url=http://sydney.edu.au/engineering/it/~gramoli/doc/pubs/gramoli-synchrobench.pdf}}</ref> are not applied, this may cause a [[race condition#Software|race condition]], in which the values of variables may be unpredictable and vary depending on the timings of [[context switch]]es of the processes or threads.
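As a minimal sketch of such a race in Java (the iteration count is arbitrary), two threads below increment a shared counter without synchronization; because <code>counter++</code> is a non-atomic read-modify-write, updates can be lost and the final value varies between runs:

<syntaxhighlight lang="java">
public class RaceConditionDemo {
    static int counter = 0; // shared variable, deliberately unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Rarely prints 200000: interleaved updates are lost.
        System.out.println("Final value: " + counter);
    }
}
</syntaxhighlight>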
 
For example, suppose that there are three processes, namely 1, 2, and 3. All three of them are concurrently executing, and they need to share a common resource (critical section) as shown in Figure 1. Synchronization should be used here to avoid any conflicts for accessing this shared resource. Hence, when Processes 1 and 2 both try to access that resource, it should be assigned to only one process at a time. If it is assigned to Process 1, the other process (Process 2) needs to wait until Process 1 frees that resource (as shown in Figure 2).
 
[[File:Shared Resource access in synchronization environment.png|thumb|'''Figure 2''': A process accessing a shared resource if available, based on some synchronization technique.]]
 
Another synchronization requirement which needs to be considered is the order in which particular processes or threads should be executed. For example, one cannot board a plane before buying a ticket. Similarly, one cannot check e-mails before validating the appropriate credentials (for example, user name and password). In the same way, an ATM will not provide any service until it receives a correct PIN.
 
Other than mutual exclusion, synchronization also deals with the following:
==Minimization==
One of the challenges for exascale algorithm design is to minimize or reduce synchronization.
Synchronization takes more time than computation, especially in distributed computing. Reducing synchronization has drawn the attention of computer scientists for decades, and it has recently become an increasingly significant problem as the gap between the improvement of computing speed and latency widens. Experiments have shown that (global) communications due to synchronization on distributed computers take a dominant share of the time in a sparse iterative solver.<ref>{{cite journal|title=Minimizing synchronizations in sparse iterative solvers for distributed supercomputers |author=Shengxin, Zhu and Tongxiang Gu and Xingping Liu|journal=Computers & Mathematics with Applications|volume=67|issue=1|pages=199–209|year=2014|doi=10.1016/j.camwa.2013.11.008|doi-access=free}}</ref> This problem has received increasing attention since the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG),<ref>{{cite web|url=http://hpcg-benchmark.org/|title=HPCG Benchmark}}</ref> for ranking the top 500 supercomputers.
 
==Classic problems==
Classic problems in synchronization include the [[producer–consumer problem]], the [[readers–writers problem]], and the [[dining philosophers problem]].
 
==Support in programming languages==
In [[Java (programming language)|Java]], one way to prevent thread interference and memory consistency errors is to prefix a method signature with the ''synchronized'' keyword, in which case the lock of the declaring object is used to enforce synchronization. A second way is to wrap a block of code in a ''synchronized(someObject){...}'' section, which offers finer-grained control. This forces any thread to acquire the lock of ''someObject'' before it can execute the contained block. The lock is automatically released when the thread which acquired the lock leaves this block or enters a waiting state within the block. Any variable updates made by a thread in a synchronized block become visible to other threads when they similarly acquire the lock and execute the block. For either implementation, any object may be used to provide a lock because all Java objects have an ''intrinsic lock'' or ''monitor lock'' associated with them when instantiated.<ref>{{cite web|title=Intrinsic Locks and Synchronization|url=https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html|website=The Java Tutorials|publisher=Oracle|access-date=10 November 2023}}</ref>
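A minimal sketch of both forms (the class and method names are illustrative; both guard the counter with the intrinsic lock of <code>this</code>):

<syntaxhighlight lang="java">
public class SynchronizedCounter {
    private int count = 0;

    // Form 1: the whole method body is guarded by the intrinsic lock of 'this'.
    public synchronized void increment() {
        count++;
    }

    // Form 2: a synchronized block guards only the chosen statements,
    // here on the same lock ('this'), which offers finer-grained control.
    public int getAndReset() {
        int snapshot;
        synchronized (this) {
            snapshot = count;
            count = 0;
        }
        return snapshot; // lock already released
    }
}
</syntaxhighlight>

Because both forms lock the same monitor, every access to <code>count</code> is mutually exclusive and the updates are visible across threads.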
 
Java ''synchronized'' blocks, in addition to enabling mutual exclusion and memory consistency, enable signaling—i.e. sending events from threads which have acquired the lock and are executing the code block to those which are waiting for the lock within the block. Java ''synchronized'' sections, therefore, combine the functionality of both [[Lock (computer science)|mutexes]] and [[Event (synchronization primitive)|events]] to ensure synchronization. Such a construct is known as a [[Monitor (synchronization)|synchronization monitor]].
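A minimal sketch of such a monitor (the single-slot mailbox design is illustrative), using <code>Object.wait()</code> to wait for a signal and <code>Object.notifyAll()</code> to send one, both inside ''synchronized'' methods:

<syntaxhighlight lang="java">
public class Mailbox {
    private String message; // null means the slot is empty

    // Consumer: waits until a message arrives, then signals producers.
    public synchronized String take() throws InterruptedException {
        while (message == null) {
            wait(); // releases the lock and waits for a signal
        }
        String result = message;
        message = null;
        notifyAll(); // wake producers waiting for an empty slot
        return result;
    }

    // Producer: waits until the slot is empty, then signals consumers.
    public synchronized void put(String m) throws InterruptedException {
        while (message != null) {
            wait();
        }
        message = m;
        notifyAll(); // wake consumers waiting in take()
    }
}
</syntaxhighlight>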
 
The [[.NET Framework]] also uses synchronization primitives.<ref>{{cite web|title=Overview of synchronization primitives|url=https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives|website=Microsoft Learn|publisher=Microsoft|access-date=10 November 2023}}</ref> "Synchronization is designed to be cooperative, demanding that every thread follow the synchronization mechanism before accessing protected resources for consistent results. Locking, signaling, lightweight synchronization types, spinwait and interlocked operations are mechanisms related to synchronization in .NET."<ref>{{cite web|title=Synchronization|last=Rouse|first=Margaret|url=https://www.techopedia.com/definition/13390/synchronization-dot-net|website=Techopedia|publisher=Techopedia|access-date=10 November 2023}}</ref>
 
Many programming languages support synchronization and entire specialized [[Synchronous programming language|languages]] have been written for [[Embedded software|embedded application]] development where strictly deterministic synchronization is paramount.
The barrier synchronization wait function for the ''i''th thread can be represented as:

: <math>(W_\text{barrier})_i = f\big((T_\text{barrier})_i,\ (R_\text{thread})_i\big)</math>

where <math>W_\text{barrier}</math> is the wait time for a thread, <math>T_\text{barrier}</math> is the number of threads that have arrived, and <math>R_\text{thread}</math> is the arrival rate of the threads.<ref>{{Cite book |doi=10.1109/ICIEV.2012.6317471 |isbn=978-1-4673-1154-0|chapter=Process synchronization in multiprocessor and multi-core processor|title=2012 International Conference on Informatics, Electronics & Vision (ICIEV)|pages=554–559|year=2012|last1=Rahman|first1=Mohammed Mahmudur|s2cid=8134329 }}</ref>
 
Experiments show that 34% of the total execution time is spent in waiting for other slower threads.<ref name=":0" />
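As an illustration, this behaviour is provided in Java by <code>java.util.concurrent.CyclicBarrier</code>: each thread calls <code>await()</code> and blocks until all parties have arrived, so a fast thread's wait time is determined by the slowest one (the thread count and messages below are illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        final int parties = 3;
        // The optional barrier action runs once, after the last thread arrives.
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("All threads reached the barrier"));

        for (int i = 0; i < parties; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("Thread " + id + " working");
                    barrier.await(); // wait time depends on the slowest thread
                    System.out.println("Thread " + id + " past the barrier");
                } catch (Exception e) { // InterruptedException, BrokenBarrierException
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
</syntaxhighlight>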
==References==
{{reflist}}
* {{cite book | last=Schneider | first=Fred B. | title=On concurrent programming | publisher=Springer-Verlag New York, Inc.| date=1997 | isbn=978-0-387-94942-0}}
 
==External links==