{{More citations needed|date=November 2014}}
In [[computer science]], '''synchronization''' is the task of coordinating multiple processes to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of action.
==Motivation==
''[[Producer–consumer problem|Producer-Consumer:]]'' In a producer-consumer relationship, the consumer process is dependent on the producer process until the necessary data has been produced.
''Exclusive use resources:'' When multiple processes are dependent on a resource and need to access it at the same time, the operating system needs to ensure that only one process accesses it at a given point in time. This reduces concurrency.
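The producer–consumer dependency above can be sketched in Java with a bounded [[Java (programming language)|Java]] <code>BlockingQueue</code>, whose <code>put</code> and <code>take</code> operations block until the buffer has room or data, respectively (the class and method names here are illustrative, not from any particular source):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    // Bounded buffer shared between the producer and consumer threads.
    static final BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);

    public static int consumeSum(int n) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) {
                    buffer.put(i); // blocks while the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += buffer.take(); // blocks until the producer has produced an item
        }
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeSum(10)); // sum of 1..10 = 55
    }
}
```

Because <code>take</code> blocks on an empty buffer, the consumer never runs ahead of the producer, which is exactly the dependency described above.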
=={{Anchor|TSync}}Requirements==
[[File:Multiple Processes Accessing the shared resource.png|thumb|'''Figure 1''': Three processes accessing a shared resource ([[critical section]]) simultaneously.]]
Thread synchronization is defined as a mechanism which ensures that two or more concurrent [[process (computer science)|processes]] or [[thread (computer science)|threads]] do not simultaneously execute some particular program segment known as a [[critical section]]. Processes' access to a critical section is controlled by using synchronization techniques. When one thread starts executing the [[critical section]] (serialized segment of the program), the other threads should wait until the first thread finishes. If proper synchronization techniques<ref>{{cite conference|title=More than you ever wanted to know about synchronization: Synchrobench, measuring the impact of the synchronization on concurrent algorithms|author=Gramoli, V.|conference=Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming|pages=1–10|year=2015|publisher=ACM|url=http://sydney.edu.au/engineering/it/~gramoli/doc/pubs/gramoli-synchrobench.pdf}}</ref> are not applied, it may cause a [[race condition#Software|race condition]] where the values of variables may be unpredictable and vary depending on the timings of [[context switch]]es of the processes or threads.
For example, suppose that there are three processes, namely 1, 2, and 3. All three of them are concurrently executing, and they need to share a common resource (critical section) as shown in Figure 1. Synchronization should be used here to avoid any conflicts for accessing this shared resource. Hence, when Processes 1 and 2 both try to access that resource, it should be assigned to only one process at a time.
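A minimal Java sketch of this situation guards the shared resource with the intrinsic lock of the object holding it, so that only one thread at a time executes the critical section (all names here are illustrative):

```java
public class SharedResourceDemo {
    private int counter = 0;

    // Critical section: only one thread at a time may execute this method,
    // because it is guarded by the object's intrinsic lock.
    public synchronized void increment() {
        counter++;
    }

    public synchronized int get() {
        return counter;
    }

    public static int runDemo() throws InterruptedException {
        SharedResourceDemo resource = new SharedResourceDemo();
        Thread[] threads = new Thread[3];
        for (int t = 0; t < 3; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    resource.increment();
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            th.join();
        }
        return resource.get(); // always 3000 when the increment is synchronized
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

Without the <code>synchronized</code> keyword, the unprotected read–modify–write of <code>counter</code> would be a race condition and the final value could be less than 3000.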
[[File:Shared Resource access in synchronization environment.png|thumb|'''Figure 2''': A process accessing a shared resource if available, based on some synchronization technique.]]
Another synchronization requirement which needs to be considered is the order in which particular processes or threads should be executed. For example, one cannot board a plane before buying a ticket.
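This ordering requirement can be sketched in Java with a <code>CountDownLatch</code>: the "boarding" thread blocks until the "ticket" step has completed (the class and step names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class OrderingDemo {
    public static String runDemo() throws InterruptedException {
        CountDownLatch ticketBought = new CountDownLatch(1);
        StringBuilder log = new StringBuilder();

        Thread boarding = new Thread(() -> {
            try {
                ticketBought.await(); // cannot board before the ticket is bought
                synchronized (log) { log.append("board"); }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        boarding.start();

        synchronized (log) { log.append("buy;"); }
        ticketBought.countDown(); // signal that the ticket purchase is complete
        boarding.join();
        return log.toString(); // always "buy;board", never the reverse order
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```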
Other than mutual exclusion, synchronization also deals with the following:
==Minimization==
One of the challenges for exascale algorithm design is to minimize synchronization.
Synchronization takes more time than computation, especially in distributed computing. Reducing synchronization has drawn attention from computer scientists for decades, and it has recently become an increasingly significant problem as the gap between the improvement of computing and latency widens. Experiments have shown that (global) communications due to synchronization on distributed computers take a dominant share of the time in a sparse iterative solver.<ref>{{cite journal|title=Minimizing synchronizations in sparse iterative solvers for distributed supercomputers |author=Shengxin, Zhu and Tongxiang Gu and Xingping Liu|journal=Computers & Mathematics with Applications|volume=67|issue=1|pages=199–209|year=2014|doi=10.1016/j.camwa.2013.11.008|doi-access=free}}</ref> This problem is receiving increasing attention after the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG),<ref>{{cite web|url=http://hpcg-benchmark.org/|title=HPCG Benchmark}}</ref> for ranking the top 500 supercomputers.
==Classic problems==
==Support in programming languages==
In [[Java (programming language)|Java]], one way to prevent thread interference and memory consistency errors is to prefix a method signature with the ''synchronized'' keyword, in which case the lock of the declaring object is used to enforce synchronization.
Java ''synchronized'' blocks, in addition to enabling mutual exclusion and memory consistency, enable signaling—i.e. sending events from threads which have acquired the lock and are executing the code block to those which are waiting for the lock within the block.
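This signaling idiom can be sketched with Java's <code>wait</code>/<code>notifyAll</code>, which must be called while holding the lock of the object being waited on (the class name and the 50-millisecond delay below are illustrative):

```java
public class SignalDemo {
    private boolean ready = false;

    public synchronized void waitForSignal() throws InterruptedException {
        while (!ready) {   // loop guards against spurious wakeups
            wait();        // releases the lock and blocks until notified
        }
    }

    public synchronized void signal() {
        ready = true;
        notifyAll();       // wakes every thread waiting on this object's lock
    }

    public static boolean runDemo() throws InterruptedException {
        SignalDemo s = new SignalDemo();
        Thread waiter = new Thread(() -> {
            try {
                s.waitForSignal();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(50);  // give the waiter a chance to block first (demo only)
        s.signal();
        waiter.join();     // completes only after the waiter was woken
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

The sending thread holds the lock inside <code>signal</code> when it calls <code>notifyAll</code>, and the waiting thread re-checks the condition after waking, which is the standard pattern for this kind of event signaling.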
The [[.NET Framework]] also uses synchronization primitives.<ref>{{cite web|title=Overview of synchronization primitives|url=https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives|website=Microsoft Learn|publisher=Microsoft|access-date=10 November 2023}}</ref> "Synchronization is designed to be cooperative, demanding that every thread follow the synchronization mechanism before accessing protected resources for consistent results."
Many programming languages support synchronization and entire specialized [[Synchronous programming language|languages]] have been written for [[Embedded software|embedded application]] development where strictly deterministic synchronization is paramount.
The barrier synchronization wait function for the i<sup>th</sup> thread can be represented as:

:<math>(W_\mathrm{barrier})_i = f((T_\mathrm{barrier})_i, (R_\mathrm{thread})_i)</math>

where <math>W_\mathrm{barrier}</math> is the wait time for a thread, <math>T_\mathrm{barrier}</math> is the number of threads that have arrived, and <math>R_\mathrm{thread}</math> is the arrival rate of threads.
Experiments show that 34% of the total execution time is spent in waiting for other slower threads.<ref name=":0" />
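In Java, this form of barrier synchronization is provided by <code>java.util.concurrent.CyclicBarrier</code>: each thread blocks in <code>await</code> until all parties have arrived, after which an optional barrier action runs once (the class name and counts below are illustrative):

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static int runDemo(int nThreads) throws Exception {
        final int[] arrivals = {0};
        // The barrier action runs exactly once, after all nThreads have arrived.
        CyclicBarrier barrier = new CyclicBarrier(nThreads, () -> arrivals[0] = nThreads);

        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            threads[t] = new Thread(() -> {
                try {
                    barrier.await(); // each thread waits here for the others
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            th.join();
        }
        return arrivals[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo(4));
    }
}
```

The time each thread spends blocked in <code>await</code> is the wait time described by the formula above: fast threads arrive early and wait for the slowest one.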
==References==
{{reflist}}
==External links==