Synchronization (computer science)

==Minimization==
One of the challenges for exascale algorithm design is to minimize synchronization.
Synchronization often takes more time than computation, especially in distributed computing. Reducing synchronization has drawn the attention of computer scientists for decades, and it has become an increasingly significant problem as the gap between the growth of computing power and communication latency widens. Experiments have shown that (global) communications due to synchronization on distributed computers take a dominant share of the time in a sparse iterative solver.<ref>{{cite journal|title=Minimizing synchronizations in sparse iterative solvers for distributed supercomputers |last1=Zhu |first1=Shengxin |last2=Gu |first2=Tongxiang |last3=Liu |first3=Xingping |journal=Computers & Mathematics with Applications|volume=67|issue=1|pages=199–209|year=2014|doi=10.1016/j.camwa.2013.11.008|doi-access=free|hdl=10754/668399|hdl-access=free}}</ref> This problem has received increasing attention since the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG),<ref>{{cite web|url=http://hpcg-benchmark.org/|title=HPCG Benchmark}}</ref> for ranking the top 500 supercomputers.
 
==Problems==
===Classic problems===
The following are some classic problems of synchronization:
* [[Producer–consumer problem|The Producer–Consumer Problem]] (also called The Bounded Buffer Problem);
* [[Readers–writers problem|The Readers–Writers Problem]];
* [[Dining philosophers problem|The Dining Philosophers Problem]].
 
These problems are used to test nearly every newly proposed synchronization scheme or primitive.
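For example, a minimal sketch of the bounded-buffer (producer–consumer) problem, written here as a Java monitor (the class name and buffer layout are illustrative, not taken from a particular library):

<syntaxhighlight lang="java">
// Illustrative bounded buffer guarded by an intrinsic lock (monitor).
class BoundedBuffer {
    private final int[] items;
    private int count = 0, putIndex = 0, takeIndex = 0;

    BoundedBuffer(int capacity) { items = new int[capacity]; }

    synchronized void put(int x) throws InterruptedException {
        while (count == items.length)   // buffer full: producer must wait
            wait();
        items[putIndex] = x;
        putIndex = (putIndex + 1) % items.length;
        count++;
        notifyAll();                    // wake consumers waiting on an empty buffer
    }

    synchronized int take() throws InterruptedException {
        while (count == 0)              // buffer empty: consumer must wait
            wait();
        int x = items[takeIndex];
        takeIndex = (takeIndex + 1) % items.length;
        count--;
        notifyAll();                    // wake producers waiting on a full buffer
        return x;
    }
}
</syntaxhighlight>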
 
=== Overhead ===
Synchronization overheads can significantly impact performance in [[parallel computing]] environments. Merging data from multiple processes can incur costs substantially higher, often by two or more orders of magnitude, than processing the same data on a single thread, primarily because of the additional overhead of [[inter-process communication]] and synchronization mechanisms.<ref>{{Cite book |title=Operating System Concepts |isbn=978-0470128725 |last1=Silberschatz |first1=Abraham |last2=Galvin |first2=Peter B. |last3=Gagne |first3=Greg |date=29 July 2008 |publisher=Wiley }}</ref><ref>{{Cite book |title=Computer Organization and Design MIPS Edition: The Hardware/Software Interface (The Morgan Kaufmann Series in Computer Architecture and Design) |date=2013 |publisher=Morgan Kaufmann |isbn=978-0124077263}}</ref><ref>{{Cite book |title=Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers |date=2005 |publisher=Pearson |isbn=978-0131405639}}</ref>
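As a rough illustration, the following sketch contends several threads on a single lock-protected counter; the thread count and iteration count are illustrative assumptions, and the measured time is typically far higher than that of an unsynchronized single-threaded loop over the same total number of increments:

<syntaxhighlight lang="java">
import java.util.concurrent.*;

public class SyncOverheadDemo {
    private static long counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws Exception {
        final int threads = 4;
        final int opsPerThread = 1_000_000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < opsPerThread; i++) {
                    synchronized (lock) {   // every increment pays lock-acquisition overhead
                        counter++;
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("%d threads, synchronized: %d ms%n",
                threads, (System.nanoTime() - start) / 1_000_000);
        // For comparison, a single thread performing threads * opsPerThread
        // unsynchronized increments usually completes far faster.
    }
}
</syntaxhighlight>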
 
==Hardware synchronization==
Java ''synchronized'' blocks, in addition to enabling mutual exclusion and memory consistency, enable signaling—i.e. sending events from threads which have acquired the lock and are executing the code block to those which are waiting for the lock within the block. Java ''synchronized'' sections, therefore, combine the functionality of both [[Lock (computer science)|mutexes]] and [[Event (synchronization primitive)|events]] to ensure synchronization. Such a construct is known as a [[Monitor (synchronization)|synchronization monitor]].
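A minimal sketch of this combination of mutual exclusion and signaling, assuming a simple one-shot condition (the class name ''Gate'' is illustrative):

<syntaxhighlight lang="java">
// Illustrative one-shot gate: one thread signals readiness, others wait.
class Gate {
    private boolean open = false;

    // Called by the signaling thread while holding the monitor lock.
    synchronized void open() {
        open = true;
        notifyAll();          // wake every thread blocked in await()
    }

    // Called by waiting threads; wait() releases the lock until notified.
    synchronized void await() throws InterruptedException {
        while (!open)         // loop guards against spurious wakeups
            wait();
    }
}
</syntaxhighlight>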
 
The [[.NET Framework]] also provides synchronization primitives.<ref>{{cite web|title=Overview of synchronization primitives|url=https://learn.microsoft.com/en-us/dotnet/standard/threading/overview-of-synchronization-primitives|website=Microsoft Learn|date=September 2022 |publisher=Microsoft|access-date=10 November 2023}}</ref> "Synchronization is designed to be cooperative, demanding that every thread follow the synchronization mechanism before accessing protected resources for consistent results. Locking, signaling, lightweight synchronization types, spinwait and interlocked operations are mechanisms related to synchronization in .NET."<ref>{{cite web|title=Synchronization|last=Rouse|first=Margaret|url=https://www.techopedia.com/definition/13390/synchronization-dot-net|website=Techopedia|date=19 August 2011 |access-date=10 November 2023}}</ref>
 
Many programming languages support synchronization, and entire specialized [[Synchronous programming language|languages]] have been written for [[Embedded software|embedded application]] development, where strictly deterministic synchronization is paramount.
==Implementation==
 
===Spinlocks===
{{Main article|Spinlock}}
Another effective way of implementing synchronization is by using spinlocks. Before accessing any shared resource or piece of code, every processor checks a flag. If the flag is reset, the processor sets the flag and continues executing the thread. But if the flag is set (locked), the thread keeps spinning in a loop, repeatedly checking whether the flag has been reset. Spinlocks are effective only if the flag is reset within a small number of cycles; otherwise, they can lead to performance issues, as many processor cycles are wasted waiting.<ref>{{Cite book|title=Embedded Software Development with eCos|last=Massa|first=Anthony|publisher=Pearson Education Inc|year=2003|isbn=0-13-035473-2}}</ref>
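A minimal sketch of such a lock in Java, with an atomic flag standing in for the hardware flag described above (the class name is illustrative):

<syntaxhighlight lang="java">
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative spinlock: busy-waits until the flag is acquired.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Atomically set the flag; spin while another thread holds it.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint that we are busy-waiting (Java 9+)
        }
    }

    void unlock() {
        locked.set(false);         // reset the flag so a spinning thread can proceed
    }
}
</syntaxhighlight>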
 
===Barriers===
Barriers are simple to implement and provide good responsiveness. They are based on the concept of implementing wait cycles to provide synchronization. Consider three threads running simultaneously, starting from barrier 1. After time t, thread 1 reaches barrier 2, but it still has to wait for threads 2 and 3 to reach barrier 2, as it does not have the correct data. Once all the threads reach barrier 2, they all start again. After time t, thread 1 reaches barrier 3, but it will have to wait for threads 2 and 3 and the correct data again.
 
Thus, in barrier synchronization of multiple threads, there will always be a few threads that end up waiting for other threads; in the above example, thread 1 keeps waiting for threads 2 and 3. This results in severe degradation of process performance.<ref name=":0">{{cite book |doi=10.1109/HPCC.2014.148 |chapter=A Speculative Mechanism for Barrier Synchronization |title=2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS) |date=2014 |last1=Meng |first1=Jinglei |last2=Chen |first2=Tianzhou |last3=Pan |first3=Ping |last4=Yao |first4=Jun |last5=Wu |first5=Minghui |pages=858–865 |isbn=978-1-4799-6123-8}}</ref>
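The scenario above can be sketched with Java's built-in CyclicBarrier; the three loop iterations stand in for barriers 1 through 3, and thread ids and phase counts are illustrative:

<syntaxhighlight lang="java">
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        final int threads = 3;
        // All three threads must call await() before any may continue.
        CyclicBarrier barrier = new CyclicBarrier(threads);

        for (int id = 1; id <= threads; id++) {
            final int threadId = id;
            new Thread(() -> {
                try {
                    for (int phase = 1; phase <= 3; phase++) {
                        System.out.println("thread " + threadId + " reached barrier " + phase);
                        barrier.await();   // fast threads wait here for the slow ones
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
</syntaxhighlight>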
 
The barrier synchronization wait function for the i<sup>th</sup> thread can be represented as:
 
== Distributed transaction ==
In [[Event-driven architecture|event-driven architectures]], synchronous transactions can be achieved by using a [[Request–response|request–response]] paradigm, which can be implemented in two ways (see the sketch after this list):<ref name=":02">{{Cite book |last=Richards |first=Mark |title=Fundamentals of Software Architecture: An Engineering Approach |date=2020 |publisher=O'Reilly Media |isbn=978-1492043454}}</ref>
 
* Creating two separate [[Message queue|queues]]: one for requests and the other for replies. The event producer must wait until it receives the response.
* Creating one dedicated ephemeral [[Message queue|queue]] for each request.
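A minimal sketch of the first option, using in-memory blocking queues to stand in for a real message broker (queue names and payloads are illustrative assumptions):

<syntaxhighlight lang="java">
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestReplyDemo {
    public static void main(String[] args) throws InterruptedException {
        // Two separate queues: one carries requests, the other replies.
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> replies  = new ArrayBlockingQueue<>(16);

        // Consumer: takes one request and publishes a reply.
        Thread consumer = new Thread(() -> {
            try {
                String request = requests.take();
                replies.put("reply-to:" + request);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: sends a request, then blocks until the response arrives,
        // making the interaction synchronous from its point of view.
        requests.put("order-123");
        String response = replies.take();
        System.out.println(response);
    }
}
</syntaxhighlight>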