Consistency model

{{short description|Rules that guarantee predictable computer memory operation}}
{{confusing|date=January 2015}}
 
In [[computer science]], a '''consistency model''' specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be [[data consistency|consistent]] and the results of reading, writing, or updating memory will be predictable. Consistency models are used in [[Distributed computing|distributed systems]] like [[distributed shared memory]] systems or distributed data stores (such as [[filesystem]]s, [[database]]s, [[optimistic replication]] systems or [[web caching]]). The system is said to support a given model if operations on memory follow specific rules. Consistency is different from coherence, which occurs in systems that are [[cache coherence|cached]] or cache-less, and is consistency of data with respect to all processors. Coherence deals with maintaining a global order in which writes to a single ___location or single variable are seen by all processors. Consistency deals with the ordering of operations to multiple locations with respect to all processors.
 
[[High level language]]s, such as [[C++]] and [[Java (programming language)|Java]], partially maintain the consistency contract by translating memory operations into low-level operations in a way that preserves [[memory semantics (computing)|memory semantics]]. To hold to the contract, compilers may reorder some memory instructions, and library calls such as <code>pthread_mutex_lock()</code> encapsulate required synchronization.<ref>{{cite journal
| author = Mark D. Hill
| title = Multiprocessors Should Support Simple Memory Consistency Models
| doi = 10.1109/2.707614
| url = http://digital.library.wisc.edu/1793/8706
}}</ref>
 
Verifying [[sequential consistency]] through [[model checking]] is [[undecidable problem|undecidable]] in general, even for finite-state [[cache coherence]] protocols.<ref>{{cite journal
| author = Shaz Qadeer
| title = Verifying Sequential Consistency on Shared-Memory Multiprocessors by Model Checking
| date = August 2003
| journal = IEEE Transactions on Parallel and Distributed Systems
| volume = 14
| issue = 8
| pages = 730–741
| doi = 10.1109/TPDS.2003.1225053
| arxiv = cs/0108016
}}</ref>
 
Consistency models define rules for the apparent order and visibility of updates, and are on a [[linear continuum|continuum]] with tradeoffs.<ref name="Rules">{{cite web
| access-date = 2011-03-24
| author = Todd Lipcon
| date = 2014-10-25
| title = Design Patterns for Distributed Non-Relational Databases
| quote = A consistency model determines rules for visibility and apparent order of updates. Example: * Row X is replicated on nodes M and N * Client A writes row X to node N * Some period of time t elapses. * Client B reads row X from node M * Does client B see the write from client A? Consistency is a continuum with tradeoffs
| url = http://cloudera-todd.s3.amazonaws.com/nosql.pdf
}}</ref>
 
For example:
* Row X is replicated on nodes M and N
* Client A writes row X to node N
* After a period of time t, client B reads row X from node N
 
The consistency model determines whether client B will definitely see the write performed by client A, will definitely not see it, or cannot depend on seeing the write.
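The scenario above can be sketched as a toy simulation (names and structure are hypothetical, not a real replication protocol): node N receives the write immediately, while the other replica sees it only after a propagation step, and whether the early read may return a stale value is exactly what a consistency model pins down.

```python
# Toy version of the replicated-read scenario (hypothetical names,
# not a real replication protocol).
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, value):
        self.store[key] = value

    def read(self, key):
        return self.store.get(key)

def replicate(src, dst):
    # Asynchronous propagation, modeled as an explicit step.
    dst.store.update(src.store)

m, n = Node("M"), Node("N")
n.write("X", "v1")        # client A writes row X to node N
stale = m.read("X")       # client B reads from the other replica too soon
replicate(n, m)
fresh = m.read("X")       # after propagation the write is visible
```

Under a strong model the pre-propagation read would be forbidden from returning `None`; under weaker models it is allowed.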
 
== Types ==
 
There are two methods to define and categorize consistency models; issue and view.
; Issue: The issue method describes the restrictions that define how a process can issue operations.
; View: The view method defines the order of operations visible to processes.
 
These models define how the hardware needs to be laid out and, at a high level, how the programmer must code. The chosen model also affects how the compiler can re-order instructions. Generally, if control dependencies between instructions are respected and writes to the same ___location are ordered, then the compiler can reorder as required. However, among the models described below, some may allow writes to be reordered before loads while others may not.
 
== Strong consistency models ==
 
=== Strict consistency ===
 
Strict consistency is the strongest consistency model. Under this model, a write to a variable by any processor needs to be seen instantaneously by all processors.
 
The strict model diagram and non-strict model diagrams describe the time constraint – instantaneous. It can be better understood as though a global clock is present in which every write should be reflected in all processor caches by the end of that clock period. The next operation must happen only in the next clock period.
 
In the following diagram, P means "process" and the global clock's value is represented in the Sequence column.
{| class="wikitable"
|-
! Sequence
! {{abbr|P|Processor}}1
! {{abbr|P|Processor}}2
|-
! 1
| W(''x'')1
|
|-
! 2
|
| R(''x'')1
|}
 
=== Sequential consistency ===
{{Main|Sequential consistency}}
The [[sequential consistency]] model was proposed by Lamport (1979). It is a weaker memory model than strict consistency model.<ref name="Lamport">{{cite journal
| author = Lamport, Leslie
| title = How to make a multiprocessor computer that correctly executes multiprocess programs.
| date = Sep 1979
| journal = IEEE Transactions on Computers
| volume = C-28
| issue = 9
| pages = 690–691
| doi = 10.1109/TC.1979.1675439
| s2cid = 5679366
}}</ref> A write to a variable does not have to be seen instantaneously, however, writes to variables by different processors have to be seen in the same order by all processors. Sequential consistency is met if "the result of any execution is the same as if the (read and write) operations of all processes on the data store were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program."<ref name="Lamport"/><ref name="Replica"/> Adve and Gharachorloo, 1996<ref name="sharedmemory">{{cite journal
|author1 = Sarita V. Adve
|author2 = Kourosh Gharachorloo
|title = Shared Memory Consistency Models: A Tutorial
|date = December 1996
|journal = IEEE Computer
|volume = 29
|issue = 12
|pages = 66–76
|url = http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
|access-date = 2008-05-28
|doi = 10.1109/2.546611
|citeseerx = 10.1.1.36.8566
|archive-date = 2008-09-07
|archive-url = https://web.archive.org/web/20080907125534/http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
|url-status = dead
}}</ref> define two requirements to implement sequential consistency: program order and write atomicity.
* Program order: Program order guarantees that each process issues a memory request ordered by its program.
* Write atomicity: Write atomicity defines that memory requests are serviced based on the order of a single FIFO queue.
 
In sequential consistency, there is no notion of time or of most recent write operations. There is some interleaving of operations that is the same for all processes. A process can see the write operations of all processes, but it can only see its own read operations.
 
Program order within each processor and sequential ordering of operations between processors should be maintained. In order to preserve sequential order of execution between processors, all operations must appear to execute instantaneously or atomically with respect to every other processor. These operations need only "appear" to be completed because it is physically impossible to send information instantaneously. For instance, in a system utilizing a single globally shared bus, once a bus line is posted with information, it is guaranteed that all processors will see the information at the same instant. Thus, passing the information to the bus line completes the execution with respect to all processors and has appeared to have been executed. Cache-less architectures or cached architectures with interconnect networks that are not instantaneous can contain a slow path between processors and memories. These slow paths can result in sequential inconsistency, because some memories receive the broadcast data faster than others.
 
Sequential consistency can produce non-deterministic results. This is because the sequence of sequential operations between processors can be different during different runs of the program. All memory operations need to happen in the program order.
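Sequential consistency can be tested mechanically: a history is sequentially consistent exactly when some single interleaving respects every process's program order and makes every read return the latest write. The brute-force sketch below (illustrative only, exponential in history size) encodes operations as hypothetical `('W', var, value)` / `('R', var, value)` tuples.

```python
# Brute-force sequential-consistency check (illustrative sketch only,
# exponential in history size).  `programs` holds one operation list per
# process; operations are ('W', var, value) or ('R', var, value).
def sequentially_consistent(programs):
    def search(positions, memory):
        if all(positions[p] == len(ops) for p, ops in enumerate(programs)):
            return True  # every operation placed in one global order
        for p, ops in enumerate(programs):
            i = positions[p]
            if i == len(ops):
                continue
            kind, var, val = ops[i]
            if kind == 'W':
                saved = dict(memory)
                memory[var] = val
                positions[p] += 1
                if search(positions, memory):
                    return True
                positions[p] -= 1
                memory.clear()
                memory.update(saved)
            elif memory.get(var) == val:  # a read must see the latest write
                positions[p] += 1
                if search(positions, memory):
                    return True
                positions[p] -= 1
        return False
    return search([0] * len(programs), {})

# Two readers observing two concurrent writes in opposite orders cannot be
# explained by any single interleaving, so that history is not SC.
ok = sequentially_consistent(
    [[('W', 'x', 1)], [('W', 'x', 2)], [('R', 'x', 1), ('R', 'x', 2)]])
bad = sequentially_consistent(
    [[('W', 'x', 1)], [('W', 'x', 2)],
     [('R', 'x', 1), ('R', 'x', 2)], [('R', 'x', 2), ('R', 'x', 1)]])
```

The non-deterministic results mentioned above correspond to the checker accepting several different interleavings for the same program.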
 
[[Linearizability]]<ref name="Linear">{{cite journal
| author1 = Herlihy, Maurice P.
| author2 = Jeannette M. Wing
| title = Linearizability: A correctness condition for concurrent objects
| date = July 1990
| journal = ACM Transactions on Programming Languages and Systems
| volume = 12
| issue = 3
| pages = 463–492
| doi = 10.1145/78969.78972
| citeseerx = 10.1.1.142.5315
| s2cid = 228785
}}</ref> (also known as atomic consistency or atomic memory)<ref name="RelaxedModel"/> can be defined as sequential consistency with a real-time constraint, obtained by considering a begin time and an end time for each operation. An execution is linearizable if each operation appears to take effect atomically at some point between its begin time and its end time, and the resulting total order guarantees sequential consistency.
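The real-time constraint can be made concrete with a brute-force checker over a single register (an illustrative sketch, O(n!)): each operation is a hypothetical `(begin, end, kind, value)` tuple, and an execution is linearizable if some total order respects both real time and register semantics.

```python
from itertools import permutations

# Sketch: linearizability = sequential consistency plus real time.
# Operations on one register are (begin, end, kind, value) tuples.
def respects_real_time(order):
    # An operation that finished before another began must come first.
    return not any(order[j][1] < order[i][0]
                   for i in range(len(order))
                   for j in range(i + 1, len(order)))

def respects_register(order):
    value = None
    for _begin, _end, kind, val in order:
        if kind == 'W':
            value = val
        elif value != val:   # a read must return the latest write
            return False
    return True

def linearizable(history):
    return any(respects_real_time(o) and respects_register(o)
               for o in permutations(history))

# R(x)1 begins only after W(x)2 has completed, so real time forbids
# ordering it before W(x)2: sequentially explainable, but not linearizable.
not_lin = linearizable([(0, 1, 'W', 1), (2, 3, 'W', 2), (4, 5, 'R', 1)])
lin = linearizable([(0, 1, 'W', 1), (2, 3, 'R', 1)])
```

Dropping `respects_real_time` from the conjunction would turn this into a plain sequential-consistency check, which is exactly the difference between the two models.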
 
 
=== Causal consistency ===
 
[[Causal consistency]],<ref name="Replica"/> defined by Hutto and Ahamad, 1990,<ref name="slow"/> is a weakening of the sequential consistency model that categorizes events into those causally related and those that are not. It requires that only write operations that are causally related need to be seen in the same order by all processes. For example, if an event b takes effect from an earlier event a, causal consistency guarantees that all processes see event b after event a. Tanenbaum et al., 2007 provide a stricter definition, that a data store is considered causally consistent under the following conditions:<ref name="Replica"/>
 
* Writes that are potentially causally related must be seen by all processes in the same order.
* Concurrent writes may be seen in a different order on different machines.
 
This model relaxes sequential consistency on concurrent writes by a processor and on writes that are not causally related. Two writes become causally related if one write to a variable is dependent on a previous write to any variable, which happens when the processor doing the second write has just read the first write. The two writes could have been done by the same processor or by different processors.
 
As in sequential consistency, reads do not need to reflect changes instantaneously, however, they need to reflect all changes to a variable sequentially.
{| class="wikitable"
! Sequence
! {{abbr|P|Processor}}1
! {{abbr|P|Processor}}2
|-
! 1
| W<sub>1</sub>(''x'')3
|
|-
! 2
|
| W<sub>2</sub>(x)5
|-
! 3
|
| R<sub>1</sub>(''x'')3
|}
 
W<sub>1</sub> is not causally related to W<sub>2</sub>. R<sub>1</sub> would be ''sequentially inconsistent'' but is ''causally consistent''.{{clarify|reason=(Possibly wrong citation and wrong answer for sequentially inconsistent, someone check)|date=March 2019}}<ref name=":0" />
{| class="wikitable"
|-
! {{abbr|P|Processor}}1
| W(''x'')1 || || W(''x'')3 || ||
|-
! {{abbr|P|Processor}}2
| || R(''x'')1 || W(''x'')2 || ||
|-
! {{abbr|P|Processor}}3
| || R(''x'')1 || || R(''x'')3 || R(''x'')2
|-
! {{abbr|P|Processor}}4
| || R(''x'')1 || || R(''x'')2 || R(''x'')3
|}
 
W(x)2 happens after W(x)1 due to the read made by P2 to x before W(x)2, hence this example is ''causally consistent'' under Hutto and Ahamad's definition (although not under Tanenbaum et al.'s, because W(x)2 and W(x)3 are not seen in the same order for all processes). However R(x)2 and R(x)3 happen in a different order on P3 and P4, hence this example is ''sequentially inconsistent''.<ref name=":0" />
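The "causally related" test in these examples can be checked mechanically with vector clocks, a standard device for tracking causality (an illustration; the definitions above do not prescribe a mechanism). A write is causally after another exactly when its clock dominates the other's; otherwise the writes are concurrent and may be seen in different orders.

```python
# Vector clocks as one way to decide "causally related".
# Entry i counts the events from process i that an event has observed.
def happens_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    return not happens_before(a, b) and not happens_before(b, a)

# Second example: P2 reads x (merging P1's clock) before issuing W(x)2,
# so W(x)1 causally precedes W(x)2 and must be seen first everywhere.
w1 = (1, 0)   # W(x)1 on P1
w2 = (1, 1)   # W(x)2 on P2, after reading P1's write

# First example: no intervening read, so the writes are concurrent and
# causal consistency allows them to be seen in different orders.
u1 = (1, 0)   # W1(x)3 on P1
u2 = (0, 1)   # W2(x)5 on P2
```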
 
=== Processor consistency ===
 
To maintain consistency of data and attain scalable processor systems where every processor has its own memory, the [[processor consistency]] model was derived.<ref name=":0">{{Cite web|url=https://www-vs.informatik.uni-ulm.de/teach/ss05/dsm/arizona.pdf|title=Memory Consistency Models|access-date=2016-11-17|archive-date=2016-03-03|archive-url=https://web.archive.org/web/20160303220106/https://www-vs.informatik.uni-ulm.de/teach/ss05/dsm/arizona.pdf|url-status=dead}}</ref> All processors need to be consistent in the order in which they see writes done by one processor and in the way they see writes by different processors to the same ___location (coherence is maintained). However, they do not need to be consistent when the writes are by different processors to different locations.
 
Every write operation can be divided into several sub-writes to all memories. A read from one such memory can happen before the write to this memory completes. Therefore, the data read can be stale. Thus, a processor under PC can execute a younger load when an older store needs to be stalled. Read before write, read after read and write before write ordering is still preserved in this model.
| date = 1991
| journal = IEEE Scalable Coherent Interface (SCI) Working Group
}}</ref> is similar to the [[PRAM consistency]] model with the stronger condition that all writes to the same memory ___location must be seen in the same sequential order by all other processes. Processor consistency is weaker than sequential consistency but stronger than the PRAM consistency model.
 
The [[Stanford DASH|Stanford DASH multiprocessor system]] implements a variation of processor consistency which is incomparable (neither weaker nor stronger) to Goodman's definitions.<ref name="senftleben13" /> All processors need to be consistent in the order in which they see writes by one processor and in the way they see writes by different processors to the same ___location. However, they do not need to be consistent when the writes are by different processors to different locations.
=== Pipelined RAM consistency, or FIFO consistency ===
 
[[PRAM consistency|Pipelined RAM consistency]] (PRAM consistency) was presented by Lipton and Sandberg in 1988<ref name="pram_lipton">{{cite tech report|first=R.J.|last=Lipton|author2=J.S. Sandberg.|title=PRAM: A scalable shared memory|number=CS-TR-180-88|institution=Princeton University|year=1988}}</ref> as one of the first described consistency models. Due to its informal definition, there are in fact at least two subtly different implementations,<ref name="senftleben13">{{cite thesis|degree=M.Sc.|last=Senftleben|first=Maximilian|date=2013|title=Operational Characterization of Weak Memory Consistency Models|publisher=University of Kaiserslautern|url=https://es.cs.uni-kl.de/publications/datarsg/Senf13.pdf}}</ref> one by Ahamad et al. and one by Mosberger.
 
In PRAM consistency, all processes view the operations of a single process in the same order that they were issued by that process, while operations issued by different processes can be viewed in different order from different processes. PRAM consistency is weaker than processor consistency. PRAM relaxes the need to maintain coherence to a ___location across all its processors. Here, reads to any variable can be executed before writes in a processor. Read before write, read after read and write before write ordering is still preserved in this model.
 
=== Cache consistency ===
Cache consistency<ref name="cacheconsistency"/><ref name="unified"/> requires that all write operations to the same memory ___location are performed in some sequential order. Cache consistency is weaker than processor consistency and incomparable with PRAM consistency.
 
 
=== Slow consistency ===
Hutto, Phillip W., and Mustaque Ahamad (1990)<ref name="slow">{{cite book
|author1=Hutto, Phillip W. |author2=Mustaque Ahamad
|title=Proceedings., 10th International Conference on Distributed Computing Systems
|chapter=Slow memory: Weakening consistency to enhance concurrency in distributed shared memories
| date = 1990
| publisher = IEEE
| pages = 302–309
| doi = 10.1109/ICDCS.1990.89297
| isbn = 978-0-8186-2048-5
|s2cid=798676
}}</ref> illustrate that, with appropriate programming, slow memory (consistency) can be expressive and efficient. They mention that slow memory has two valuable properties: locality and supporting reduction from atomic memory. They propose two algorithms to demonstrate the expressiveness of slow memory.
 
== Session guarantees ==
 
These four consistency models were proposed in a 1994 paper. They focus on guarantees in the situation where only a single user or application is making data modifications.<ref>{{cite book |last1=Terry |first1=Douglas B. |last2=Demers |first2=Alan J. |last3=Petersen |first3=Karin |last4=Spreitzer |first4=Mike J. |last5=Theimer |first5=Marvin M. |last6=Welch |first6=Brent B. |title=Proceedings of 3rd International Conference on Parallel and Distributed Information Systems |chapter=Session guarantees for weakly consistent replicated data |date=1 October 1994 |pages=140–150 |chapter-url=https://www.cs.cornell.edu/courses/cs734/2000FA/cached%20papers/SessionGuaranteesPDIS_1.html |publisher=IEEE Computer Society Press|doi=10.1109/PDIS.1994.331722 |isbn=0-8186-6400-2 |s2cid=2807044 }}</ref>
 
=== Monotonic read consistency ===
If a process reads the value of a data item x, any successive read operation on x by that process will always return that same value or a more recent value.<ref name="Replica" />
 
=== Monotonic write consistency ===
 
A write operation by a process on a data item X is completed before any successive write operation on X by the same process.<ref name="Replica" />
 
=== Read-your-writes consistency ===
 
A value written by a process on a data item X will always be available to a successive read operation performed by the same process on data item X.<ref name="Replica"/>
 
=== Writes-follows-reads consistency ===
 
A write operation by a process on a data item x following a previous read operation on x by the same process is guaranteed to take place on the same or a more recent value of x that was read.<ref name="Replica" />
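Two of these guarantees can be sketched as checks over a single client's session log (the integer-version representation below is hypothetical, not from the cited paper; higher version numbers are newer):

```python
# Sketch: checking two session guarantees over one client's session log
# for a single data item.  Versions are integers, higher = newer.
def monotonic_reads(session):
    # Successive reads never return an older value.
    reads = [v for op, v in session if op == 'R']
    return all(a <= b for a, b in zip(reads, reads[1:]))

def read_your_writes(session):
    # Each read returns at least the session's latest preceding write.
    last_write = None
    for op, v in session:
        if op == 'W':
            last_write = v
        elif last_write is not None and v < last_write:
            return False
    return True

good = [('W', 1), ('R', 1), ('R', 2)]   # sees own write, then a newer value
stale = [('W', 2), ('R', 1)]            # read misses the session's own write
```

Monotonic writes and writes-follow-reads constrain the order in which the store applies writes, so they would be checked on the server-side log rather than on the client's session.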
 
== Weak memory consistency models ==
 
The following models require specific synchronization by programmers.
 
=== Weak ordering ===
 
Weak ordering classifies memory operations into two categories: ''data operations'' and ''synchronization operations''. To enforce program order, a programmer needs to use at least one synchronization operation in a program. Synchronization operations signal the processor to make sure it has completed and seen all previous operations done by all processors. Program order and atomicity are maintained only on synchronization operations and not on all reads and writes. This was derived from the understanding that certain memory operations – such as those conducted in a critical section – need not be seen by all processors until after all operations in the critical section are completed. It assumes that reordering memory operations to data regions between synchronization operations does not affect the outcome of the program. This exploits the fact that programs written to be executed on a multi-processor system contain the required synchronization to make sure that data races do not occur and SC outcomes are always produced. Thus, in weak ordering, operations other than synchronization operations can be classified as ''data'' operations.<ref>{{Cite web|url=http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf|title=Shared Memory Consistency Models : A tutorial|access-date=2008-05-28|archive-date=2008-09-07|archive-url=https://web.archive.org/web/20080907125534/http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf|url-status=dead}}</ref>
 
Coherence is not relaxed in this model. Once these requirements are met, all other "data" operations can be reordered. The way this works is that a counter tracks the number of data operations and until this counter becomes zero, the synchronisation operation isn't issued. Furthermore, no more data operations are issued unless all the previous synchronisations are completed. Memory operations in between two synchronisation variables can be overlapped and reordered without affecting the correctness of the program. This model ensures that write atomicity is always maintained, therefore no additional safety net is required for weak ordering.
 
In order to maintain weak ordering, write operations prior to a synchronization operation must be globally performed before the synchronization operation. Operations present after a synchronization operation should also be performed only after the synchronization operation completes. Therefore, accesses to synchronization variables is sequentially consistent and any read or write should be performed only after previous synchronization operations have completed.
 
There is high reliance on explicit synchronization in the program. For weak ordering models, the programmer must use atomic locking instructions such as test-and-set, fetch-and-op, store conditional, load linked or must label synchronization variables or use fences.
 
===Release consistency===
The [[release consistency]] model relaxes the weak consistency model by distinguishing the entrance synchronization operation from the exit synchronization operation. Under weak ordering, when a synchronization operation is to be seen, all operations in all processors need to be visible before the synchronization operation is done and the processor proceeds. However, under the release consistency model, during the entry to a critical section, termed as "acquire", all operations with respect to the local memory variables need to be completed. During the exit, termed as "release", all changes made by the local processor should be propagated to all other processors. Coherence is still maintained.
 
The acquire operation is a load/read that is performed to access the critical section. A release operation is a store/write performed to allow other processors to use the shared variables.
 
Among synchronization variables, sequential consistency or processor consistency can be maintained. Using SC, all competing synchronization variables should be processed in order. However, with PC, a pair of competing variables need only follow this order. Younger acquires can be allowed to happen before older releases.<ref>{{Cite book|title=Fundamentals of Parallel Computer Architecture|last=Solihin|first=Yan|publisher=Solihin Books|year=2009}}</ref>
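The acquire/release pairing maps naturally onto lock operations. A minimal sketch (CPython's `threading.Lock` happens to provide the required ordering; this is illustrative, not the model's definition): entering the critical section is the acquire that must observe all prior releases, and leaving it is the release that propagates the local writes.

```python
import threading

# Sketch: acquire = entry to a critical section (must see prior releases);
# release = exit (local writes become visible to the next acquirer).
lock = threading.Lock()
shared = {"x": 0}
seen = []

def writer():
    with lock:              # acquire
        shared["x"] = 42    # writes inside the critical section
    # implicit release: changes propagated before the lock is re-acquired

def reader():
    with lock:              # acquire: observes everything up to last release
        seen.append(shared["x"])

t = threading.Thread(target=writer)
t.start()
t.join()                    # writer has released before we read
reader()
```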
 
==== RCsc and RCpc ====
 
There are two types of release consistency, release consistency with sequential consistency (RCsc) and release consistency with processor consistency (RCpc). The latter type denotes which type of consistency is applied to those operations nominated below as special.
 
There are special (cf. ordinary) memory operations, themselves consisting of two classes of operations: ''sync'' or ''nsync'' operations. The latter are operations not used for synchronisation; the former are, and consist of ''acquire'' and ''release'' operations. An acquire is effectively a read memory operation used to obtain access to a certain set of shared locations. Release, on the other hand, is a write operation that is performed for granting permission to access the shared locations.
 
For sequential consistency (RCsc), the constraints are:
* acquire → all,
* all → release,
* special → special.
 
For processor consistency (RCpc) the write to read program order is relaxed, having constraints:
* acquire → all,
* all → release,
* special → special (except when special write is followed by special read).
 
Note: the above notation A → B, implies that if the operation A precedes B in the program order, then program order is enforced.
 
===Entry consistency ===
This is a variant of the release consistency model. It also requires the use of ''acquire'' and ''release'' instructions to explicitly state an entry or exit to a critical section. However, under entry consistency, every shared variable is assigned a synchronization variable specific to it. This way, only when the acquire is to variable x, all operations related to x need to be completed with respect to that processor. This allows concurrent operations of different critical sections of different shared variables to occur. Concurrency cannot be seen for critical operations on the same shared variable. Such a consistency model will be useful when different matrix elements can be processed at the same time.
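The per-variable pairing can be sketched with one lock per shared variable (class and method names below are hypothetical): acquiring the synchronization variable for ''x'' only requires operations on ''x'' to be complete, so critical sections guarding different variables never contend.

```python
import threading

# Sketch of entry consistency: every shared variable gets its own
# synchronization variable (here a per-name lock).
class EntryConsistentStore:
    def __init__(self):
        self.values = {}
        self.locks = {}

    def _lock(self, name):
        # One lock per shared variable (adequate for this single-threaded demo).
        return self.locks.setdefault(name, threading.Lock())

    def acquire(self, name):
        self._lock(name).acquire()   # only operations on `name` must be complete

    def release(self, name):
        self._lock(name).release()   # propagate updates to `name` only

    def write(self, name, value):
        self.values[name] = value

    def read(self, name):
        return self.values.get(name)

store = EntryConsistentStore()
store.acquire("x")
store.write("x", 1)
store.release("x")
# A concurrent section guarded by "y" would not contend with "x"'s lock.
```

This is what allows, for example, different matrix elements to be processed at the same time, as the text notes.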
 
=== Local consistency ===
 
In local consistency,<ref name="unified"/> each process performs its own operations in the order defined by its program. There is no constraint on the ordering in which the write operations of other processes appear to be performed. Local consistency is the weakest consistency model in shared memory systems.
 
=== General consistency ===
In general consistency, all the copies of a memory ___location are eventually identical after all processes' writes are completed.
 
=== Eventual consistency ===
 
An [[eventual consistency]]<ref name="Replica"/> is a weak consistency model in a system with the lack of simultaneous updates. It defines that if no update takes a very long time, all replicas eventually become consistent.
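Convergence under eventual consistency can be sketched with last-writer-wins anti-entropy (a hypothetical structure, not a production protocol): replicas may disagree while updates propagate, but once updates stop and the replicas gossip, all copies converge.

```python
# Last-writer-wins anti-entropy as a sketch of eventual consistency.
class Replica:
    def __init__(self):
        self.data = {}                 # key -> (timestamp, value)

    def write(self, key, value, ts):
        current = self.data.get(key)
        if current is None or ts > current[0]:
            self.data[key] = (ts, value)

    def merge(self, other):
        # Gossip step: fold the other replica's state into ours.
        for key, (ts, value) in other.data.items():
            self.write(key, value, ts)

a, b = Replica(), Replica()
a.write("x", "old", ts=1)
b.write("x", "new", ts=2)    # later concurrent update elsewhere
# The replicas disagree until they gossip in both directions:
a.merge(b)
b.merge(a)
```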
 
Most shared decentralized databases have an eventual consistency model, either BASE: basically available; soft state; eventually consistent, or a combination of [[ACID (computer science)|ACID]] and BASE sometimes called SALT: sequential; agreed; ledgered; tamper-resistant, and also symmetric; admin-free; ledgered; and time-consensual.<ref>Collin Cusce.
[https://medium.com/@collin.cusce/blockchain-salt-a-descriptive-model-b19c973fef5f "SALT: A Descriptive Model For Blockchain"] {{Webarchive|url=https://web.archive.org/web/20200803201126/https://medium.com/@collin.cusce/blockchain-salt-a-descriptive-model-b19c973fef5f |date=2020-08-03 }}.
2018.</ref><ref>
Stefan Tai, Jacob Eberhardt, and Markus Klems.
[http://www.ise.tu-berlin.de/fileadmin/fg308/publications/2017/2017-tai-eberhardt-klems-SALT.pdf "Not ACID, not BASE, but SALT: A Transaction Processing Perspective on Blockchains"].
2017.
</ref><ref>
Chao Xie, Chunzhi Su, Manos Kapritsos, Yang Wang,
Navid Yaghmazadeh, Lorenzo Alvisi, Prince Mahajan.
[https://web.archive.org/web/20181019205538/https://pdfs.semanticscholar.org/d2ed/d36bc2ce58f26132980b5d609716d084c7b4.pdf "Salt: Combining ACID and BASE in a Distributed Database"].
</ref>

=== Other consistency models ===

Some other consistency models are as follows:
* Causal+ consistency<ref>{{cite web
| url=https://www.cs.cmu.edu/~dga/papers/cops-sosp2011.pdf
| title=Don't Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS
| first1=Wyatt | last1=Lloyd
| first2=Michael | last2=Freedman
| first3=Michael | last3=Kaminsky
| first4=David | last4=Andersen
| work=Proceedings of the 23rd ACM Symposium on Operating Systems Principles (SOSP'11)
}}</ref><ref>{{cite book
| title=ChainReaction: a causal+ consistent datastore based on chain replication
| pages=85
| first1=Sérgio | last1=Almeida
| first2=João | last2=Leitão
| first3=Luís | last3=Rodrigues
| work=Proceedings of the 8th ACM European Conference on Computer Systems (EuroSys'13)
| doi=10.1145/2465351.2465361
| chapter=Chain ''Reaction''
| year=2013
| isbn=9781450319942
}}</ref>
* [[Delta consistency]]
* [[Eventual consistency]]
* [[Fork consistency]]
* [[One-copy serializability]]
* [[Serializability]]
* [[Vector-field consistency]]
* [[Weak consistency]]
* [[Strong consistency]]
 
Several other consistency models have been conceived to express restrictions with respect to ordering or visibility of operations, or to deal with specific fault assumptions.<ref>{{cite journal
| author1 = Paolo Viotti
| author2 = Marko Vukolic
| title = Consistency in Non-Transactional Distributed Storage Systems
| date = 2016
| journal = ACM Computing Surveys
| volume = 49
| issue = 1
| pages = 19:1–19:34
| doi = 10.1145/2926965
| arxiv = 1512.00168
}}</ref>
 
== Relaxed memory consistency models ==
Some different consistency models can be defined by relaxing one or more requirements in [[sequential consistency]], called relaxed consistency models.<ref name="RelaxedModel">{{Citation |last=Mankin |first=Jenny |title=CSG280: Parallel Computing Memory Consistency Models: A Survey in Past and Present Research |date=2007}}</ref> These consistency models do not provide memory consistency at the hardware level. In fact, the programmers are responsible for implementing the memory consistency by applying synchronization techniques. The above models are classified based on four criteria and are detailed further.
 
There are four comparisons to define the relaxed consistency:
| doi = 10.1145/1017460.1017464
| arxiv = cs/0208027
| s2cid = 3206071
}}</ref> The issue method provides sequential consistency simulation by defining the restrictions for processes to issue memory operations, whereas the view method describes the visibility restrictions on the events order for processes.
; Relative model strength: Some consistency models are more restrictive than others. In other words, strict consistency models enforce more constraints as consistency requirements. The strength of a model can be defined by the program order or atomicity relaxations and the strength of models can also be compared. Some models are directly related if they apply the same relaxations or more. On the other hand, the models that relax different requirements are not directly related.
 
Sequential consistency has two requirements, program order and write atomicity. Different relaxed consistency models can be obtained by relaxing these requirements. This is done so that, along with relaxed constraints, the performance increases, but the programmer is responsible for implementing the memory consistency by applying synchronisation techniques and must have a good understanding of the hardware.
* Read to read and read to write program orders
 
=== Relaxed write to read ===

An approach to improving performance at the hardware level is relaxing the program order of a write followed by a read, which effectively hides the latency of write operations. This relaxation allows subsequent reads to complete in a relaxed order with respect to the previous writes from the processor. Because of this relaxation, some programs may fail to give sequentially consistent results, whereas other programs are still expected to give consistent results because the remaining program order constraints are enforced.
 
Three models fall under this category. The IBM 370 model is the strictest model. A read can be complete before an earlier write to a different address, but it is prohibited from returning the value of the write unless all the processors have seen the write. The SPARC V8 total store ordering model (TSO) model partially relaxes the IBM 370 Model, it allows a read to return the value of its own processor's write with respect to other writes to the same ___location i.e. it returns the value of its own write before others see it. Similar to the previous model, this cannot return the value of write unless all the processors have seen the write. The processor consistency model (PC) is the most relaxed of the three models and relaxes both the constraints such that a read can complete before an earlier write even before it is made visible to other processors.
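The store-buffer behaviour described above can be sketched in a few lines of Python. This is an illustrative simulation, not a hardware model: under TSO-like rules, a processor's read may return its own buffered write ("store forwarding") before that write has become visible to other processors. All class and variable names here are invented for the example.

```python
class Processor:
    """Toy sketch of a TSO-style core: writes go to a private FIFO store
    buffer and drain to shared memory later; reads check the own buffer first."""

    def __init__(self, memory):
        self.memory = memory          # shared memory, modelled as a dict
        self.store_buffer = []        # pending (address, value) writes, FIFO

    def write(self, addr, value):
        self.store_buffer.append((addr, value))   # not yet globally visible

    def read(self, addr):
        # Forward the youngest buffered write to this address, if any.
        for a, v in reversed(self.store_buffer):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def drain(self):
        # Make all buffered writes visible to every processor.
        for a, v in self.store_buffer:
            self.memory[a] = v
        self.store_buffer.clear()


memory = {}
p1, p2 = Processor(memory), Processor(memory)
p1.write("flag", 1)
own = p1.read("flag")     # 1: P1 sees its own write early
other = p2.read("flag")   # 0: the write has not reached shared memory yet
p1.drain()
late = p2.read("flag")    # 1: visible to all after the buffer drains
```

Under the stricter IBM 370 rules, the early read of `flag` by `p1` would instead have to wait until the write completed in shared memory.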
 
In Example A, the result is possible only in IBM 370 because read(A) is not issued until the write(A) in that processor is completed. On the other hand, this result is possible in TSO and PC because they allow the reads of the flags before the writes of the flags in a single processor.
|}
 
=== Relaxed write to read and write to write ===
 
Some models relax the program order even further by relaxing even the ordering constraints between writes to different locations. The SPARC V8 partial store ordering model (PSO) is the only example of such a model. The ability to pipeline and overlap writes to different locations from the same processor is the key hardware optimisation enabled by PSO. PSO is similar to TSO in terms of atomicity requirements, in that it allows a processor to read the value of its own write and prevents other processors from reading another processor's write before the write is visible to all other processors. Program order between two writes is maintained by PSO using an explicit STBAR instruction. The STBAR is inserted in a write buffer in implementations with FIFO write buffers. A counter is used to determine when all the writes before the STBAR instruction have been completed: issuing a write to the memory system increments the counter, a write acknowledgement decrements the counter, and when the counter becomes 0, all the previous writes are complete.
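The STBAR mechanism can be sketched as a write buffer that lets writes retire in any order except across a barrier marker. This is a toy illustration of the idea (the data structures and method names are invented, not SPARC hardware):

```python
class PSOWriteBuffer:
    """Sketch: PSO lets writes retire out of order, except across an STBAR."""

    STBAR = object()   # sentinel marking a store barrier in the buffer

    def __init__(self):
        self.buffer = []   # pending writes and STBAR markers, in program order

    def write(self, addr):
        self.buffer.append(addr)

    def stbar(self):
        self.buffer.append(self.STBAR)

    def can_retire(self, addr):
        # A write may go to memory only if no STBAR precedes it in the buffer.
        for entry in self.buffer:
            if entry is self.STBAR:
                return False
            if entry == addr:
                return True
        return False

    def retire(self, addr):
        assert self.can_retire(addr)
        self.buffer.remove(addr)
        if self.buffer and self.buffer[0] is self.STBAR:
            self.buffer.pop(0)   # all earlier writes done; barrier is lifted


buf = PSOWriteBuffer()
buf.write("A")
buf.write("B")
buf.stbar()
buf.write("C")
reorder_ok = buf.can_retire("B")   # True: A and B may retire in any order
fenced = buf.can_retire("C")       # False: C must wait for the STBAR
buf.retire("B")
buf.retire("A")
after = buf.can_retire("C")        # True: barrier lifted once A and B retired
```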
 
In the examples A and B, PSO allows both these non-sequentially consistent results. The safety net that PSO provides is similar to TSO's: it imposes program order from a write to a read and enforces write atomicity.
Line 443 ⟶ 481:
Similar to the previous models, the relaxations allowed by PSO are not sufficiently flexible to be useful for compiler optimisation, which requires a much more flexible optimisation.
 
=== Relaxing read and read to write program orders: Alpha, RMO, and PowerPC ===
 
In some models, all operations to different locations are relaxed. A read or write may be reordered with respect to a different read or write in a different ___location. ''Weak ordering'' may be classified under this category, and two types of release consistency models (RCsc and RCpc) also come under this model. Three commercial architectures fall under this category of relaxation: the Digital Alpha, SPARC V9 relaxed memory order (RMO), and IBM PowerPC models.
 
These three commercial architectures exhibit explicit fence instructions as their safety nets. The Alpha model provides two types of fence instructions, ''memory barrier'' (MB) and ''write memory barrier'' (WMB). The MB operation can be used to maintain program order of any memory operation before the MB with a memory operation after the barrier. Similarly, the WMB maintains program order only among writes. The SPARC V9 RMO model provides a MEMBAR instruction which can be customised to order previous reads and writes with respect to future read and write operations. There is no need for using read-modify-writes to achieve this order because the MEMBAR instruction can be used to order a write with respect to a succeeding read. The PowerPC model uses a single fence instruction called the SYNC instruction. It is similar to the MB instruction, except that reads can occur out of program order even if a SYNC is placed between two reads to the same ___location. This model also differs from Alpha and RMO in terms of atomicity: it allows a write to be seen earlier than a read's completion. A combination of read-modify-write operations may be required to preserve the illusion of write atomicity.
These models can be classified into two categories based on the type of safety net provided, which shows the necessity of carefully written programs. The nature of the synchronization helps to categorize the weak ordering, RCsc, and RCpc models. The Alpha, RMO, and PowerPC models provide fence instructions so that program order can be imposed between different memory operations.
 
RMO and PowerPC allow reordering of reads to the same ___location. These models violate sequential order in examples A and B. An additional relaxation allowed in these models is that memory operations following a read operation can be overlapped and reordered with respect to the read. Alpha and RMO allow a read to return the value of another processor's early write. From a programmer's perspective these models must maintain the illusion of write atomicity even though they allow the processor to read its own write early.

== Transactional memory models ==

Transactional memory model<ref name="RelaxedModel"/> is the combination of cache coherency and memory consistency models as a communication model for shared memory systems supported by software or hardware; a transactional memory model provides both memory consistency and cache coherency. A transaction is a sequence of operations executed by a process that transforms data from one consistent state to another. A transaction either commits when there is no conflict or aborts. In commits, all changes are visible to all other processes when a transaction is completed, while aborts discard all changes. Compared to relaxed consistency models, a transactional model is easier to use and can provide higher performance than a sequential consistency model.
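The commit/abort behaviour of a transaction can be sketched with optimistic version checks. This is a minimal illustration, not a real transactional memory API; the class, field, and key names are invented for the example:

```python
class Transaction:
    """Toy software transaction: buffer writes privately, then commit
    atomically unless a ___location we read has since changed (a conflict)."""

    def __init__(self, store):
        self.store = store            # shared store: key -> (value, version)
        self.read_versions = {}       # ___location -> version observed
        self.write_set = {}           # buffered updates, invisible until commit

    def read(self, key):
        if key in self.write_set:
            return self.write_set[key]
        value, version = self.store.get(key, (0, 0))
        self.read_versions[key] = version
        return value

    def write(self, key, value):
        self.write_set[key] = value

    def commit(self):
        # Abort if any ___location we read has since been updated.
        for key, seen in self.read_versions.items():
            if self.store.get(key, (0, 0))[1] != seen:
                self.write_set.clear()   # abort: discard all changes
                return False
        for key, value in self.write_set.items():
            _, version = self.store.get(key, (0, 0))
            self.store[key] = (value, version + 1)
        return True


store = {"x": (10, 0)}
t1, t2 = Transaction(store), Transaction(store)
t1.write("x", t1.read("x") + 1)
t2.write("x", t2.read("x") + 5)
ok1 = t1.commit()   # True: no conflict at commit time
ok2 = t2.commit()   # False: t2's read of x is now stale, so it aborts
```

Until a transaction commits, none of its writes are visible to other processes; an abort simply discards the write set, matching the all-or-nothing behaviour described above.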
 
== Other consistency models ==

Some other consistency models are as follows:
* Causal+ consistency<ref>{{cite web
| url=https://www.cs.cmu.edu/~dga/papers/cops-sosp2011.pdf
| title=Don't Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS
| first1=Wyatt | last1=Lloyd
| first2=Michael | last2=Freedman
| first3=Michael | last3=Kaminsky
| first4=David | last4=Andersen
| work=Proceedings of the 23rd ACM Symposium on Operating Systems Principles (SOSP’11)
}}</ref><ref>{{cite book
| pages=85–98
| first1=Sérgio | last1=Almeida
| first2=João | last2=Leitão
| first3=Luís | last3=Rodrigues
| title=Proceedings of the 8th ACM European Conference on Computer Systems
| chapter=ChainReaction: A causal+ consistent datastore based on chain replication
| doi=10.1145/2465351.2465361
| year=2013
| isbn=9781450319942
| s2cid=651196
}}</ref>
* Cross-Client Monotonicity<ref>{{cite web
| url=https://www.usenix.org/system/files/fast20-ganesan.pdf
| title=Strong and Efficient Consistency with Consistency-Aware Durability
| first1=Aishwarya | last1=Ganesan
| first2=Ramnatthan | last2=Alagappan
| first3=Andrea | last3=Arpaci-Dusseau
| first4=Remzi | last4=Arpaci-Dusseau
| work=18th USENIX Conference on File and Storage Technologies (FAST 20)
| year=2020
}}</ref>
* [[Delta consistency]]
* [[Fork consistency]]
* [[One-copy serializability]]
* [[Serializability]]
* [[Vector-field consistency]]
* [[Weak consistency]]
* [[Strong consistency]]
 
Several other consistency models have been conceived to express restrictions with respect to ordering or visibility of operations, or to deal with specific fault assumptions.<ref>{{cite journal
There are special (cf. ordinary) memory operations, themselves consisting of two classes of operations: ''sync'' or ''nsync'' operations. The latter are operations not used for synchronisation; the former are, and consist of ''acquire'' and ''release'' operations. An acquire is effectively a read memory operation used to obtain access to a certain set of shared locations. Release, on the other hand, is a write operation that is performed for granting permission to access the shared locations.
| author1 = Paolo Viotti
 
| author2 = Marko Vukolic
For sequential consistency (RCsc), the constraints are:
| title = Consistency in Non-Transactional Distributed Storage Systems
* acquire → all,
| date = 2016
* all → release,
| journal = ACM Computing Surveys
* special → special.
| volume = 49
 
| issue = 1
For processor consistency (RCpc) the write to read program order is relaxed, having constraints:
| pages = 19:1–19:34
* acquire → all,
| doi = 10.1145/2926965
* all → release,
| arxiv = 1512.00168
* special → special (expect when special write is followed by special read).
| s2cid = 118557
 
}}</ref>
Note: the above notation A → B, implies that if the operation A precedes B in the program order, then program order is enforced.
 
 
== Consistency and replication ==
=== Data-centric consistency models ===
 
Tanenbaum et al., 2007<ref name="Replica"/> defines the '''consistency model''' as a contract between the software (processes) and memory implementation (data store). This model guarantees that if the software follows certain rules, the memory works correctly. Since defining the last write operation is difficult in a system without a global clock, some restrictions can be applied to the values that can be returned by a read operation. The goal of data-centric consistency models is to provide a consistent view of a data store where processes may carry out concurrent updates.
 
==== Consistent ordering of operations ====
Some consistency models, such as the sequential and causal consistency models, deal with the order of operations on shared replicated data in order to provide consistency. In these models, all replicas must agree on a consistent global ordering of updates.
 
===== Grouping operations =====
 
In grouping operation, accesses to the synchronization variables are sequentially consistent. A process is allowed to access a synchronization variable only after all previous writes have been completed. In other words, accesses to synchronization variables are not permitted until all operations on the synchronization variables are completely performed.<ref name="Replica"/>
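The gating rule above can be sketched with a simple counter of outstanding writes: a synchronization variable may be acquired only once that counter reaches zero. This is an illustrative sketch with invented names, not a real synchronization primitive:

```python
class SyncVariable:
    """Sketch of the rule above: a process may acquire a synchronization
    variable only after all of its previous writes have completed."""

    def __init__(self):
        self.pending_writes = 0   # writes issued but not yet completed

    def write_issued(self):
        self.pending_writes += 1

    def write_completed(self):
        self.pending_writes -= 1

    def acquire(self):
        # Access is permitted only when every previous write has completed.
        return self.pending_writes == 0


sync = SyncVariable()
sync.write_issued()
first = sync.acquire()    # False: a previous write is still outstanding
sync.write_completed()
second = sync.acquire()   # True: all previous writes have completed
```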
 
==== Continuous consistency ====
 
Continuous consistency is defined later, in the consistency protocols section.
 
=== Client-centric consistency models ===
 
In distributed systems, maintaining [[sequential consistency]] in order to control concurrent operations is essential. In some special data stores without simultaneous updates, client-centric consistency models can deal with inconsistencies in a less costly way. The following models are some client-centric consistency models:<ref name="Replica"/>
 
==== Monotonic read consistency ====
Tanenbaum et al., 2007<ref name="Replica"/> defines monotonic read consistency as follows:
 
"If a process reads the value of a data item x, any successive read operation on x by that process will always return that same value or a more recent value."<ref name="Replica"/>
 
Monotonic read consistency guarantees that after a process reads a value of data item x at time t, it will never see the older value of that data item.
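This guarantee can be sketched as a client session that remembers the highest version it has observed per item and refuses any reply that would travel back in time. The class and method names below are illustrative, not from any particular system:

```python
class Session:
    """Sketch of monotonic reads: track the version last seen for each
    item and reject replica replies that are older."""

    def __init__(self):
        self.last_seen = {}   # item -> highest version already read

    def monotonic_read(self, item, value, version):
        # A replica returned (value, version); accept only if not older.
        if version < self.last_seen.get(item, -1):
            raise ValueError("stale replica would violate monotonic reads")
        self.last_seen[item] = version
        return value


s = Session()
s.monotonic_read("x", "v2", 2)        # first read observes version 2
try:
    s.monotonic_read("x", "v1", 1)    # an older replica: must be refused
    violated = False
except ValueError:
    violated = True
```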
 
==== Monotonic write consistency ====
 
Monotonic write consistency condition is defined by Tanenbaum et al., 2007<ref name="Replica"/> as follows:
 
"A write operation by a process on a data item X is completed before any successive write operation on X by the same process."<ref name="Replica"/>
 
==== Read-your-writes consistency ====
 
A value written by a process on a data item X will be always available to a successive read operation performed by the same process on data item X.<ref name="Replica"/>
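One common way to realize this guarantee is for the client to remember the version produced by its own latest write and read only from replicas that have applied it. A minimal sketch, with invented names:

```python
class WriteTrackingSession:
    """Sketch of read-your-writes: a client tracks the version created by
    its own last write and only reads from replicas that include it."""

    def __init__(self):
        self.my_write_version = {}   # item -> version created by own writes

    def record_write(self, item, version):
        self.my_write_version[item] = version

    def replica_ok_for_read(self, item, replica_version):
        # The replica must have applied this client's latest write.
        return replica_version >= self.my_write_version.get(item, 0)


s = WriteTrackingSession()
s.record_write("x", 7)                   # client's write produced version 7
stale = s.replica_ok_for_read("x", 5)    # False: replica missed the write
fresh = s.replica_ok_for_read("x", 7)    # True: safe to read here
```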
 
==== Writes-follows-reads consistency ====

In writes-follow-reads consistency, updates are propagated after performing the previous read operations. Tanenbaum et al., 2007<ref name="Replica"/> defines the following condition for writes-follow-reads consistency:

"A write operation by a process on a data item x following a previous read operation on x by the same process is guaranteed to take place on the same or a more recent value of x that was read."<ref name="Replica"/>
 
=== Consistency protocols ===
| volume = 4
| pages = 21
}}</ref> In this model, the consistency semantics of an application is described by using conits in the application. Since the consistency requirements can differ based on application semantics, Yu and Vahdat (2000)<ref name="Continuous"/> believe that a predefined uniform consistency model may not be an appropriate approach. The application should specify the consistency requirements that satisfy the application semantics. In this model, an application specifies each consistency requirement as a conit (abbreviation of consistency unit). A conit can be a physical or logical consistency and is used to measure the consistency. Tanenbaum et al., 2007<ref name="Replica"/> describes the notion of a conit by giving an example.
 
There are three inconsistencies that can be tolerated by applications.
 
; Deviation in numerical values:<ref name="Continuous"/> Numerical deviation bounds the difference between the conit value and the relative value of the last update. A weight can be assigned to the writes which defines the importance of the writes in a specific application. The total weights of unseen writes for a conit can be defined as a numerical deviation in an application. There are two different types of numerical deviation; absolute and relative numerical deviation.
; Deviation in ordering:<ref name="Continuous"/> Ordering deviation is the discrepancy between the local order of writes in a replica and their relative ordering in the eventual final image.
; Deviation in staleness between replicas:<ref name="Continuous"/> Staleness deviation defines the validity of the oldest write by bounding the difference between the current time and the time of the oldest write on a conit not seen locally. Each server has a local queue of uncertain writes whose actual order needs to be determined and applied on a conit. The maximal length of the uncertain-writes queue is the bound of the ordering deviation. When the number of writes exceeds the limit, instead of accepting a newly submitted write, the server will attempt to commit uncertain writes by communicating with other servers based on the order in which the writes should be executed.
 
If all three deviation bounds are set to zero, the continuous consistency model reduces to strong consistency.
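The three deviation measures can be sketched as simple computations over the writes a replica has not yet seen. This is an illustrative simplification (in particular, ordering deviation is approximated here by the count of unseen writes, and the data layout is invented):

```python
def conit_deviations(local_seen, all_writes, now):
    """Sketch of the three conit deviation measures described above.
    Each write is (id, weight, timestamp); `local_seen` holds the ids
    this replica has already applied."""
    unseen = [w for w in all_writes if w[0] not in local_seen]
    numerical = sum(weight for _, weight, _ in unseen)   # unseen write weight
    ordering = len(unseen)          # tentative writes still to be ordered
    staleness = max((now - ts for _, _, ts in unseen), default=0)
    return numerical, ordering, staleness


writes = [("w1", 2, 10), ("w2", 5, 12), ("w3", 1, 15)]
num, order, stale = conit_deviations({"w1"}, writes, now=20)
# num = 6 (weights 5 + 1), order = 2, stale = 8 (oldest unseen at t=12)
```

With all three bounds set to zero, no unseen write may exist at read time, which is why the model then reduces to strong consistency.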
 
==== Primary-based protocols ====
 
In the simplest primary-based protocol that supports replication, also known as primary-backup protocol, write operations are forwarded to a single server and read operations can be performed locally.
: '''Example:''' Tanenbaum et al., 2007<ref name="Replica"/> gives an example of a primary-backup protocol. The diagram of primary-backup protocol shows an example of this protocol. When a client requests a write, the write request is forwarded to a primary server. The primary server sends the request to backups to perform the update. The server then receives the update acknowledgement from all backups and sends the acknowledgement of completion of writes to the client. Any client can read the last available update locally. The trade-off of this protocol is that a client who sends the update request might have to wait a long time for the acknowledgement in order to continue. This problem can be solved by performing the updates locally and then asking other backups to perform their updates. The non-blocking primary-backup protocol does not guarantee the consistency of updates on all backup servers; however, it improves the performance. In the primary-backup protocol, all processes will see the same order of write operations since this protocol orders all incoming writes based on a globally unique time. Blocking protocols guarantee that processes view the result of the last write operation.
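The blocking update path of this example can be sketched as follows. The classes and the `"ack"` handshake are illustrative inventions; the point is only that the client is answered after every backup has acknowledged:

```python
class Backup:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value
        return "ack"


class Primary:
    """Sketch of the blocking primary-backup update path described above."""

    def __init__(self, backups):
        self.data = {}
        self.backups = backups

    def write(self, key, value):
        self.data[key] = value
        # Forward the update and wait for every backup to acknowledge.
        acks = [b.apply(key, value) for b in self.backups]
        assert all(a == "ack" for a in acks)
        return "write complete"   # only now is the client answered


backups = [Backup(), Backup()]
primary = Primary(backups)
result = primary.write("x", 42)
local_reads = [b.data["x"] for b in backups]   # every backup can now serve 42
```

A non-blocking variant would return to the client before the acknowledgements arrive, trading the consistency guarantee for lower latency.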
 
===== Local-write protocols =====
===== Active replication =====
 
In active replication,<ref name="Replica"/> there is a process associated with each replica to perform the write operation. In other words, updates are sent to each replica in the form of an operation in order to be executed. All updates need to be performed in the same order in all replicas. As a result, a totally-ordered multicast mechanism is required. There is a scalability issue in implementing such a multicasting mechanism in large distributed systems. There is another approach in which each operation is sent to a central coordinator (sequencer). The coordinator first assigns a sequence number to each operation and then forwards the operation to all replicas. The second approach, however, also cannot solve the scalability problem.
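The sequencer variant can be sketched as follows: the coordinator stamps each operation with a global sequence number, and every replica applies operations strictly in that order, buffering anything that arrives early. Names are illustrative:

```python
class Replica:
    def __init__(self):
        self.log = []            # operations applied, in sequence order
        self.next_expected = 0
        self.pending = {}        # out-of-order operations, buffered

    def deliver(self, seq, op):
        # Buffer out-of-order operations; apply strictly in sequence order.
        self.pending[seq] = op
        while self.next_expected in self.pending:
            self.log.append(self.pending.pop(self.next_expected))
            self.next_expected += 1


class Sequencer:
    """Sketch of the central-coordinator approach: a sequencer stamps each
    operation with a global sequence number before multicasting it."""

    def __init__(self, replicas):
        self.replicas = replicas
        self.next_seq = 0

    def submit(self, op):
        seq = self.next_seq
        self.next_seq += 1
        for r in self.replicas:
            r.deliver(seq, op)
        return seq


replicas = [Replica(), Replica()]
seq = Sequencer(replicas)
seq.submit("incr x")
seq.submit("set y=3")
same_order = replicas[0].log == replicas[1].log == ["incr x", "set y=3"]
```

Because every operation passes through the single sequencer, the mechanism is simple but becomes a bottleneck, which is the scalability problem noted above.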
 
===== Quorum-based protocols =====
 
Voting can be another approach in replicated-write protocols. In this approach, a client requests and receives permission from multiple servers in order to read and write replicated data. As an example, suppose in a distributed file system a file is replicated on N servers. To update a file, a client must send a request to at least {{nowrap|N/2 + 1}} servers and obtain their agreement to perform the update. After the agreement, changes are applied on the file and a new version number is assigned to the updated file. Similarly, for reading a replicated file, a client sends a request to {{nowrap|N/2 + 1}} servers in order to receive the associated version number from those servers. The read operation is completed if all received version numbers are the most recent version.<ref name="Replica"/>
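The majority arithmetic can be sketched as below. This is a deliberately simplified illustration: grants are modelled as a flag, and the read simply takes the highest version in its quorum, relying on the fact that any two quorums of size N/2 + 1 must overlap. A real protocol performs the version comparison described in the text:

```python
def quorum_write(servers, value):
    """Sketch of quorum-based replicated writes: needs agreement from at
    least N/2 + 1 servers, then installs the value with a new version."""
    needed = len(servers) // 2 + 1
    granted = [s for s in servers if s["grants"]][:needed]
    if len(granted) < needed:
        return False                      # no write quorum: reject
    new_version = max(s["version"] for s in servers) + 1
    for s in granted:
        s["version"], s["value"] = new_version, value
    return True


def quorum_read(servers):
    """Read from a read quorum and return the value with the highest version."""
    needed = len(servers) // 2 + 1
    quorum = servers[:needed]             # any N/2 + 1 servers
    newest = max(quorum, key=lambda s: s["version"])
    return newest["value"], newest["version"]


servers = [{"version": 0, "value": None, "grants": True} for _ in range(5)]
ok = quorum_write(servers, "v1")          # 3 of 5 servers grant and update
value, version = quorum_read(servers)     # read quorum overlaps write quorum
```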
 
===== Cache-coherence protocols =====
Cache consistency models can differ in their coherence detection strategies that define when inconsistencies occur. There are two approaches to detect inconsistency: static and dynamic solutions. In the static solution, a compiler determines which variables can cause the cache inconsistency, and the compiler enforces an instruction in order to avoid the inconsistency problem. In the dynamic solution, the server checks for inconsistencies at run time to control the consistency of the cached data that has changed after it was cached.
 
The coherence enforcement strategy is another cache-coherence protocol. It defines ''how'' to provide consistency in caches by using the copies located on the server. One way to keep the data consistent is to never cache shared data: a server keeps the data and applies a consistency protocol, such as a primary-based protocol, to ensure the consistency of shared data, while clients cache only private data. In the case that shared data are cached, there are two approaches to enforcing cache coherence.
 
In the first approach, when shared data is updated, the server forwards an invalidation to all caches. In the second approach, the update itself is propagated. Most caching systems apply one of these two approaches or dynamically choose between them.
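The difference between the two approaches can be sketched as follows. This is a minimal single-process illustration with invented names, not a real coherence protocol:

```python
class Server:
    """Holds the authoritative copy and, on each write, either invalidates
    cached copies (first approach) or pushes the new value (second approach)."""
    def __init__(self, propagate_updates):
        self.data = {}
        self.caches = []
        self.propagate_updates = propagate_updates

    def write(self, key, value):
        self.data[key] = value
        for cache in self.caches:
            if self.propagate_updates:
                cache.store[key] = value       # second approach: push the update
            else:
                cache.store.pop(key, None)     # first approach: invalidate

class Cache:
    def __init__(self, server):
        self.server = server
        self.store = {}
        server.caches.append(self)

    def read(self, key):
        if key not in self.store:              # miss: fetch from the server
            self.store[key] = self.server.data[key]
        return self.store[key]

srv = Server(propagate_updates=False)
c = Cache(srv)
srv.write("x", 1)
assert c.read("x") == 1       # miss, fetched from the server
srv.write("x", 2)             # invalidation removes the stale cached copy
assert c.read("x") == 2       # miss again, fetched fresh
```

Invalidation sends small messages but forces a re-fetch on the next read, whereas update propagation keeps caches warm at the cost of shipping every new value, which is wasteful for data that is written more often than it is read.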
 
== See also ==
* {{annotated link|Cache coherence}}
* {{annotated link|Distributed shared memory}}
* {{annotated link|Non-uniform memory access}}
 
== References ==
* {{cite journal
 | last1 = Viotti
 | first1 = Paolo
 | last2 = Vukolić
 | first2 = Marko
 | title = Consistency in Non-Transactional Distributed Storage Systems
 | journal = ACM Computing Surveys
 | year = 2016
 | volume = 49
 | issue = 1
 | pages = 19:1–19:34
 | doi = 10.1145/2926965
 | arxiv = 1512.00168
 | s2cid = 118557
 }}
* {{Cite thesis |last=Sezgin |first=Ali |title=Formalization and Verification of Shared Memory |date=August 2004 |access-date=June 27, 2024 |degree=PhD |publisher=The University of Utah |url=https://users.cs.utah.edu/~ganesh/unpublished/sezgin_phd_04.pdf}} (contains many valuable references)
* {{Citation |last1=Yelick |first1=Katherine |title=A Proposal for a UPC Memory Consistency Model, v1.0 |date=2004 |url=http://www.gwu.edu/~upc/downloads/upcmem.pdf |archive-url=https://web.archive.org/web/20050520185509/http://www.gwu.edu/%7Eupc/downloads/upcmem.pdf |archive-date=20 May 2005 |url-status=dead |series=Lawrence Berkeley National Lab Tech Report |doi=10.2172/823757 |id=LBNL-54983 |last2=Bonachea |first2=Dan |last3=Wallace |first3=Charles}}
* {{cite journal
 | last = Mosberger
 | first = David
 | title = Memory consistency models
 | journal = ACM SIGOPS Operating Systems Review
 | year = 1993
 | volume = 27
 | issue = 1
 | pages = 18–26
 | url = http://citeseer.ist.psu.edu/mosberger93memory.html
 | doi = 10.1145/160551.160553
 | citeseerx = 10.1.1.331.2924
 | s2cid = 16123648
 }}
*{{cite journal |author1=Sarita V. Adve |author2=Kourosh Gharachorloo |title=Shared Memory Consistency Models: A Tutorial |date=December 1996 |journal=IEEE Computer |volume=29 |issue=12 |pages=66–76 |url=http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf |access-date=2008-05-28 |doi=10.1109/2.546611 |citeseerx=10.1.1.36.8566 |archive-date=2008-09-07 |archive-url=https://web.archive.org/web/20080907125534/http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf |url-status=dead }}
* {{cite journal
 | last1 = Steinke
 | first1 = Robert C.
 | last2 = Nutt
 | first2 = Gary J.
 | title = A unified theory of shared memory consistency
 | journal = Journal of the ACM
 | year = 2004
 | volume = 51
 | issue = 5
 | pages = 800–849
 | arxiv = cs.DC/0208027
 | doi = 10.1145/1017460.1017464
 | s2cid = 3206071
 }}
* {{cite journal |last=Terry |first=Doug |title=Replicated Data Consistency Explained Through Baseball |journal=Communications of the ACM |year=2013 |volume=56 |issue=12 |pages=82–89 |url=https://www.microsoft.com/en-us/research/wp-content/uploads/2011/10/ConsistencyAndBaseballReport.pdf}}
 
== External links ==