A weakness of primary-backup schemes is that only one replica is actually performing operations. Fault tolerance is gained, but maintaining an identical backup system doubles the cost. For this reason, starting {{circa|1985}}, the distributed systems research community began to explore alternative methods of replicating data. An outgrowth of this work was the emergence of schemes in which a group of replicas could cooperate, with each process acting as a backup while also handling a share of the workload.
 
Computer scientist [[Jim Gray (computer scientist)|Jim Gray]] analyzed multi-primary replication schemes under the transactional model and published a widely cited paper skeptical of the approach, "The Dangers of Replication and a Solution".<ref>[http://research.microsoft.com/~gray/replicas.ps "The Dangers of Replication and a Solution"]</ref><ref>''Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data: SIGMOD '99'', Philadelphia, PA, US; June 1–3, 1999, Volume 28; p. 3.</ref> He argued that unless the data splits in some natural way so that the database can be treated as ''n'' disjoint sub-databases, concurrency control conflicts will seriously degrade performance, and the group of replicas will probably slow down as a function of ''n''. Gray suggested that the most common approaches are likely to result in degradation that scales as ''O(n³)''. His solution, partitioning the data, is only viable in situations where the data actually has a natural partitioning key.
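 
Gray's remedy can be illustrated with a minimal sketch (not taken from his paper; the partition count, the hash-based routing, and all names below are assumptions for illustration): when the data has a natural partitioning key, every item belongs to exactly one disjoint sub-database with a single owning primary, so each update is handled by one replica and concurrency control never has to span replicas.

<syntaxhighlight lang="python">
# Illustrative sketch only: partition data by a key so each item has exactly
# one owning primary, avoiding cross-replica concurrency-control conflicts.
from hashlib import sha256

NUM_PARTITIONS = 4  # assumed value: one primary replica owns each partition

class Partition:
    """One disjoint sub-database with its own primary; its updates never
    conflict with updates handled by other partitions."""
    def __init__(self, pid: int):
        self.pid = pid
        self.store = {}

    def apply_update(self, key: str, value: str) -> None:
        # Concurrency control is purely local to this partition.
        self.store[key] = value

partitions = [Partition(i) for i in range(NUM_PARTITIONS)]

def route_update(key: str, value: str) -> None:
    """Route the update to the single partition that owns the key."""
    pid = int(sha256(key.encode()).hexdigest(), 16) % NUM_PARTITIONS
    partitions[pid].apply_update(key, value)

route_update("customer:42", "new address")  # always lands on the same primary
</syntaxhighlight>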
 
Between 1985 and 1987, the [[virtual synchrony]] model was proposed and emerged as a widely adopted standard (it was used in the Isis Toolkit, Horus, Transis, Ensemble, Totem, [[Spread Toolkit|Spread]], C-Ensemble, Phoenix and Quicksilver systems, and is the basis for the [[Common Object Request Broker Architecture|CORBA]] fault-tolerant computing standard). Virtual synchrony permits a multi-primary approach in which a group of processes cooperates to parallelize some aspects of request processing. The scheme can only be used for some forms of in-memory data, but can provide linear speedups in the size of the group.
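 
As a rough illustration (a minimal sketch under assumed names, not the API of Isis or any of the systems above), the idea is that every member of a process group applies the same totally ordered stream of updates to its in-memory copy, while reads are answered locally by whichever member receives them, so read capacity grows with the size of the group.

<syntaxhighlight lang="python">
# Illustrative sketch only: a group of in-memory replicas that all deliver
# updates in one agreed order, while each replica serves reads locally.
class Replica:
    """One in-memory copy of the data, held by a member of the process group."""
    def __init__(self, name: str):
        self.name = name
        self.state = {}

    def deliver(self, update):
        # Every member delivers the same updates in the same order, so all
        # copies stay identical (the guarantee virtual synchrony provides).
        key, value = update
        self.state[key] = value

    def read(self, key):
        # Reads are answered locally; adding members spreads the read load.
        return self.state.get(key)

class Group:
    """Stand-in for an ordered multicast layer; real systems run a group
    membership and ordering protocol rather than a simple loop."""
    def __init__(self, members):
        self.members = members

    def multicast(self, update):
        for member in self.members:  # same delivery order at every member
            member.deliver(update)

group = Group([Replica("a"), Replica("b"), Replica("c")])
group.multicast(("x", 1))
assert all(r.read("x") == 1 for r in group.members)  # all copies identical
</syntaxhighlight>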