{{short description|Sharing information to ensure consistency in computing}}
{{More footnotes needed|date=October 2012}}
'''Replication''' in [[computing]] refers to maintaining multiple copies of data, processes, or resources to ensure consistency across redundant components. This fundamental technique spans [[database management system|databases]], [[file system|file systems]], and [[distributed computing|distributed systems]], serving to improve [[high availability|availability]], [[fault-tolerance]], accessibility, and performance.<ref name="kleppmann"/> Through replication, systems can continue operating when components fail ([[failover]]), serve requests from geographically distributed locations, and balance load across multiple machines. The challenge lies in maintaining consistency between replicas while managing the fundamental tradeoffs between data consistency, system availability, and [[Network partition|network partition tolerance]] – constraints known as the [[CAP theorem]].<ref>{{cite book |last=Brewer |first=Eric A. |chapter=Towards robust distributed systems (Abstract) |page=7 |title=Proceedings of the nineteenth annual ACM symposium on Principles of distributed computing |year=2000 |doi=10.1145/343477.343502|isbn=1-58113-183-6 }}</ref>
 
== {{Anchor|MASTER-ELECTION}}Terminology ==
Replication in space or in time is often linked to scheduling algorithms.<ref>Najme Mansouri, Gholam Hosein Dastghaibyfard, and Ehsan Mansouri. "Combination of data replication and scheduling algorithm for improving data availability in Data Grids", ''Journal of Network and Computer Applications'' (2013)</ref>
 
Access to a replicated entity is typically indistinguishable from access to a single non-replicated entity. The replication itself should be [[transparency (human-computer interaction)|transparent]] to an external user. In a failure scenario, a [[failover]] of replicas should be hidden as much as possible with respect to [[quality of service]].<ref>V. Andronikou, K. Mamouras, K. Tserpes, D. Kyriazis, T. Varvarigou, "Dynamic QoS-aware Data Replication in Grid Environments", ''Elsevier Future Generation Computer Systems - The International Journal of Grid Computing and eScience'', 2012</ref>
 
Computer scientists further describe replication as being either:
* '''Transactional replication''': used for replicating [[transactional data]], such as a database. The [[one-copy serializability]] model is employed, which defines valid outcomes of a transaction on replicated data in accordance with the overall [[ACID]] (atomicity, consistency, isolation, durability) properties that transactional systems seek to guarantee.
* '''[[State machine replication]]''': assumes that the replicated process is a [[deterministic finite automaton]] and that [[atomic broadcast]] of every event is possible. It is based on [[Consensus (computer science)|distributed consensus]] and has a great deal in common with the transactional replication model. This is sometimes mistakenly used as a synonym of active replication. State machine replication is usually implemented by a replicated log consisting of multiple subsequent rounds of the [[Paxos algorithm]]. This was popularized by Google's Chubby system, and is the core behind the open-source [[Keyspace (data store)|Keyspace data store]].<ref name=keyspace>{{cite web | access-date=2010-04-18 | year = 2009 | url=http://scalien.com/whitepapers |title=Keyspace: A Consistently Replicated, Highly-Available Key-Value Store | author=Marton Trencseni, Attila Gazso}}</ref><ref name=chubby>{{cite web | access-date=2010-04-18 | year=2006 | url=http://labs.google.com/papers/chubby.html | title=The Chubby Lock Service for Loosely-Coupled Distributed Systems | author=Mike Burrows | url-status=dead | archive-url=https://web.archive.org/web/20100209225931/http://labs.google.com/papers/chubby.html | archive-date=2010-02-09 }}</ref>
* '''[[Virtual synchrony]]''': involves a group of processes which cooperate to replicate in-memory data or to coordinate actions. The model defines a distributed entity called a ''process group''. A process can join a group and is provided with a checkpoint containing the current state of the data replicated by group members. Processes can then send [[multicast]]s to the group and will see incoming multicasts in the identical order. Membership changes are handled as a special multicast that delivers a new "membership view" to the processes in the group.<ref>{{Cite book |last1=Birman |first1=K. |last2=Joseph |first2=T. |title=Proceedings of the eleventh ACM Symposium on Operating systems principles - SOSP '87 |chapter=Exploiting virtual synchrony in distributed systems |date=1987-11-01 |chapter-url=https://doi.org/10.1145/41457.37515 |series=SOSP '87 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=123–138 |doi=10.1145/41457.37515 |isbn=978-0-89791-242-6|s2cid=7739589 }}</ref>
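The state machine replication model above can be illustrated with a minimal sketch (all names hypothetical): because each replica is a deterministic machine, delivering the same command log in the same order to every replica yields identical states. A real system would agree on that order via a consensus protocol such as Paxos; here the log is simply given.

```python
# State-machine-replication sketch: replicas are deterministic state
# machines, so applying an identical, identically-ordered command log
# leaves every replica in the same state.

class CounterReplica:
    """A trivial deterministic state machine: an integer counter."""

    def __init__(self):
        self.state = 0

    def apply(self, command):
        op, amount = command
        if op == "add":
            self.state += amount
        elif op == "set":
            self.state = amount

def replicate(log, replicas):
    """Deliver the agreed command log, in order, to every replica."""
    for command in log:
        for replica in replicas:
            replica.apply(command)

log = [("add", 5), ("set", 10), ("add", 3)]
replicas = [CounterReplica() for _ in range(3)]
replicate(log, replicas)
# Every replica converges to the same state (13), because the machine
# is deterministic and the delivery order is identical.
```

In practice the hard part is the agreement on the log order, not the application of commands; that is what Paxos-style rounds provide.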
 
== {{Anchor|DATABASE}}Database replication ==
[[Database]] replication involves maintaining copies of the same data on multiple machines, typically implemented through three main approaches: single-leader, multi-leader, and leaderless replication.<ref name="kleppmann">{{cite book |last=Kleppmann |first=Martin |title=Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems |year=2017 |publisher=O'Reilly Media |isbn=9781491903100 |pages=151–185}}</ref>
 
In [[Master–slave (technology)|single-leader]] (also called primary/replica) replication, one database instance is designated as the leader (primary), which handles all write operations. The leader logs these updates, which then propagate to replica nodes. Each replica acknowledges receipt of updates, enabling subsequent write operations. Replicas primarily serve read requests, though they may serve stale data due to replication lag – the delay in propagating changes from the leader.
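A minimal sketch of single-leader replication (class and method names hypothetical): the leader accepts all writes and appends them to a log; a replica applies log entries when it synchronizes, so a read served before it catches up returns stale data — exactly the replication lag described above.

```python
# Single-leader replication sketch: writes go to the leader, replicas
# pull the leader's change log asynchronously and can lag behind it.

class Leader:
    def __init__(self):
        self.data = {}
        self.log = []          # ordered change log shipped to replicas

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

class Replica:
    def __init__(self, leader):
        self.leader = leader
        self.data = {}
        self.applied = 0       # index of the next log entry to apply

    def sync(self):
        """Apply log entries not yet seen (asynchronous in practice)."""
        for key, value in self.leader.log[self.applied:]:
            self.data[key] = value
        self.applied = len(self.leader.log)

    def read(self, key):
        return self.data.get(key)

leader = Leader()
replica = Replica(leader)
leader.write("x", 1)
stale = replica.read("x")      # None: the write has not propagated yet
replica.sync()
fresh = replica.read("x")      # 1: the replica has caught up
```

Synchronous replication would make `write` wait for the replica's acknowledgment, trading write latency for freshness.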
 
In [[multi-master replication]] (also called multi-leader), updates can be submitted to any database node and then propagate to the other servers. This approach is particularly beneficial in multi-data-center deployments, where it enables local write processing while masking inter-data-center network latency.<ref name="kleppmann"/> However, it introduces substantially increased costs and complexity which may make it impractical in some situations. The most common challenge in multi-master replication is transactional conflict prevention or [[conflict resolution|resolution]] when concurrent modifications occur on different leader nodes.
Database replication becomes more complex when it scales up [[horizontal scalability|horizontally]] and vertically. Horizontal scale-up has more data replicas, while vertical scale-up has data replicas located at greater physical distances. Problems raised by horizontal scale-up can be alleviated by a multi-layer, multi-view access [[network protocol|protocol]]. The early problems of vertical scale-up have largely been addressed by improving Internet [[Reliability (computer networking)|reliability]] and performance.<ref>{{cite web
| url = http://facta.junis.ni.ac.rs/eae/fu2k71/4obradovic.pdf
| title = Measurement of the Achieved Performance Levels of the WEB Applications With Distributed Relational Database
| work = Electronics and Energetics | volume = 20 | number = 1 | pages = 31{{ndash}}43
| date = April 2007 | access-date = 30 January 2014
| author1 = Dragan Simic | author2 = Srecko Ristic | author3 = Slobodan Obradovic
| publisher = Facta Universitatis
}}</ref><ref>{{cite web
| url = http://oatao.univ-toulouse.fr/12933/1/Mokadem_12933.pdf
| title = Data Replication Strategies with Performance Objective in Data Grid Systems: A Survey
| work = International Journal of Grid and Utility Computing | volume = 6 | number = 1 | pages = 30{{ndash}}46
| date = December 2014 | access-date = 18 December 2014
| author1 = Mokadem Riad | author2 = Hameurlain Abdelkader
| publisher = Inderscience Publishers
}}</ref>
 
Most synchronous (or eager) replication solutions perform conflict prevention, while asynchronous (or lazy) solutions have to perform conflict resolution. For instance, if the same record is changed on two nodes simultaneously, an eager replication system would detect the conflict before confirming the commit and abort one of the transactions. A [[lazy replication]] system would allow both [[database transaction|transactions]] to commit and run a conflict resolution during re-synchronization. Conflict resolution methods include techniques such as last-write-wins, application-specific logic, and merging concurrent updates.<ref name="kleppmann"/>
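The last-write-wins strategy can be sketched as follows (function and variable names hypothetical): each write carries a timestamp, and when two leaders have accepted concurrent writes to the same key, every node keeps the write with the higher timestamp. Note that this silently discards the losing write, which is why many systems prefer application-specific merge logic.

```python
# Last-write-wins (LWW) conflict resolution sketch for multi-leader
# replication. Each replica maps key -> (value, timestamp); during
# re-synchronization the higher timestamp wins on every node, so all
# replicas converge to the same state.

def lww_merge(replica_a, replica_b):
    """Merge two replicas' states; the higher timestamp wins per key."""
    merged = {}
    for key in replica_a.keys() | replica_b.keys():
        a = replica_a.get(key, (None, -1))
        b = replica_b.get(key, (None, -1))
        merged[key] = a if a[1] >= b[1] else b
    return merged

# Concurrent writes to "color" accepted by two different leaders:
node_a = {"color": ("red", 100)}
node_b = {"color": ("blue", 105), "size": ("large", 90)}
resolved = lww_merge(node_a, node_b)
# Both nodes adopt the same resolved state: "blue" wins (105 > 100),
# and the write of "red" is lost.
```

Using wall-clock timestamps across nodes is itself fragile (clock skew); logical or hybrid clocks are a common refinement.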
When data is replicated between database servers such that the information remains consistent throughout the database system, and users cannot tell or even know which server in the DBMS they are using, the system is said to exhibit replication transparency.
 
However, replication transparency cannot always be achieved. When data is replicated in a database, it is constrained by the [[CAP theorem]] or the [[PACELC theorem]]. In the NoSQL movement, data consistency is usually sacrificed in exchange for other, more desired properties, such as availability and partition tolerance. Various [[Consistency model|data consistency models]] have also been developed to serve as service-level agreements (SLAs) between service providers and users.
 
There are several techniques for replicating data changes between nodes:<ref name="kleppmann"/>
* '''Statement-based replication''': Write requests (such as SQL statements) are logged and transmitted to replicas for execution. This can be problematic with non-deterministic functions or statements having side effects.
* '''Write-ahead log (WAL) shipping''': The storage engine's low-level write-ahead log is replicated, ensuring identical data structures across nodes.
* '''Logical (row-based) replication''': Changes are described at the row level using a dedicated log format, providing greater flexibility and independence from storage engine internals.
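The hazard of statement-based replication, and how row-based replication avoids it, can be sketched in a few lines (all names hypothetical; real systems ship SQL text or binary row images):

```python
# Statement-based vs. logical (row-based) replication sketch.
import random

# Statement-based: each node re-executes the statement itself. With a
# non-deterministic "statement" the nodes almost surely diverge.
statement = lambda: random.random()
leader_s = {"token": statement()}     # leader runs the statement
replica_s = {"token": statement()}    # replica re-runs it: different value

# Logical (row-based): the leader evaluates the statement once and ships
# the resulting row value, so every replica stores the identical data.
value = statement()
leader_r, replica_r = {}, {}
for db in (leader_r, replica_r):
    db["token"] = value               # apply the shipped row change
```

The same problem arises with statements that read the current time or have side effects, which is one reason row-based formats are the default in several modern databases.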
 
== Disk storage replication ==
[[File:Storage replication-en.svg|thumb|Storage replication]]
Active (real-time) storage replication is usually implemented by distributing updates of a [[block device]] to several physical [[hard disk]]s. This way, any [[file system]] supported by the [[operating system]] can be replicated without modification, as the file system code works on a level above the block device driver layer. It is implemented either in hardware (in a [[disk array controller]]) or in software (in a [[device driver]]).
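The block-level mechanism can be sketched as follows (disks modeled as byte arrays; class and method names hypothetical). Because every write to the logical device is applied to each underlying disk, anything layered on top — including a file system — is replicated unmodified; a real driver would additionally handle write ordering, errors, and resynchronization.

```python
# Software block-device mirroring sketch: one logical device distributes
# each block update to several physical disks, which stay identical.

class MirroredBlockDevice:
    def __init__(self, num_disks, num_blocks, block_size=4):
        self.block_size = block_size
        self.disks = [bytearray(num_blocks * block_size)
                      for _ in range(num_disks)]

    def write_block(self, block_no, data):
        """Apply one block update to every physical disk."""
        assert len(data) == self.block_size
        start = block_no * self.block_size
        for disk in self.disks:
            disk[start:start + self.block_size] = data

    def read_block(self, block_no, disk_no=0):
        """Any disk can serve the read; all hold identical data."""
        start = block_no * self.block_size
        return bytes(self.disks[disk_no][start:start + self.block_size])

dev = MirroredBlockDevice(num_disks=2, num_blocks=8)
dev.write_block(3, b"DATA")
```

This is essentially RAID 1 in miniature; hardware implementations perform the same fan-out inside the disk array controller.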
 
Modern multi-primary replication protocols optimize for the common case of failure-free operation. Chain replication<ref>{{Cite journal |last1=van Renesse |first1=Robbert |last2=Schneider |first2=Fred B. |date=2004-12-06 |title=Chain replication for supporting high throughput and availability |url=https://dl.acm.org/doi/abs/10.5555/1251254.1251261 |journal=Proceedings of the 6th Conference on Symposium on Operating Systems Design & Implementation - Volume 6 |series=OSDI'04 |___location=USA |publisher=USENIX Association |pages=7 |doi=}}</ref> is a popular family of such protocols. State-of-the-art protocol variants<ref>{{Cite journal |last1=Terrace |first1=Jeff |last2=Freedman |first2=Michael J. |date=2009-06-14 |title=Object storage on CRAQ: high-throughput chain replication for read-mostly workloads |url=https://dl.acm.org/doi/abs/10.5555/1855807.1855818 |journal=USENIX Annual Technical Conference |series=USENIX'09 |___location=USA |pages=11 |doi=}}</ref> of chain replication offer high throughput and strong consistency by arranging replicas in a chain for writes. This approach enables local reads on all replica nodes but has high latency for writes, which must traverse multiple nodes sequentially.
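Classic chain replication can be sketched as follows (names hypothetical): a write enters at the head and is passed node-by-node down the chain; it is committed once it reaches the tail, and the tail serves strongly consistent reads. The sequential traversal is exactly the write-latency cost noted above.

```python
# Chain replication sketch: writes flow head -> ... -> tail; a write is
# committed when the tail has applied it, and reads go to the tail.

class ChainNode:
    def __init__(self):
        self.data = {}
        self.next = None       # successor in the chain (None for the tail)

    def write(self, key, value):
        self.data[key] = value
        if self.next is not None:
            return self.next.write(key, value)   # pass down the chain
        return "committed"     # reached the tail: write is durable

def make_chain(length):
    nodes = [ChainNode() for _ in range(length)]
    for a, b in zip(nodes, nodes[1:]):
        a.next = b
    return nodes

head, middle, tail = make_chain(3)
status = head.write("k", 42)   # traverses head -> middle -> tail
value = tail.data["k"]         # the tail serves consistent reads
```

Variants such as CRAQ additionally let intermediate nodes serve reads of "clean" (fully propagated) versions, recovering read throughput across the whole chain.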
 
A more recent multi-primary protocol, [https://hermes-protocol.com/ Hermes],<ref>{{Cite book |last1=Katsarakis |first1=Antonios |last2=Gavrielatos |first2=Vasilis |last3=Katebzadeh |first3=M.R. Siavash |last4=Joshi |first4=Arpit |last5=Dragojevic |first5=Aleksandar |last6=Grot |first6=Boris |last7=Nagarajan |first7=Vijay |title=Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems |chapter=Hermes: A Fast, Fault-Tolerant and Linearizable Replication Protocol |date=2020-03-13 |chapter-url=https://doi.org/10.1145/3373376.3378496 |series=ASPLOS '20 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=201–217 |doi=10.1145/3373376.3378496 |hdl=20.500.11820/c8bd74e1-5612-4b81-87fe-175c1823d693 |isbn=978-1-4503-7102-5|s2cid=210921224 |url=https://www.pure.ed.ac.uk/ws/files/130434070/Hermes_a_Fast_KATASARAKIS_DOA02122019_AFV.pdf }}</ref> combines cache-coherent-inspired invalidations and logical timestamps to achieve strong consistency with local reads and high-performance writes from all replicas. During fault-free operation, its broadcast-based writes are non-conflicting and commit after just one multicast round-trip to replica nodes. This design results in high throughput and low latency for both reads and writes.
 
==See also==
[[Category:Data synchronization]]
[[Category:Fault-tolerant computer systems]]
[[Category:Database management systems]]