{{Refimprove|date=November 2009}}
In [[computer science]], a '''concurrent data structure''' (also called a '''shared data structure''') is a [[data structure]] designed for access and modification by multiple computing [[Thread (computer science)|threads]] (or [[process (computing)|processes]] or nodes) on a computer, for example concurrent [[Message queue|queues]], concurrent [[Stack (abstract data type)|stacks]], etc. The concurrent data structure is typically considered to reside in an abstract storage environment known as [[shared memory]], which may be physically implemented as either a tightly coupled or a distributed collection of storage modules.<ref>{{Cite book |title=A VLSI Architecture for Concurrent Data Structures |isbn=9781461319955 |last1=Dally |first1=J. W. |date=6 December 2012 |publisher=Springer }}</ref><ref>{{Cite book |title=23rd International Symposium on Distributed Computing, DISC |publisher=Springer Science & Business Media |year=2009}}</ref>
==Basic principles==
Concurrent data structures, intended for use in
parallel or distributed computing environments, differ from
"sequential" data structures, intended for use on a uni-processor
machine, in several ways.<ref name="sahni">
{{cite book
 | author = Mark Moir and [[Nir Shavit]]
 | title = Handbook of Data Structures and Applications
 | chapter = Concurrent Data Structures
 |chapter-url=http://www.cs.tau.ac.il/~shanir/concurrent-data-structures.pdf
 |archive-url=https://web.archive.org/web/20110401070433/http://www.cs.tau.ac.il/~shanir/concurrent-data-structures.pdf
 | editor = Dinesh Metha and [[Sartaj Sahni]]
 |archive-date=2011-04-01
 | publisher = Chapman and Hall/CRC Press
 | year = 2007
 | pages = 47-
 }}
</ref>
Most notably, in a sequential environment one specifies the data structure's properties and checks that they
are implemented correctly, by providing '''safety properties'''. In
a concurrent environment, the specification must also describe '''liveness properties''' which an implementation must provide. Safety properties usually state that something bad never happens, while liveness properties state that something good keeps happening.
The type of liveness requirements tends to define the data structure.
The [[method (computer science)|method]] calls can be blocking or non-blocking. Data structures are not
restricted to one type or the other, and can allow combinations
where some method calls are blocking and others are non-blocking.
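The distinction can be sketched with Rust's standard-library mutex, which offers both styles of call on the same structure (the {{Mono|queue}} name and {{Mono|Vec}} payload here are illustrative, not from any particular library):

```rust
use std::sync::Mutex;

fn main() {
    let queue = Mutex::new(Vec::new());

    // Blocking method call: waits until the lock becomes available.
    queue.lock().unwrap().push(1);

    // Non-blocking method call: returns immediately with an error
    // instead of waiting, if another thread currently holds the lock.
    match queue.try_lock() {
        Ok(mut q) => q.push(2),
        Err(_) => { /* caller may retry or do other useful work */ }
    }

    assert_eq!(*queue.lock().unwrap(), vec![1, 2]);
}
```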
The safety properties of concurrent data structures must capture their behavior given the many possible interleavings of methods called by different threads. It is quite intuitive to specify how abstract data structures behave in a sequential setting in which there are no interleavings.
Therefore, many mainstream approaches for arguing the safety properties of a
concurrent data structure (such as [[serializability]], [[linearizability]], [[sequential consistency]], and
quiescent consistency) specify the structure's properties
sequentially, and map its concurrent executions to
a collection of sequential ones.

To guarantee the safety and liveness properties, concurrent
data structures must typically (though not always) allow threads to
reach [[consensus (computer science)|consensus]] as to the results
of their simultaneous data access and modification requests. To
support such agreement, concurrent data structures are implemented
using special primitive synchronization operations (see [[Synchronization (computer science)|synchronization primitives]])
available on modern [[multiprocessing|multiprocessor]] machines
that allow multiple threads to reach consensus. This consensus can be
achieved in a blocking manner by using locks, or without locks, in which case it is [[non-blocking algorithm|non-blocking]]. There is a wide body
of theory on the design of concurrent data structures (see
bibliographical references).
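A minimal sketch of lock-free consensus via a compare-and-swap primitive (Rust's {{Mono|compare_exchange}}); the shared-counter example is illustrative, not a specific algorithm from the literature:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // Retry loop: each thread proposes a new value, and the
                // hardware compare-and-swap arbitrates which proposal wins,
                // so all threads agree on every intermediate value.
                let mut cur = c.load(Ordering::Relaxed);
                loop {
                    match c.compare_exchange(cur, cur + 1,
                                             Ordering::SeqCst,
                                             Ordering::Relaxed) {
                        Ok(_) => break,
                        Err(actual) => cur = actual, // lost the race; retry
                    }
                }
            }
        }));
    }
    for h in handles { h.join().unwrap(); }
    // No increment is ever lost, despite the absence of locks.
    assert_eq!(counter.load(Ordering::SeqCst), 4000);
}
```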
==Design and implementation==
Concurrent data structures are significantly more difficult to design and to verify as being correct than their sequential counterparts.
On today's machines, the layout of processors and memory, the layout of data in memory, and the communication load on the various elements of the multiprocessor architecture all influence performance.
Furthermore, there is a tension between correctness and performance: algorithmic enhancements that seek to improve performance often make it more difficult to design and verify a correct
data structure implementation.<ref>
{{cite conference
| title=More than you ever wanted to know about synchronization: Synchrobench, measuring the impact of the synchronization on concurrent algorithms
| author=Gramoli, V.
| book-title=Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
| pages=1–10
| year=2015
| publisher=ACM
| url=http://sydney.edu.au/engineering/it/~gramoli/doc/pubs/gramoli-synchrobench.pdf
| archive-url=https://web.archive.org/web/20150410030004/http://sydney.edu.au/engineering/it/~gramoli/doc/pubs/gramoli-synchrobench.pdf
| archive-date=10 April 2015
}}</ref>
A key measure for performance is scalability, captured by the [[speedup]] of the implementation. Speedup is a measure of how
effectively the application is using the machine it is running
on. On a machine with P processors, the speedup is the ratio of the structure's execution time on a single processor to its execution time on P processors. Ideally, we want linear speedup: we would like to achieve a
speedup of P when using P processors. Data structures whose
speedup grows with P are called '''scalable'''. The extent to which one can scale the performance of a concurrent data structure is captured by a formula known as [[Amdahl's law]] and
more refined versions of it such as [[Gustafson's law]].
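Amdahl's law bounds the achievable speedup: if a fraction ''p'' of the work can be parallelized, speedup on P processors is 1 / ((1 − p) + p/P). A small sketch of the arithmetic (the function name is illustrative):

```rust
// Amdahl's law: with parallelizable fraction p of the work and
// P processors, speedup(P) = 1 / ((1 - p) + p / P).
fn amdahl_speedup(p: f64, procs: f64) -> f64 {
    1.0 / ((1.0 - p) + p / procs)
}

fn main() {
    // Even with 95% of the work parallelizable, 8 processors give
    // well under 8x, and no processor count can exceed 1/0.05 = 20x.
    println!("{:.2}", amdahl_speedup(0.95, 8.0));  // prints "5.93"
    println!("{:.2}", amdahl_speedup(0.95, 1e9)); // prints "20.00"
}
```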
A key issue with the performance of concurrent data structures is the level of memory contention: the overhead in traffic to and from memory as a
result of multiple threads concurrently attempting to access the same
locations in memory. This issue is most acute with blocking implementations,
in which locks control access to memory. In order to
acquire a lock, a thread must repeatedly attempt to modify that
___location. On a [[Cache coherence|cache-coherent]]
multiprocessor (one in which processors have
local caches that are updated by hardware in order to keep them
consistent with the latest values stored) this results in long
waiting times for each attempt to modify the ___location, and is
exacerbated by the additional memory traffic associated with
unsuccessful attempts to acquire the lock.
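A classic mitigation is the test-and-test-and-set lock, which spins on a plain read (served from the local cache, generating no coherence traffic) and only attempts the expensive atomic modification when the lock appears free. A sketch in Rust (the {{Mono|TtasLock}} type and counter workload are illustrative):

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

struct TtasLock {
    locked: AtomicBool,
}

impl TtasLock {
    const fn new() -> Self {
        TtasLock { locked: AtomicBool::new(false) }
    }
    fn lock(&self) {
        loop {
            // Spin on a plain read first: it hits the local cache and
            // causes no coherence traffic while the lock is held.
            while self.locked.load(Ordering::Relaxed) {}
            // Only then attempt the expensive atomic modification.
            if !self.locked.swap(true, Ordering::Acquire) {
                return;
            }
        }
    }
    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

static LOCK: TtasLock = TtasLock::new();
static COUNT: AtomicU32 = AtomicU32::new(0);

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| {
            for _ in 0..1000 {
                LOCK.lock();
                // A non-atomic read-modify-write, made safe by the lock.
                let v = COUNT.load(Ordering::Relaxed);
                COUNT.store(v + 1, Ordering::Relaxed);
                LOCK.unlock();
            }
        }))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(COUNT.load(Ordering::Relaxed), 4000);
}
```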
==.NET==
[[.NET]] has {{Mono|ConcurrentDictionary}},<ref>{{cite web |title=ConcurrentDictionary Class (System.Collections.Concurrent) |url=https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentdictionary-2?view=net-9.0 |website=learn.microsoft.com |access-date=26 November 2024 |language=en-us}}</ref> {{Mono|ConcurrentQueue}}<ref>{{cite web |title=ConcurrentQueue Class (System.Collections.Concurrent) |url=https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentqueue-1?view=net-9.0 |website=learn.microsoft.com |access-date=26 November 2024 |language=en-us}}</ref> and {{Mono|ConcurrentStack}}<ref>{{cite web |title=ConcurrentStack Class (System.Collections.Concurrent) |url=https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentstack-1?view=net-9.0 |website=learn.microsoft.com |access-date=26 November 2024 |language=en-us}}</ref> in the {{Mono|System.Collections.Concurrent}} namespace.
[[File:UML dotnet concurrent.svg|UML class diagram of System.Collections.Concurrent in .NET]]
==Rust==
[[Rust (programming language)|Rust]] instead wraps data structures in {{Mono|Arc}} and {{Mono|Mutex}}.<ref>{{cite web |title=Shared-State Concurrency - The Rust Programming Language |url=https://doc.rust-lang.org/book/ch16-03-shared-state.html |website=doc.rust-lang.org |access-date=26 November 2024}}</ref>
<syntaxhighlight lang="rust">
use std::sync::{Arc, Mutex};

let counter = Arc::new(Mutex::new(0));
</syntaxhighlight>
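The cited chapter of the Rust book shares such a counter across threads; a fuller sketch along those lines (thread count and names are illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc provides shared ownership across threads;
    // Mutex provides mutually exclusive access to the data inside.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1; // blocks until the lock is free
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("Result: {}", *counter.lock().unwrap()); // prints "Result: 10"
}
```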
==See also==
* [[Thread safety]]
* [[Java concurrency]] (JSR 166)
* [[Java ConcurrentMap]]

==References==
{{reflist}}
==Further reading==
* [[Nancy Lynch]], ''Distributed Computing''
* [[Hagit Attiya]] and Jennifer Welch, ''Distributed Computing: Fundamentals, Simulations and Advanced Topics'', 2nd Ed.
* [[Doug Lea]], ''Concurrent Programming in Java: Design Principles and Patterns''
* [[Maurice Herlihy]] and [[Nir Shavit]], ''The Art of Multiprocessor Programming''
==External links==
* [https://web.archive.org/web/20160303215946/http://www.ibm.com/developerworks/aix/library/au-multithreaded_structures1/index.html Multithreaded data structures for parallel computing, Part 1] (Designing concurrent data structures) by Arpan Sen
* [https://web.archive.org/web/20160304000118/http://www.ibm.com/developerworks/aix/library/au-multithreaded_structures2/index.html Multithreaded data structures for parallel computing: Part 2] (Designing concurrent data structures without mutexes) by Arpan Sen
* [https://libcds.sourceforge.net/ libcds] – C++ library of lock-free containers and safe memory reclamation schema
* [https://sites.google.com/site/synchrobench/ Synchrobench] – C/C++ and Java libraries and benchmarks of lock-free, lock-based, TM-based and RCU/COW-based data structures.
{{DEFAULTSORT:Concurrent Data Structure}}
[[Category:Data structures]]
[[Category:Concurrent computing]]