Concurrent data structure

In [[computer science]], a '''concurrent data structure''' is a
particular way of storing and organizing [[data]] for access by
multiple computing [[Thread (computer science)|threads]] (or [[process (computing)|processes]]) on a [[computer]].
 
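As a simple illustration (a minimal sketch, not taken from this article's sources), a shared counter becomes a concurrent data structure once its operations remain correct when many threads invoke them at the same time; the JSR 166 atomic classes make this direct:

<syntaxhighlight lang="java">
import java.util.concurrent.atomic.AtomicLong;

// A minimal concurrent data structure: a counter that many threads
// can increment at once without losing updates.
class ConcurrentCounter {
    private final AtomicLong value = new AtomicLong();

    long increment() {
        // Atomic read-modify-write; safe under concurrent calls,
        // unlike a plain long++ which can drop increments.
        return value.incrementAndGet();
    }

    long get() {
        return value.get();
    }
}
</syntaxhighlight>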
Historically, such data structures were used on [[uniprocessor]] machines with operating systems that supported multiple computing threads (or processes).
On a machine with P processors, the '''speedup''' is the ratio of the structure's execution time on a single processor to its execution time on P processors. Ideally, we want linear speedup: a speedup of P when using P processors. Data structures whose speedup grows with P are called '''scalable'''. The extent to which one can scale the performance of a concurrent data structure is captured by a formula known as [[Amdahl's law]] and
more refined versions of it such as [[Gustafson's law]].
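As an illustration (a standard statement of the law, not a formula quoted from this article's sources): if a fraction <math>f</math> of a structure's work can be parallelized and the remaining <math>1-f</math> is inherently sequential, Amdahl's law bounds the achievable speedup on <math>P</math> processors by

<math>S(P) = \frac{T_1}{T_P} \le \frac{1}{(1-f) + \frac{f}{P}},</math>

so even with <math>f = 0.95</math> the speedup can never exceed <math>1/(1-f) = 20</math>, however many processors are used.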
 
Lock contention is exacerbated by the additional memory traffic associated with unsuccessful attempts to acquire the lock.
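A common way to limit this traffic (an illustrative sketch of the standard test-and-test-and-set pattern, covered in Herlihy and Shavit's ''The Art of Multiprocessor Programming'', rather than code from this article) is to spin on an ordinary cached read and attempt the expensive atomic operation only when the lock appears free:

<syntaxhighlight lang="java">
import java.util.concurrent.atomic.AtomicBoolean;

// Test-and-test-and-set (TTAS) spinlock: threads that fail to acquire
// the lock spin on a local cached read instead of repeatedly issuing
// atomic writes, reducing the memory traffic of unsuccessful attempts.
class TTASLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        while (true) {
            // Plain reads are served from the processor's cache and
            // generate no coherence traffic while the lock is held.
            while (locked.get()) { }
            // The lock looks free: try the expensive atomic acquisition.
            if (locked.compareAndSet(false, true)) {
                return;
            }
            // Lost the race; resume local spinning.
        }
    }

    void unlock() {
        locked.set(false);
    }
}
</syntaxhighlight>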
 
==See also==
* [[Java concurrency|Java concurrency (JSR 166)]]
 
==References==
* [[Maurice Herlihy]] and [[Nir Shavit]], ''The Art of Multiprocessor Programming''
* Mattson, Sanders, and Massingill, ''Patterns for Parallel Programming''
 
==External links==