In [[computer science]], a '''concurrent data structure''' is a
particular way of storing and organizing [[data]] for access by
multiple computing [[Thread (computing)|threads]] on a computer.
Historically, such data structures were used on [[uniprocessor]] machines.
on. On a machine with P processors, the '''speedup''' is the ratio of the structure's execution time on a single processor to its execution time on P processors. Ideally, we want linear speedup: we would like to achieve a
speedup of P when using P processors. Data structures whose
speedup grows with P are called '''scalable'''. The extent to which one can scale the performance of a concurrent data structure is captured by a formula known as [[Amdahl's law]] and
more refined versions of it such as [[Gustafson's law]].
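To illustrate how a serial fraction bounds scalability, the following is a minimal sketch of Amdahl's law (the function name and parameter names are ours, chosen for illustration; they do not come from the article):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Upper bound on speedup (Amdahl's law) when only
    `parallel_fraction` of the work can be parallelized
    across `processors` processors."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# A fully parallel workload scales linearly:
print(amdahl_speedup(1.0, 8))                 # 8.0
# But even a 5% serial fraction caps 8 processors well below 8x:
print(round(amdahl_speedup(0.95, 8), 2))      # 5.93
```

Note how the serial fraction dominates as P grows: with half the work serial, no number of processors yields a speedup above 2.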
exacerbated by the additional memory traffic associated with
unsuccessful attempts to acquire the lock.
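To make that cost concrete, here is a rough, purely illustrative Python sketch (the class and attribute names are ours, not from the article): a spin lock that counts its unsuccessful acquisition attempts, each of which stands in for a round of the extra memory traffic a contended lock generates.

```python
import threading

class SpinLock:
    # Illustrative sketch only: a spin lock built on a non-blocking
    # try-acquire. Each failed attempt models one round of wasted
    # memory traffic under contention.
    def __init__(self):
        self._flag = threading.Lock()
        self.failed_attempts = 0  # not atomic; a rough contention gauge

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            self.failed_attempts += 1  # spin: one more wasted attempt

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1       # critical section protected by the lock
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: the lock preserves correctness despite contention
```

In practice, contention of this kind is what motivates back-off schemes and lock-free designs.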
==See also==
* [[Java concurrency|Java concurrency (JSR 166)]]
==References==
* [[Maurice Herlihy]] and [[Nir Shavit]], "The Art of Multiprocessor Programming"
* Mattson, Sanders, and Massingill, "Patterns for Parallel Programming"
==External links==