speedup grows with P are called '''scalable'''. The extent to which one can scale the performance of a concurrent data structure is captured by a formula known as [[Amdahl's law]] and
more refined versions of it such as [[Gustafson's law]].
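As a sketch, in its simplest form Amdahl's law bounds the speedup achievable with <math>P</math> processors when a fraction <math>s</math> of the execution is inherently sequential:

<math>S(P) = \frac{1}{s + \frac{1-s}{P}}</math>

so even as <math>P \to \infty</math>, the speedup can never exceed <math>1/s</math>; for example, if 10% of the work is sequential, no amount of parallelism yields more than a tenfold speedup.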
Another key issue with concurrent data structures is '''memory contention''': the overhead in traffic to and from memory as a
result of multiple threads concurrently attempting to access the same
locations in memory. This issue is most acute with blocking implementations
in which locks control access to memory. To acquire a lock, a thread must repeatedly attempt to modify the memory ___location that holds it. On a [[cache-coherent]]
multiprocessor (one in which processors have
local caches that the hardware keeps
consistent with the latest values stored), this spinning results in long
waiting times for each attempt to modify the ___location, a problem
exacerbated by the additional memory traffic generated by
unsuccessful attempts to acquire the lock.
==References==