Concurrent data structure
A key measure of performance is scalability, captured by the [[speedup]] of the implementation. Speedup measures how effectively an application utilizes the machine it is running on. On a machine with P processors, the speedup is the ratio of the structure's execution time on a single processor to its execution time on P processors. Ideally, one wants linear speedup: a speedup of P when using P processors. Data structures whose speedup grows with P are called '''scalable'''. The extent to which the performance of a concurrent data structure can be scaled is captured by a formula known as [[Amdahl's law]] and more refined versions of it such as [[Gustafson's law]].
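The bound Amdahl's law places on scalability can be illustrated with a short calculation (a sketch; the function name and the 90%-parallel example figure are illustrative, not from the article):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Upper bound on speedup under Amdahl's law:
    1 / ((1 - p) + p / N), where p is the fraction of the work
    that can be parallelized and N is the processor count."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

# Even with 90% of the work parallelizable, adding processors
# yields diminishing returns: as N grows, the speedup approaches
# the serial-fraction limit 1 / (1 - p) = 10, not N.
for n in (2, 8, 64, 1024):
    print(n, amdahl_speedup(0.9, n))
```

This is why reducing the inherently serial portion of a concurrent data structure (for example, contention on a single lock) matters more for scalability than simply adding processors.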
 
A key issue with the performance of concurrent data structures is the level of '''memory contention''': the overhead in traffic to and from memory as a
result of multiple threads concurrently attempting to access the same
locations in memory. This issue is most acute with blocking implementations