Analysis of parallel algorithms
 
* ''Work law''. The cost is always at least the work: {{math|''pT<sub>p</sub>'' ≥ ''T''<sub>1</sub>}}. This follows from the fact that {{mvar|p}} processors can perform at most {{mvar|p}} operations in parallel.<ref name="clrs"/><ref name="casanova"/>
* ''Span law''. A finite number {{mvar|p}} of processors cannot outperform an infinite number, so that {{math|''T<sub>p</sub>'' ≥ ''T''<sub>∞</sub>}}.<ref name="clrs"/> Both laws are illustrated in the sketch following this list.
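
The following minimal sketch (in Python; the work and span values are hypothetical, not taken from the cited sources) illustrates how the two laws combine into the lower bound {{math|''T<sub>p</sub>'' ≥ max(''T''<sub>1</sub> / ''p'', ''T''<sub>∞</sub>)}}:

<syntaxhighlight lang="python">
# Illustrative sketch: the work law (p * T_p >= T_1) and the span law
# (T_p >= T_inf) together bound the running time on p processors from
# below. The values of T_1 and T_inf here are hypothetical examples.

def tp_lower_bound(t1: float, t_inf: float, p: int) -> float:
    """Smallest T_p consistent with both the work law and the span law."""
    return max(t1 / p, t_inf)

t1, t_inf = 1000.0, 10.0  # hypothetical work and span
for p in (1, 10, 100, 1000):
    print(f"p={p:4d}  T_p >= {tp_lower_bound(t1, t_inf, p):g}")
# Up to p = 100 the work law dominates the bound; beyond that point the
# span law prevents further improvement, however many processors run.
</syntaxhighlight>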
 
Using these definitions and laws, the following measures of performance can be given:
 
* ''[[Speedup]]'' is the gain in speed made by parallel execution compared to sequential execution: {{math|''S<sub>p</sub>'' {{=}} ''T''<sub>1</sub> / ''T<sub>p</sub>''}}. When the speedup is {{math|Ω(''p'')}} for {{mvar|p}} processors (using [[big O notation]]), the speedup is linear, which is optimal in the PRAM model because the work law implies {{math|''T''<sub>1</sub> / ''T<sub>p</sub>'' ≤ ''p''}} ([[Speedup#Super-linear speedup|super-linear speedup]] can nevertheless occur in practice due to [[memory hierarchy]] effects). The situation {{math|''T''<sub>1</sub> / ''T<sub>p</sub>'' {{=}} ''p''}} is called perfect linear speedup.<ref name="clrs"/> An algorithm that exhibits linear speedup is said to be [[scalability|scalable]].<ref name="casanova"/>
* ''Efficiency'' is the speedup per processor, {{math|''S<sub>p</sub>'' / ''p''}}.<ref name="casanova"/> Both measures are computed in the sketch following this list.
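
A minimal sketch with hypothetical timings (Python, continuing the example above) computes both measures; perfect linear speedup corresponds to an efficiency of 1:

<syntaxhighlight lang="python">
# Illustrative sketch: speedup and efficiency from hypothetical measured
# running times T_p on p processors, given the sequential time T_1.

def speedup(t1: float, tp: float) -> float:
    """S_p = T_1 / T_p."""
    return t1 / tp

def efficiency(t1: float, tp: float, p: int) -> float:
    """Speedup per processor, S_p / p (1.0 means perfect linear speedup)."""
    return speedup(t1, tp) / p

t1 = 1000.0  # hypothetical sequential running time
for p, tp in ((4, 280.0), (16, 90.0)):  # hypothetical parallel timings
    print(f"p={p:2d}  S_p={speedup(t1, tp):.2f}  "
          f"E_p={efficiency(t1, tp, p):.2f}")
# p= 4: S_p = 3.57, efficiency 0.89
# p=16: S_p = 11.11, efficiency 0.69 -- efficiency typically declines
# as p grows, since overheads and the span limit further speedup.
</syntaxhighlight>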
 
==References==