Parallel computation thesis
In [[computational complexity theory]], the '''parallel computation thesis''' is a [[hypothesis]] which states that the ''time'' used by a (reasonable) parallel machine is polynomially related to the ''space'' used by a sequential machine. The parallel computation thesis was set forth by [[Ashok K. Chandra|Chandra]] and [[Larry Stockmeyer|Stockmeyer]] in 1976.<ref name=alternation>{{Cite conference|last1=Chandra|first1=Ashok K.|last2=Stockmeyer|first2=Larry J.|title=Alternation|book-title=FOCS'76: Proceedings of the 17th Annual Symposium on Foundations of Computer Science|pages=98–108|doi=10.1109/SFCS.1976.4|year=1976}}</ref>
 
In other words, for a [[computational model]] that allows computations to branch and run in parallel without bound, a [[formal language]] that is [[decidable language|decidable]] in the model using no more than <math>t(n)</math> steps on inputs of length ''n'' is decidable by a non-branching machine using no more than <math>t(n)^k</math> units of storage for some constant ''k''. Conversely, if a machine in the non-branching model decides a language using no more than <math>s(n)</math> storage, a machine in the parallel model can decide the language in no more than <math>s(n)^k</math> steps for some constant ''k''.
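
The two statements above can be combined into a single equation of [[complexity class]]es. Writing <math>\mathsf{PARTIME}(f(n))</math> for the languages decidable by a parallel machine in <math>O(f(n))</math> time and <math>\mathsf{DSPACE}(f(n))</math> for those decidable by a sequential machine in <math>O(f(n))</math> space (here <math>\mathsf{PARTIME}</math> is only a generic label, since the exact class depends on which parallel model is chosen), the thesis asserts that, for suitable bounds <math>t(n)</math>,

<math display="block">\bigcup_{k \ge 1} \mathsf{PARTIME}\bigl(t(n)^k\bigr) \;=\; \bigcup_{k \ge 1} \mathsf{DSPACE}\bigl(t(n)^k\bigr).</math>

In particular, taking <math>t(n)</math> to be a polynomial, the thesis identifies the problems solvable in polynomial time on a (reasonable) parallel machine with [[PSPACE]].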