The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. A parallel machine must be sufficiently powerful to emulate the sequential machine in time polynomially related to the sequential space; compare [[Turing machine]], [[non-deterministic Turing machine]], and [[alternating Turing machine]]. N. Blum (1983) introduced a model for which the thesis does not hold.<ref>{{Cite journal|journal=Information Processing Letters|last=Blum|first=Norbert|title=A note on the 'parallel computation thesis'|volume=17|issue=4|pages=203–205|year=1983|doi=10.1016/0020-0190(83)90041-8}}</ref>
However, the model allows <math>2^{2^{O(T(n))}}</math> parallel threads of computation after <math>T(n)</math> steps. (See [[Big O notation]].) Parberry (1986) suggested a more "reasonable" bound would be <math>2^{O(T(n))}</math> or <math>2^{T(n)^{O(1)}}</math>, in defense of the thesis.<ref name=":0">{{Cite journal|doi=10.1145/8312.8317|last=Parberry|first=I.|title=Parallel speedup of sequential machines: a defense of parallel computation thesis|journal=ACM SIGACT News|volume=18|issue=1|pages=54–67|year=1986|doi-access=free}}</ref>
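To see the scale of the difference, take <math display="inline">T(n) = \log n</math> (a worked instantiation, spelled out here for illustration): <math display="block">2^{2^{O(\log n)}} = 2^{n^{O(1)}}, \qquad 2^{O(\log n)} = n^{O(1)},</math>so Blum's model may spawn exponentially many threads of computation after only logarithmically many steps, whereas Parberry's first bound allows only polynomially many.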
Goldschlager (1982) proposed a model which is sufficiently universal to emulate all "reasonable" parallel models. In this model, the thesis is provably true.<ref>{{Cite journal|doi=10.1145/322344.322353|last=Goldschlager|first=Leslie M.|title=A universal interconnection pattern for parallel computers|journal=[[Journal of the ACM]]|volume=29|issue=3|pages=1073–1086|year=1982|doi-access=free}}</ref>
The thesis originated with Chandra and Stockmeyer, who formalized it and proved results related to it for deterministic and alternating Turing machines.<ref>{{Cite journal|doi=10.1145/322234.322243|last1=Chandra|first1=Ashok K.|last2=Kozen|first2=Dexter C.|last3=Stockmeyer|first3=Larry J.|title=Alternation|journal=[[Journal of the ACM]]|volume=28|issue=1|pages=114–133|year=1981|doi-access=free}}</ref>
Given two models of computation, such as Turing machines and PRAMs, each has its own natural computational resources. For Turing machines, the resources can be tape space, sequential time, the number of times the read/write head changes direction, and so on; for PRAMs, they can be parallel time, the total number of processors, and so on.
The '''parallel computation thesis''' states that, for any <math display="inline">T(n) \ge \log n</math>, the use of tape space by Turing machines is polynomially related to the use of parallel time by PRAMs whose total number of processors is at most exponential in the parallel time.
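Unpacking "polynomially related", the intended containments can be sketched as follows (writing <math display="inline">\text{PRAM-TIME}(T)</math> for the class of languages recognized by such a PRAM within parallel time <math display="inline">T</math>): <math display="block">\mathsf{DSPACE}(S(n)) \subseteq \text{PRAM-TIME}\left(S(n)^{O(1)}\right) \quad \text{and} \quad \text{PRAM-TIME}(T(n)) \subseteq \mathsf{DSPACE}\left(T(n)^{O(1)}\right).</math>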
The restriction to "at most exponential" is important, since with slightly more than exponentially many processors there is a collapse: any language in NP can be recognized in constant time by a shared-memory machine with <math display="inline">O\left(2^{n^{O(1)}}\right)</math> processors and word size <math display="inline">O\left(T(n)^2\right)</math>.<ref name=":0" />
If the parallel computation thesis is true, then one implication is that "fast" parallel computers (i.e. those that run in polylogarithmic time) recognize exactly the languages in [[PolyL|'''polyL''']].<ref name=":1">{{Cite journal |last1=Parberry |first1=Ian |last2=Schnitger |first2=Georg |date=1988-06-01 |title=Parallel computation with threshold functions |url=https://dx.doi.org/10.1016/0022-0000%2888%2990030-X |journal=Journal of Computer and System Sciences |volume=36 |issue=3 |pages=278–302 |doi=10.1016/0022-0000(88)90030-X |issn=0022-0000}}</ref>
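Concretely, substituting polylogarithmic time bounds into the thesis gives (the instantiation is spelled out here for illustration): <math display="block">\bigcup_{k \ge 1} \text{PRAM-TIME}\left(\log^k n\right) = \bigcup_{k \ge 1} \mathsf{DSPACE}\left(\log^k n\right) = \textsf{polyL}.</math>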
== Evidence ==
It was proven in 1978 that, for any <math display="inline">T(n) \ge \log n</math>, PRAM parallel time and Turing machine space are polynomially related:<ref>{{Cite book |last1=Fortune |first1=Steven |last2=Wyllie |first2=James |chapter=Parallelism in random access machines |title=Proceedings of the tenth annual ACM symposium on Theory of computing - STOC '78 |year=1978 |pages=114–118 |doi=10.1145/800133.804339}}</ref><math display="block">
\bigcup_{k \ge 1} \text{PRAM-TIME}\left(T(n)^k\right) = \bigcup_{k \ge 1} \mathsf{DSPACE}\left(T(n)^k\right).
</math>Also, for non-deterministic versions,<math display="block">
\text{NPRAM-TIME}(O(T(n))) = \mathsf{NTIME}\left(2^{O(T(n))}\right).
</math>In particular, nondeterministic <math display="inline">O(\log n)</math>-time PRAM = NP and nondeterministic polynomial-time PRAM = nondeterministic exponential time.
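Both special cases follow by substitution (shown here for illustration): <math display="block">\text{NPRAM-TIME}(O(\log n)) = \mathsf{NTIME}\left(2^{O(\log n)}\right) = \mathsf{NTIME}\left(n^{O(1)}\right) = \mathsf{NP},</math><math display="block">\text{NPRAM-TIME}\left(n^{O(1)}\right) = \mathsf{NTIME}\left(2^{n^{O(1)}}\right) = \mathsf{NEXPTIME}.</math>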
== Other theses ==
=== Extended parallel computation thesis ===
The '''extended parallel computation thesis'''<ref>{{Cite book |last1=Dymond |first1=Patrick W. |last2=Cook |first2=Stephen A. |title=21st Annual Symposium on Foundations of Computer Science (SFCS 1980) |chapter=Hardware complexity and parallel computation |date=October 1980 |chapter-url=https://ieeexplore.ieee.org/document/4567837 |pages=360–372 |doi=10.1109/SFCS.1980.22}}</ref> states that both of these are true:
* Turing machine resources (head reversals, tape space) and PRAM resources (parallel time, processor count) are simultaneously polynomially related.
* PRAM parallel time and PRAM processor count are polynomially related.
One implication would be that "small and fast" parallel computers (i.e. those that run in both polylogarithmic time and with polynomially many processors) recognize exactly the languages in '''[[NC (complexity)|NC]]'''.<ref name=":1" />
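Here '''NC''' can equivalently be described via uniform Boolean circuits (a standard characterization, stated here for reference): <math display="block">\mathsf{NC} = \bigcup_{k \ge 1} \mathsf{NC}^k,</math>where <math display="inline">\mathsf{NC}^k</math> consists of the languages decided by uniform circuit families of polynomial size and depth <math display="inline">O(\log^k n)</math>.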
=== Sequential computation thesis ===
Related to this is the '''sequential computation thesis'''.<ref>{{Cite book |last1=Goldschlager |first1=Les |last2=Lister |first2=Andrew |title=Computer science: a modern introduction |date=1982 |publisher=Prentice/Hall International |isbn=978-0-13-165704-5 |edition=1 |series=Prentice-Hall international series in computer science |___location=Englewood Cliffs, NJ}}</ref>{{Pg|___location=Section 3.2.3}} It states that, given any two reasonable definitions, A and B, of what it means to have a "sequential computer", their execution times are polynomially related. Concretely, this means that for each sequential computer <math>C_A</math> according to definition A, there is a sequential computer <math>C_B</math> according to definition B, such that the execution time of <math>C_A</math> on any problem is upper bounded by a polynomial of the execution time of <math>C_B</math> on the same problem.
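In symbols (a sketch, writing <math display="inline">\mathrm{time}_C(x)</math> for the running time of computer <math display="inline">C</math> on input <math display="inline">x</math>; this notation is introduced here for illustration): <math display="block">\exists \text{ polynomial } p \ \forall x : \quad \mathrm{time}_{C_A}(x) \le p\left(\mathrm{time}_{C_B}(x)\right).</math>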
It is stronger than the [[Church–Turing thesis]], since it claims not only that the computable problems are the same for all computers, but also that the feasibly computable problems are the same for all computers.
== References ==
{{reflist}}
== Further reading ==
* {{Citation |last1=Balcázar |first1=José Luis |title=The Parallel Computation Thesis |date=1990 |work=Structural Complexity II |pages=33–62 |editor-last=Balcázar |editor-first=José Luis |url=https://link.springer.com/chapter/10.1007/978-3-642-75357-2_3 |access-date=2025-05-19 |place=Berlin, Heidelberg |publisher=Springer |language=en |doi=10.1007/978-3-642-75357-2_3 |isbn=978-3-642-75357-2 |last2=Díaz |first2=Josep |last3=Gabarró |first3=Joaquim |editor2-last=Díaz |editor2-first=Josep |editor3-last=Gabarró |editor3-first=Joaquim|url-access=subscription }}
[[Category:Parallel computing]]