The restriction on "at most exponential" is important, since with a bit more than exponentially many processors there is a collapse: any language in '''NP''' can be recognized in constant time by a shared-memory machine with <math display="inline">O\left(2^{n^{O(1)}}\right)</math> processors and word size <math display="inline">O\left(T(n)^2\right)</math>.<ref name=":0" /> Intuitively, such a machine has enough processors to examine all of the exponentially many candidate certificates of an '''NP''' problem in parallel.
If the parallel computation thesis is true, then one implication is that "fast" parallel computers (i.e. those that run in polylogarithmic time) recognize exactly the languages in [[PolyL|'''polyL''']].<ref name=":1">{{Cite journal |last1=Parberry |first1=Ian |last2=Schnitger |first2=Georg |date=1988-06-01 |title=Parallel computation with threshold functions |url=https://www.sciencedirect.com/science/article/pii/002200008890030X |journal=Journal of Computer and System Sciences |volume=36 |issue=3 |pages=278–302 |doi=10.1016/0022-0000(88)90030-X |issn=0022-0000}}</ref>
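This follows by instantiating the thesis at polylogarithmic time bounds: parallel time and sequential space are assumed polynomially related, and any polynomial of a polylogarithmic function is again polylogarithmic, so<math display="block"> \bigcup_{k} \log^k n \text{-time parallel} = \bigcup_{k} \left(\log^k n\right)^{O(1)} \text{-space} = \bigcup_{k} \log^k n \text{-space} = \textbf{polyL}. </math>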
== Evidence ==
It was proven in 1978<ref>{{Cite book |last1=Fortune |first1=Steven |last2=Wyllie |first2=James |date=1978 |chapter=Parallelism in random access machines |title=Proceedings of the tenth annual ACM symposium on Theory of computing - STOC '78 |chapter-url=https://doi.org/10.1145/800133.804339 |___location=New York, New York, USA |publisher=ACM Press |pages=114–118 |doi=10.1145/800133.804339}}</ref> that for any <math display="inline">T(n) \ge \log n</math>, and with the restriction that the number of processors of the PRAM is no more than exponential in the parallel running time, we have<math display="block"> \bigcup_{k=1}^{\infty} T(n)^k \text{-time PRAM} = \bigcup_{k=1}^{\infty} T(n)^k \text{-space} </math>In particular, <math display="inline">\bigcup_k \log^k n \text{ PRAM} = \bigcup_k \log^k n \text{ SPACE}</math>, and polynomial-time '''PRAM''' = '''PSPACE'''. Note that exponentially many processors are likely necessary: if some [[PSPACE-complete|'''PSPACE'''-complete]] problem could be solved in polynomial time with only polynomially many processors, then simulating those processors sequentially would solve it in polynomial time, implying '''PSPACE''' = '''P''', which is widely conjectured to be false.
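The exponential number of processors is also the most the model can recruit on its own: in the Fortune–Wyllie PRAM a processor can fork at most one new processor per step, so starting from a single processor the processor count <math display="inline">p(t)</math> can at most double at each step, giving the back-of-the-envelope bound<math display="block"> p(t+1) \le 2\,p(t), \quad p(0) = 1 \implies p(T(n)) \le 2^{T(n)}. </math>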
Also, for non-deterministic versions, a PRAM running in polynomial time accepts exactly the languages accepted by non-deterministic Turing machines in exponential time:<math display="block"> \bigcup_{k=1}^{\infty} n^k \text{-time NPRAM} = \bigcup_{k=1}^{\infty} 2^{n^k} \text{-time NTM} </math>That is, polynomial-time '''NPRAM''' = '''NEXPTIME'''.
=== Extended parallel computation thesis ===
The '''extended parallel computation thesis'''<ref>{{Cite book |last1=Dymond |first1=Patrick W. |last2=Cook |first2=Stephen A. |chapter=Hardware complexity and parallel computation |date=October 1980 |title=21st Annual Symposium on Foundations of Computer Science (SFCS 1980) |chapter-url=https://ieeexplore.ieee.org/document/4567837/ |pages=360–372 |doi=10.1109/SFCS.1980.22}}</ref> states that the Turing machine resource pair (head reversals, tape space) and the PRAM resource pair (parallel time, processor count) are simultaneously polynomially related.
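Schematically (the precise polynomials are model-dependent), the claim is that a Turing machine using <math display="inline">R(n)</math> head reversals and <math display="inline">S(n)</math> tape space can be simulated by a PRAM whose parallel time and processor count are simultaneously bounded by<math display="block"> T_{\text{PRAM}}(n) = R(n)^{O(1)}, \qquad P_{\text{PRAM}}(n) = S(n)^{O(1)}, </math>and conversely with the roles of the two machines exchanged.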
=== Sequential computation thesis ===
Related to this is the '''sequential computation thesis'''.<ref>{{Cite book |last1=Goldschlager |first1=Les |last2=Lister |first2=Andrew |title=Computer science: a modern introduction |date=1982 |publisher=Prentice/Hall Internat |isbn=978-0-13-165704-5 |edition=1 |series=Prentice-Hall international series in computer science |___location=Englewood Cliffs, NJ}}</ref>{{Pg|___location=Section 3.2.3}} It states that given any two reasonable definitions, A and B, of what it means to have a "sequential computer", their execution times are polynomially related. Concretely, it means that for each sequential computer <math>C_A</math> according to definition A, there is a sequential computer <math>C_B</math> according to definition B that solves the same problems, such that the execution time of <math>C_B</math> on any problem is upper bounded by a polynomial of the execution time of <math>C_A</math> on the same problem.
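A classical example of such a polynomial relationship is the simulation of a multitape Turing machine by a single-tape Turing machine, which incurs at most a quadratic slowdown:<math display="block"> T_{\text{1-tape}}(n) = O\left(T_{\text{multitape}}(n)^2\right). </math>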
The sequential computation thesis is stronger than the [[Church–Turing thesis]], since it claims not only that the computable problems are the same for all computers, but also that the feasibly computable problems are the same for all computers.
== References ==
{{reflist}}
== Further reading ==
* {{Citation |last1=Balcázar |first1=José Luis |title=The Parallel Computation Thesis |date=1990 |work=Structural Complexity II |pages=33–62 |editor-last=Balcázar |editor-first=José Luis |url=https://link.springer.com/chapter/10.1007/978-3-642-75357-2_3 |access-date=2025-05-19 |place=Berlin, Heidelberg |publisher=Springer |language=en |doi=10.1007/978-3-642-75357-2_3 |isbn=978-3-642-75357-2 |last2=Díaz |first2=Josep |last3=Gabarró |first3=Joaquim |editor2-last=Díaz |editor2-first=Josep |editor3-last=Gabarró |editor3-first=Joaquim|url-access=subscription }}
[[Category:Parallel computing]]