The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. A parallel machine must be sufficiently powerful to emulate the sequential machine in time polynomially related to the sequential space; compare [[Turing machine]], [[non-deterministic Turing machine]], and [[alternating Turing machine]]. N. Blum (1983) introduced a model for which the thesis does not hold.<ref>{{Cite journal|journal=Information Processing Letters|last=Blum|first=Norbert|title=A note on the 'parallel computation thesis'|volume=17|issue=4|pages=203–205|year=1983|doi=10.1016/0020-0190(83)90041-8}}</ref>
However, the model allows <math>2^{2^{O(T(n))}}</math> parallel threads of computation after <math>T(n)</math> steps. (See [[Big O notation]].) Parberry (1986) suggested a more "reasonable" bound would be <math>2^{O(T(n))}</math> or <math>2^{T(n)^{O(1)}}</math>, in defense of the thesis.<ref>{{Cite journal|doi=10.1145/8312.8317|last=Parberry|first=I.|title=Parallel speedup of sequential machines: a defense of parallel computation thesis|journal=ACM SIGACT News|volume=18|issue=1|pages=54–67|year=1986|doi-access=free}}</ref>
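To illustrate the gap between these bounds, take <math display="inline">T(n) = \log n</math>: Blum's model may then spawn<math display="block">2^{2^{O(\log n)}} = 2^{n^{O(1)}}</math>threads, a number exponential in the input size, whereas Parberry's bound <math display="inline">2^{O(T(n))} = 2^{O(\log n)} = n^{O(1)}</math> would permit only polynomially many.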
Goldschlager (1982) proposed a model, the "conglomerate", which is sufficiently universal to emulate all "reasonable" parallel models and which adheres to the thesis.
The thesis originated with Chandra and Stockmeyer, who first formalized it and proved results relating deterministic and alternating Turing machines.<ref>{{Cite journal|doi=10.1145/322234.322243|last1=Chandra|first1=Ashok K.|last2=Kozen|first2=Dexter C.|last3=Stockmeyer|first3=Larry J.|title=Alternation|journal=[[Journal of the ACM]]|volume=28|issue=1|pages=114–133|year=1981|doi-access=free}}</ref>
== Definition ==
Consider two models of computation, such as the Turing machine and the PRAM, each with its own computational resources. For the Turing machine, the resources can be tape space, sequential time, or the number of times the read/write head changes direction; for the PRAM, they can be parallel time or the total number of processors.
Given a function <math>T(n)</math>, the use of a resource R in one model is said to be '''polynomially related''' to the use of a resource R' in the other model if the following holds: whenever a problem can be solved by a computation in the first model using at most <math>T(n)^k</math> of resource R for some <math>k > 0</math>, it can also be solved by a computation in the second model using at most <math>T(n)^{k'}</math> of resource R' for some <math>k' > 0</math>, and ''vice versa''.
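In the notation used in the equations below (writing <math display="inline">T(n)^k \text{-} R</math> for the class of problems solvable in a model using at most <math display="inline">T(n)^k</math> of resource R), this definition can be restated as the single equality<math display="block"> \bigcup_{k=1}^{\infty} T(n)^k \text{-} R = \bigcup_{k=1}^{\infty} T(n)^k \text{-} R' </math>since each inclusion between the two unions expresses one direction of the polynomial relationship.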
== Evidence ==
For example, it was proven in 1978<ref>{{Cite journal |last=Fortune |first=Steven |last2=Wyllie |first2=James |date=1978 |title=Parallelism in random access machines |url=https://doi.org/10.1145/800133.804339 |journal=Proceedings of the tenth annual ACM symposium on Theory of computing - STOC '78 |___location=New York, New York, USA |publisher=ACM Press |pages=114–118 |doi=10.1145/800133.804339}}</ref> that for any <math display="inline">T(n) \ge \log n</math>, and with the restriction that the number of processors of the PRAM is at most exponential in the parallel running time, we have<math display="block"> \bigcup_{k=1}^{\infty} T(n)^k \text{-time PRAM} = \bigcup_{k=1}^{\infty} T(n)^k \text{-space} </math>In particular, <math display="inline">\bigcup_k \log^k n \text{-time PRAM} = \bigcup_k \log^k n \text{-space}</math>, and polynomial-time '''PRAM''' = '''[[PSPACE]]'''. Note that the exponential number of processors is likely necessary: if some [[PSPACE-complete|'''PSPACE'''-complete]] problem could be solved in polynomial time on a PRAM with only polynomially many processors, this would show that '''PSPACE''' = '''P''', a major unresolved conjecture that is widely expected to be false.
Also, for non-deterministic versions,<math display="block"> \bigcup_{k=1}^{\infty} T(n)^k \text{-time NPRAM} = \bigcup_{k=1}^{\infty} \text{NTIME}\left(2^{T(n)^k}\right) </math>
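In particular, instantiating <math display="inline">T(n) = n</math> gives polynomial-time '''NPRAM''' = [[NEXPTIME|'''NEXPTIME''']], since <math display="inline">\bigcup_k \text{NTIME}\left(2^{n^k}\right)</math> is exactly the class of problems decidable in non-deterministic exponential time.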
== Sequential computation thesis ==
Related to this is the '''sequential computation thesis'''.<ref>{{Cite book |last=Goldschlager |first=Les |title=Computer science: a modern introduction |last2=Lister |first2=Andrew |date=1982 |publisher=Prentice/Hall Internat |isbn=978-0-13-165704-5 |edition=1 |series=Prentice-Hall international series in computer science |___location=Englewood Cliffs, NJ}}</ref>{{Pg|___location=Section 3.2.3}} It states that given any two reasonable definitions A and B of what it means to have a "sequential computer", their execution times are polynomially related. Concretely, it means that for each sequential computer <math>C_A</math> according to definition A, there is a sequential computer <math>C_B</math> according to definition B such that the execution time of <math>C_B</math> on any problem is upper-bounded by a polynomial in the execution time of <math>C_A</math> on the same problem.
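Stated symbolically (with <math display="inline">\mathrm{time}_C(x)</math> denoting the running time of computer <math display="inline">C</math> on input <math display="inline">x</math>), this means that for each <math display="inline">C_A</math> there exist a <math display="inline">C_B</math> and a fixed polynomial <math display="inline">p</math> such that<math display="block"> \mathrm{time}_{C_B}(x) \le p\left(\mathrm{time}_{C_A}(x)\right) </math>for every input <math display="inline">x</math>, and symmetrically with the roles of A and B exchanged.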
It is stronger than the [[Church–Turing thesis]], since it claims not only that the computable problems are the same for all computers, but also that the feasibly computable problems are the same for all computers.