Parallel computation thesis: Difference between revisions

The restriction to "at most exponentially many" processors is important, since with slightly more than exponentially many processors there is a collapse: any language in '''[[NP (complexity)|NP]]''' can be recognized in constant time by a shared-memory machine with <math display="inline">O\left(2^{n^{O(1)}}\right)</math> processors and word size <math display="inline">O\left(T(n)^2\right)</math>.<ref name=":0" />
 
If the parallel computation thesis is true, then one implication is that "fast" parallel computers (i.e. those that run in polylogarithmic time) recognize exactly the languages in [[PolyL|'''polyL''']].<ref name=":1">{{Cite journal |last=Parberry |first=Ian |last2=Schnitger |first2=Georg |date=1988-06-01 |title=Parallel computation with threshold functions |url=https://www.sciencedirect.com/science/article/pii/002200008890030X |journal=Journal of Computer and System Sciences |volume=36 |issue=3 |pages=278–302 |doi=10.1016/0022-0000(88)90030-X |issn=0022-0000}}</ref>
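In symbols, writing <math display="inline">\mathsf{PARALLEL\text{-}TIME}(f(n))</math> informally for the class of languages recognized by such a parallel machine in time <math display="inline">O(f(n))</math> (the notation here is illustrative, not standardized), the implication reads:

<math display="block">\bigcup_{k \geq 1} \mathsf{PARALLEL\text{-}TIME}\left(\log^k n\right) \;=\; \bigcup_{k \geq 1} \mathsf{SPACE}\left(\log^k n\right) \;=\; \mathsf{polyL}.</math>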
 
== Evidence ==
* Turing machine resources (head reversals, tape space) and PRAM resources (parallel time, processor count) are simultaneously polynomially related.
* PRAM parallel time and PRAM processor count are polynomially related.
One implication would be that "small and fast" parallel computers (i.e. those that run in both polylogarithmic time and with polynomially many processors) recognize exactly the languages in '''[[NC (complexity)|NC]]'''.<ref name=":1" />
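In the same informal notation as above (writing <math display="inline">\mathsf{PARALLEL\text{-}TIME\text{-}PROC}(t(n), p(n))</math> for the class of languages recognized in parallel time <math display="inline">O(t(n))</math> using <math display="inline">O(p(n))</math> processors; this shorthand is illustrative, not from the source), the "small and fast" implication reads:

<math display="block">\bigcup_{k \geq 1} \mathsf{PARALLEL\text{-}TIME\text{-}PROC}\left(\log^k n,\ n^{O(1)}\right) \;=\; \mathsf{NC}.</math>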
 
=== Sequential computation thesis ===