{{short description|Given more time, a Turing machine can solve more problems}}
In [[computational complexity theory]], the '''time hierarchy theorems''' are important statements about time-bounded computation on [[Turing machine]]s. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with ''n''<sup>2</sup> time but not ''n'' time, where ''n'' is the input length. The time hierarchy theorem for [[Turing machine|deterministic multi-tape Turing machines]] was first proven by [[Richard E. Stearns]] and [[Juris Hartmanis]] in 1965.<ref>{{Cite journal
| last1 = Hartmanis
| first1 = Juris
| author-link1 = Juris Hartmanis
| last2 = Stearns
| first2 = Richard E.
| author-link2 = Richard E. Stearns
| year = 1965
| title = On the computational complexity of algorithms
| journal = Transactions of the American Mathematical Society
| volume = 117
| pages = 285–306
| publisher = American Mathematical Society
| doi = 10.2307/1994208
| jstor = 1994208| doi-access = free
}}
</ref> It was improved a year later when F. C. Hennie and Richard E. Stearns increased the efficiency of the [[Universal Turing machine#Efficiency|universal Turing machine]].<ref>{{cite journal
| last1 = Hennie
| first1 = F. C.
| last2 = Stearns
| first2 = R. E.
|date=October 1966
| title = Two-Tape Simulation of Multitape Turing Machines
| journal = [[Journal of the ACM]]
| volume = 13
| issue = 4
| pages = 533–546
| publisher = ACM
| issn = 0004-5411
| doi = 10.1145/321356.321362| s2cid = 2347143
| doi-access = free
}}</ref> Consequent to the theorem, for every deterministic time-bounded [[complexity class]], there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all [[constructible function|time-constructible function]]s ''f''(''n''),
:<math>\mathsf{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathsf{DTIME}(f(n)),</math>
where [[DTIME]](''f''(''n'')) denotes the complexity class of [[decision problem]]s solvable in time [[big O notation|O]](''f''(''n'')). The left-hand class involves [[little o]] notation, referring to the set of decision problems solvable in asymptotically '''less''' than ''f''(''n'') time.
In particular, this shows that <math>\mathsf{DTIME}(n^a) \subsetneq \mathsf{DTIME}(n^b)</math> if and only if <math>a < b</math> (for constants <math>a, b \geq 1</math>), so we have an infinite time hierarchy.
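For instance, for constants <math>1 \le a < b</math> one has <math>n^a = o\!\left(n^b/\log n^b\right)</math>, so the displayed inclusion with ''f''(''n'') = ''n''<sup>''b''</sup> gives
:<math>\mathsf{DTIME}(n^a) \subseteq \mathsf{DTIME}\left(o\left(\frac{n^b}{\log n^b}\right)\right) \subsetneq \mathsf{DTIME}(n^b).</math>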
The time hierarchy theorem for [[nondeterministic Turing machine]]s was originally proven by [[Stephen Cook]] in 1972.<ref>{{cite conference
| first = Stephen A.
| last = Cook
| author-link = Stephen Cook
| title = A hierarchy for nondeterministic time complexity
| year = 1972
| conference = STOC '72
| book-title = Proceedings of the fourth annual ACM symposium on Theory of computing
| publisher = ACM
| ___location = Denver, Colorado, United States
| pages = 187–192
| doi = 10.1145/800152.804913| doi-access= free
}}</ref> It was improved to its current form via a complex proof by Joel Seiferas, [[Michael J. Fischer|Michael Fischer]], and [[Albert R. Meyer|Albert Meyer]] in 1978.<ref>{{cite journal | last1 = Seiferas
| first1 = Joel I.
| last2 = Fischer
| first2 = Michael J.
| last3 = Meyer
| first3 = Albert R.
|date=January 1978
| title = Separating Nondeterministic Time Complexity Classes
| journal = [[Journal of the ACM]]
| volume = 25
| issue = 1
| pages = 146–167
| publisher = ACM
| issn = 0004-5411
| doi = 10.1145/322047.322061| s2cid = 13561149
| doi-access= free }}</ref> Finally, in 1983, Stanislav Žák achieved the same result with the simple proof taught today.<ref>{{cite journal | first1 = Stanislav
| last1 = Žák
| title = A Turing machine time hierarchy
| journal = [[Theoretical Computer Science (journal)|Theoretical Computer Science]]
| year = 1983
| volume = 26
| pages = 327–333
| publisher = Elsevier Science B.V.
| doi = 10.1016/0304-3975(83)90015-4| doi-access= free
}}</ref> The time hierarchy theorem for nondeterministic Turing machines states that if ''g''(''n'') is a time-constructible function, and ''f''(''n''+1) = [[Little O notation|o]](''g''(''n'')), then
:<math>\mathsf{NTIME}(f(n)) \subsetneq \mathsf{NTIME}(g(n)).</math>
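For instance, taking ''f''(''n'') = ''n''<sup>''a''</sup> and ''g''(''n'') = ''n''<sup>''b''</sup> for constants <math>1 \le a < b</math> satisfies ''f''(''n''+1) = (''n''+1)<sup>''a''</sup> = ''o''(''n''<sup>''b''</sup>), so
:<math>\mathsf{NTIME}(n^a) \subsetneq \mathsf{NTIME}(n^b).</math>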
The analogous theorems for space are the [[space hierarchy theorem]]s. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has one bit of [[advice (complexity)|advice]].<ref>{{Cite book|doi=10.1109/FOCS.2004.33|title=45th Annual IEEE Symposium on Foundations of Computer Science|year=2004|author=Fortnow, L.|pages=316|last2=Santhanam|first2=R.|chapter=Hierarchy Theorems for Probabilistic Polynomial Time|isbn=0-7695-2228-9|s2cid=5555450}}</ref>
==Background==
Both theorems use the notion of a [[constructible function|time-constructible function]]: a function ''f'' : '''N''' → '''N''' is time-constructible if there exists a deterministic Turing machine that, given an input of length ''n'', halts after exactly ''f''(''n'') steps (equivalently, computes ''f''(''n'') in ''O''(''f''(''n'')) time). All polynomials with non-negative integer coefficients are time-constructible, as are exponential functions such as 2<sup>''n''</sup>. All time bounds below are assumed to be time-constructible.

==Deterministic time hierarchy theorem==
===Statement===
<blockquote>'''Time Hierarchy Theorem.''' If ''f''(''n'') is a time-constructible function, then there exists a [[decision problem]] which cannot be solved in worst-case deterministic time ''o''(''f''(''n'')) but can be solved in worst-case deterministic time ''O''(''f''(''n'') log ''f''(''n'')). In other words,
:<math>\mathsf{DTIME}(o(f(n))) \subsetneq \mathsf{DTIME}\left (f(n) \log f(n) \right).</math></blockquote>
Equivalently, if <math>f, g</math> are time-constructible, and <math>f(n) \ln f(n) = o(g(n))</math>, then
:<math>\mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n)).</math>
'''Note 1.''' ''f''(''n'') is at least ''n'', since smaller functions are never time-constructible.<br>
'''Example.''' <math>\mathsf{DTIME}(n) \subsetneq \mathsf{DTIME} (n (\ln n)^2) </math>.
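The example follows from the equivalent formulation with ''f''(''n'') = ''n'' and ''g''(''n'') = ''n''(ln ''n'')<sup>2</sup>, since
:<math>f(n) \ln f(n) = n \ln n = o\!\left(n (\ln n)^2\right).</math>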
===Proof===
We include here a proof of a weaker result, namely that '''DTIME'''(''f''(''n'')) is a strict subset of '''DTIME'''(''f''(2''n'' + 1)<sup>3</sup>), as it is simpler but illustrates the proof idea. See the bottom of this section for information on how to extend the proof to ''f''(''n'') log ''f''(''n'').
To prove this, we first define the language of the encodings of machines and their inputs which cause them to accept within time ''f'':
: <math> H_f = \left\{ ([M], x)\ |\ M \ \text{accepts}\ x \ \text{in}\ f(|x|) \ \text{steps} \right\}. </math>
Notice that ''H<sub>f</sub>'' is a language rather than a time class: it is the set of pairs of machines and inputs to those machines (''M'', ''x'') such that the machine ''M'' accepts the input ''x'' within ''f''(|''x''|) steps.
Here, ''M'' is a deterministic Turing machine, and ''x'' is its input (the initial contents of its tape). [''M''] denotes an input that encodes the Turing machine ''M''. Let ''m'' be the size of the tuple ([''M''], ''x'').
We know that we can decide membership of ''H<sub>f</sub>'' by way of a deterministic Turing machine ''R'' that simulates ''M'' on ''x'' for ''f''(|''x''|) steps. Since ''f'' is time-constructible, ''R'' can first compute ''f''(|''x''|) and write that many 0s onto a separate tape to serve as a "clock", then simulate ''M'' step by step, crossing off one 0 per simulated step and rejecting if the clock runs out. Each simulated step requires scanning the encoding [''M''], and the whole simulation can be carried out in at most ''f''(''m'')<sup>3</sup> operations, so
: <math> H_f \in \mathsf{TIME}\left(f(m)^3\right). </math>
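The clocked simulation performed by ''R'' can be illustrated with the following minimal Python sketch, which is not part of the original proof. It assumes a toy representation in which ''M'' is given as a dictionary of transitions and the time bound is supplied as an integer budget; it only shows how a step counter bounds the simulation and does not model multi-tape Turing machines faithfully.
<syntaxhighlight lang="python">
def accepts_within(transitions, accept_state, x, budget):
    """Simulate a single-tape machine M on input x for at most `budget` steps.

    `transitions` maps (state, symbol) to (new_state, written_symbol, move),
    where move is "L" or "R".  Returns True only if M reaches `accept_state`
    within the step budget -- the membership condition of H_f when
    budget = f(len(x)).
    """
    tape = dict(enumerate(x))          # sparse tape; blank cells read as "_"
    state, head = "start", 0
    for _ in range(budget):            # the budget plays the role of R's clock
        if state == accept_state:
            return True
        symbol = tape.get(head, "_")
        if (state, symbol) not in transitions:
            return False               # M halts without accepting
        state, written, move = transitions[(state, symbol)]
        tape[head] = written
        head += 1 if move == "R" else -1
    return state == accept_state       # clock ran out

# Example: a machine that accepts exactly the strings starting with "1".
step_right_if_one = {("start", "1"): ("accept", "1", "R")}
assert accepts_within(step_right_if_one, "accept", "10", budget=5)
assert not accepts_within(step_right_if_one, "accept", "01", budget=5)
</syntaxhighlight>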
The rest of the proof will show that
: <math> H_f \notin \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right) </math>
so that if we substitute 2''n'' + 1 for ''m'', we get the desired result. Let us assume that ''H<sub>f</sub>'' is in this time complexity class, and we will derive a contradiction.

If ''H<sub>f</sub>'' is in this time complexity class, then there exists a machine ''K'' which, given some machine description [''M''] and input ''x'', decides whether the tuple ([''M''], ''x'') is in ''H<sub>f</sub>'' within
:<math>\mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right). </math>
We use ''K'' to construct another machine, ''N'', which takes a machine description [''M''] and runs ''K'' on the tuple ([''M''], [''M'']), accepting if ''K'' rejects and rejecting if ''K'' accepts. If ''n'' is the length of the input to ''N'', then ''m'' (the length of the input to ''K'') is twice ''n'' plus some delimiter symbol, so ''m'' = 2''n'' + 1. Thus ''N'' runs within
:<math> \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right) = \mathsf{TIME}\left(f\left( \left\lfloor \frac{2n+1}{2} \right\rfloor \right)\right) = \mathsf{TIME}\left(f(n)\right). </math>
Now if we feed [''N''] as input into ''N'' itself (which makes ''n'' the length of [''N'']) and ask whether ''N'' accepts its own description as input, we get:
* If ''N'' '''accepts''' [''N''] (which we know it does in at most ''f''(''n'') operations), this means that ''K'' '''rejects''' ([''N''], [''N'']), so ([''N''], [''N'']) is not in ''H<sub>f</sub>'', and thus ''N'' does not accept [''N''] in ''f''(''n'') steps. Contradiction!
* If ''N'' '''rejects''' [''N''] (which we know it does in at most ''f''(''n'') operations), this means that ''K'' '''accepts''' ([''N''], [''N'']), so ([''N''], [''N'']) is in ''H<sub>f</sub>'', and thus ''N'' does accept [''N''] in ''f''(''n'') steps. Contradiction!
We thus conclude that the machine ''K'' does not exist, and so
:<math> H_f \notin \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right). </math>
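The diagonalization step can be summarised by the following minimal Python sketch, an illustration rather than part of the proof. Here ''K'' is passed in as an ordinary function, standing for the decider for ''H<sub>f</sub>'' that was assumed to exist; no actual ''K'' can behave consistently once ''N'' is run on its own encoding.
<syntaxhighlight lang="python">
def make_N(K):
    """Build the diagonalizer N from an assumed decider K for H_f.

    K(encoded_M, x) is assumed to return True exactly when the machine
    encoded by encoded_M accepts x within f(|x|) steps.
    """
    def N(encoded_M):
        # N runs K on the pair ([M], [M]) and inverts the answer, so
        # N accepts [M] exactly when M does not accept [M] within f steps.
        return not K(encoded_M, encoded_M)
    return N

# Whatever K answers on ([N], [N]), running N on its own encoding [N]
# contradicts that answer, so no such K can exist.
</syntaxhighlight>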
===Extension===
The reader may have realised that the proof gives only the weaker result because it uses a simple but inefficient simulation of ''M'', which only establishes that
: <math> H_f \in \mathsf{TIME}(f(m)^3). </math>
Using the more efficient simulation of the universal Turing machine due to Hennie and Stearns, mentioned above, one can instead show that
: <math> H_f \in \mathsf{TIME}(f(m) \log f(m)) </math>.
==Non-deterministic time hierarchy theorem==
The non-deterministic time hierarchy theorem states that if ''g''(''n'') is a time-constructible function and ''f''(''n''+1) = ''o''(''g''(''n'')), then
:<math>\mathsf{NTIME}(f(n)) \subsetneq \mathsf{NTIME}(g(n)).</math>

==Consequences==
The time hierarchy theorems guarantee that the deterministic and non-deterministic time hierarchies are proper: in particular, '''[[P (complexity)|P]]''' ⊊ '''[[EXPTIME]]''' and '''[[NP (complexity)|NP]]''' ⊊ '''[[NEXPTIME]]'''.
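For example, the deterministic theorem yields <math>\mathsf{P} \subsetneq \mathsf{EXPTIME}</math>, since
:<math>\mathsf{P} \subseteq \mathsf{DTIME}\left(2^n\right) \subsetneq \mathsf{DTIME}\left(2^{2n}\right) \subseteq \mathsf{EXPTIME},</math>
where the middle strict inclusion is an instance of the deterministic time hierarchy theorem.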
However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of [[computational complexity theory]]: whether [[P = NP problem|'''P''' and '''NP''']], '''NP''' and '''[[PSPACE]]''', '''PSPACE''' and '''EXPTIME''', or '''EXPTIME''' and '''NEXPTIME''' are equal or not.
==Sharper hierarchy theorems==
The gap of approximately <math>\log f(n)</math> between the lower and upper time bound in the hierarchy theorem can be traced to the efficiency of the device used in the proof, namely a universal program that maintains a step-count. This can be done more efficiently on certain computational models. The sharpest results, presented below, have been proved for:
* The unit-cost [[random-access machine]]<ref>{{cite journal |last1=Sudborough |first1=Ivan H. |last2=Zalcberg |first2=A. |title=On Families of Languages Defined by Time-Bounded Random Access Machines |journal=SIAM Journal on Computing |date=1976 |volume=5 |issue=2 |pages=217–230 |doi=10.1137/0205018}}</ref>
* A [[programming language]] model whose programs operate on a binary tree that is always accessed via its root. This model, introduced by [[Neil D. Jones]]<ref>{{cite book |last1=Jones |first1=Neil D. |title=Proceedings of the twenty-fifth annual ACM symposium on Theory of computing - STOC '93 |chapter=Constant time factors ''do'' matter |date=1993 |pages=602–611 |doi=10.1145/167088.167244|isbn=0-89791-591-7 |s2cid=7527905 }}</ref> is stronger than a deterministic Turing machine but weaker than a random-access machine.
For these models, the theorem has the following form:
<blockquote>If ''f''(''n'') is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time ''f''(''n'') but can be solved in worst-case time ''af''(''n'') for some constant ''a'' (dependent on ''f'').</blockquote>
Thus, a constant-factor increase in the time bound allows for solving more problems, in contrast with the situation for Turing machines (see [[Linear speedup theorem]]). Moreover, Ben-Amram proved<ref>{{cite journal |last1=Ben-Amram |first1=Amir M. |title=Tighter constant-factor time hierarchies |journal=Information Processing Letters |date=2003 |volume=87 |issue=1 |pages=39–44|doi=10.1016/S0020-0190(03)00253-9 }}</ref> that, in the above models, for ''f'' of polynomial growth rate (but more than linear), it is the case that for all <math>\varepsilon > 0</math>, there exists a decision problem which cannot be solved in worst-case deterministic time ''f''(''n'') but can be solved in worst-case time <math>(1+\varepsilon)f(n)</math>.
==See also==
* [[Space hierarchy theorem]]
==References==
{{Reflist}}
==Further reading==
* {{Cite book | author = Michael Sipser | year = 1997 | title = Introduction to the Theory of Computation | publisher = PWS Publishing | isbn = 0-534-94728-X | url-access = registration | url = https://archive.org/details/introductiontoth00sips | author-link = Michael Sipser }} Pages 310–313 of section 9.1: Hierarchy theorems.
* {{Cite book|author = Christos Papadimitriou | year = 1993 | title = Computational Complexity | publisher = Addison Wesley | edition = 1st | isbn = 0-201-53082-1| author-link = Christos Papadimitriou }} Section 7.2: The Hierarchy Theorem, pp. 143–146.