{{short description|Given more time, a Turing machine can solve more problems}}
In [[computational complexity theory]], the '''time hierarchy theorems''' are important statements about time-bounded computation on [[Turing machine]]s. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with ''n''<sup>2</sup> time but not ''n'' time, where ''n'' is the input length.
The time hierarchy theorem for [[Turing machine|deterministic multi-tape Turing machines]] was first proven by [[Richard E. Stearns]] and [[Juris Hartmanis]] in 1965.<ref>{{Cite journal
| jstor = 1994208| doi-access = free
}}
</ref> It was improved a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the [[Universal Turing machine#Efficiency|universal Turing machine]].<ref>{{Cite journal
| last1 = Hennie
| first1 = F. C.
| issn = 0004-5411
| doi = 10.1145/321356.321362| s2cid = 2347143
| doi-access= free
}}</ref> As a consequence of the theorem, for every deterministic time-bounded [[complexity class]], there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all [[constructible function|time-constructible function]]s ''f''(''n''),
:<math>\mathsf{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathsf{DTIME}(f(n)),</math>
where [[DTIME]](''f''(''n'')) denotes the complexity class of [[decision problem]]s solvable in time [[big O notation|O]](''f''(''n'')). The left-hand class involves [[little o]] notation, referring to the set of decision problems solvable in asymptotically '''less''' than ''f''(''n'') time.
In particular, this shows that <math>\mathsf{DTIME}(n^a) \subsetneq \mathsf{DTIME}(n^b)</math> if and only if <math>a < b</math>, so we have an infinite time hierarchy.
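This polynomial separation follows from the theorem with <math>f(n) = n^b</math>, since <math>n^a = o\!\left(n^b/\log n^b\right)</math> whenever <math>a < b</math>. A short check of that limit (added here for illustration):

```latex
% For 0 < a < b, the logarithmic factor is absorbed by the
% polynomial gap, so n^a = o(n^b / \log n^b):
\[
  \lim_{n\to\infty} \frac{n^a \log n}{n^b}
  = \lim_{n\to\infty} \frac{\log n}{n^{\,b-a}}
  = 0 \qquad (b - a > 0),
\]
% hence DTIME(n^a) \subseteq DTIME(o(n^b / \log n^b))
% \subsetneq DTIME(n^b).
```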
The time hierarchy theorem for [[nondeterministic Turing machine]]s was originally proven by [[Stephen Cook]] in 1972.<ref>{{cite conference
| issn = 0004-5411
| doi = 10.1145/322047.322061| s2cid = 13561149
| doi-access= free
}}</ref> Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today.<ref>{{cite journal
| first1 = Stanislav
===Statement===
<blockquote>'''Time Hierarchy Theorem.''' If ''f''(''n'') is a time-constructible function, then there exists a [[decision problem]] which cannot be solved in worst-case deterministic time ''o''(''f''(''n'')) but can be solved in worst-case deterministic time ''O''(''f''(''n'') log ''f''(''n'')). In other words,
:<math>\mathsf{DTIME}(o(f(n))) \subsetneq \mathsf{DTIME}\left(f(n)\log f(n)\right).</math>
Equivalently, if <math>f, g</math> are time-constructible, and <math>f(n) \ln f(n) = o(g(n))</math>, then
:<math>\mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n)).</math>
'''Note 1.''' ''f''(''n'') is at least ''n'', since smaller functions are never time-constructible.<br>
'''Example.''' <math>\mathsf{DTIME}(n) \subsetneq \mathsf{DTIME}\left(n (\ln n)^2\right)</math>.</blockquote>
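The example can be verified against the hypothesis of the equivalent formulation, taking <math>f(n) = n</math> and <math>g(n) = n(\ln n)^2</math>:

```latex
% Check that f(n) \ln f(n) = o(g(n)):
\[
  \lim_{n\to\infty} \frac{f(n)\ln f(n)}{g(n)}
  = \lim_{n\to\infty} \frac{n \ln n}{n(\ln n)^2}
  = \lim_{n\to\infty} \frac{1}{\ln n} = 0,
\]
% so DTIME(n) \subsetneq DTIME(n (\ln n)^2).
```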
===Proof===
We include here a proof of a weaker result, namely that '''DTIME'''(''f''(''n'')) is a strict subset of '''DTIME'''(''f''(2''n'' + 1)<sup>3</sup>), as it is simpler but illustrates the proof idea. See the bottom of this section for information on how to extend the proof to ''f''(''n'') log ''f''(''n'').
To prove this, we first define the language of the encodings of machines and their inputs which cause them to halt within ''f''(|''x''|) steps:
: <math> H_f = \left\{ ([M], x)\ |\ M \ \text{accepts}\ x \ \text{in}\ f(|x|) \ \text{steps} \right\}. </math>
Here, ''M'' is a deterministic Turing machine, and ''x'' is its input (the initial contents of its tape). [''M''] denotes an input that encodes the Turing machine ''M''. Let ''m'' be the size of the tuple ([''M''], ''x'').
We know that we can decide membership of ''H<sub>f</sub>'' by way of a deterministic Turing machine ''R'' that simulates ''M'' for ''f''(|''x''|) steps: it first calculates ''f''(|''x''|) and writes out a row of 0s of that length, then uses this row of 0s as a "clock" or "counter" to simulate ''M'' for at most that many steps. At each step, the simulating machine needs to look through the definition of ''M'' to decide what the next action would be. It is safe to say that this takes at most ''f''(''m'')<sup>3</sup> operations (since it is known that a simulation of a machine of time complexity ''T''(''n'') can be achieved in time ''O''(''T''(''n'')<sup>3</sup>)), so we have that
: <math> H_f \in \mathsf{TIME}\left(f(m)^3\right). </math>
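The clocked simulation performed by ''R'' can be sketched in a few lines. The sketch below is an illustration only, not the Turing-machine construction itself: the interface (`initial`/`step`) and the toy machine are hypothetical stand-ins, and the "row of 0s" becomes an explicit step budget.

```python
def clocked_simulation(machine, x, f):
    """Decide membership in H_f: does `machine` accept x within f(|x|) steps?

    `machine` is assumed to expose `initial(x)` and `step(config)`;
    `step` returns the next configuration, or the strings
    "accept"/"reject" when the machine halts.  (Hypothetical interface.)
    """
    budget = f(len(x))           # the "row of 0s" used as a clock
    config = machine.initial(x)
    for _ in range(budget):      # decrement the clock once per simulated step
        config = machine.step(config)
        if config == "accept":
            return True          # ([machine], x) is in H_f
        if config == "reject":
            return False
    return False                 # clock ran out: not in H_f


class UnaryCounter:
    """Toy stand-in 'machine': accepts iff the input has even length,
    consuming two symbols per step (for demonstration only)."""
    def initial(self, x):
        return x
    def step(self, config):
        if config == "":
            return "accept"
        if len(config) == 1:
            return "reject"
        return config[2:]
```

With a generous budget such as ''f''(''n'') = ''n'' the toy machine halts within the clock; with too small a budget the simulation times out and the pair is (correctly) excluded from ''H<sub>f</sub>''.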
:<math>\mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right). </math>
We use this ''K'' to construct another machine, ''N'', which takes a machine description [''M''] and runs ''K'' on the tuple ([''M''], [''M'']), i.e., ''M'' is simulated on its own code by ''K'', and then ''N'' accepts if ''K'' rejects, and rejects if ''K'' accepts.
If ''n'' is the length of the input to ''N'', then ''m'' (the length of the input to ''K'') is twice ''n'' plus some delimiter symbol, so ''m'' = 2''n'' + 1. The running time of ''N'' is thus
:<math> \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right) = \mathsf{TIME}\left(f\left( \left\lfloor \frac{2n+1}{2} \right\rfloor \right)\right) = \mathsf{TIME}\left(f(n)\right). </math>
Now if we feed [''N''] as input into ''N'' itself (which makes ''n'' the length of [''N'']) and ask the question whether ''N'' accepts its own description as input, we get:
* If ''N'' '''accepts''' [''N''] (which we know it does in at most ''f''(''n'') operations), this means that ''K'' '''rejects''' ([''N''], [''N'']), so ([''N''], [''N'']) is not in ''H<sub>f</sub>'', and so, by the definition of ''H<sub>f</sub>'', ''N'' does not accept [''N''] in ''f''(''n'') steps. Contradiction.
* If ''N'' '''rejects''' [''N''] (which we know it does in at most ''f''(''n'') operations), this means that ''K'' '''accepts''' ([''N''], [''N'']), so ([''N''], [''N'']) is in ''H<sub>f</sub>'', and thus ''N'' does accept [''N''] in ''f''(''n'') steps. Contradiction.
We thus conclude that the machine ''K'' does not exist, and so
:<math> H_f \notin \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right). </math>
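The diagonal step can be sketched in the same illustrative style, assuming a function `K` that decides ''H<sub>f</sub>'' (hypothetical, since the proof shows no such ''K'' exists within the time bound):

```python
def make_N(K):
    """Build the diagonal machine N from a (hypothetical) decider K.

    K(M_desc, x) is assumed to answer: does the machine described by
    M_desc accept input x within the time bound?  N inverts K's answer
    on the diagonal pair ([M], [M]).
    """
    def N(M_desc):
        return not K(M_desc, M_desc)   # accept iff K rejects
    return N
```

Feeding ''N'' its own description forces `N([N]) == (not K([N], [N]))`, so ''K'' cannot answer the diagonal query correctly, which is the contradiction derived in the proof.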
===Extension===
The reader may have realised that the proof gives the weaker result because we have chosen a simple Turing machine simulation for which we know that
: <math> H_f \in \mathsf{TIME}(f(m)^3). </math>
It is known that a more efficient simulation exists which establishes that
: <math> H_f \in \mathsf{TIME}(f(m) \log f(m)) </math>.
==Non-deterministic time hierarchy theorem==
However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of [[computational complexity theory]]: whether [[P = NP problem|'''P''' and '''NP''']], '''NP''' and '''[[PSPACE]]''', '''PSPACE''' and '''EXPTIME''', or '''EXPTIME''' and '''NEXPTIME''' are equal or not.
==Sharper hierarchy theorems==
The gap of approximately <math>\log f(n)</math> between the lower and upper time bound in the hierarchy theorem can be traced to the efficiency of the device used in the proof, namely a universal program that maintains a step-count. This can be done more efficiently on certain computational models. The sharpest results, presented below, have been proved for:
* The unit-cost [[random-access machine]]<ref>{{cite journal |last1=Sudborough |first1=Ivan H. |last2=Zalcberg |first2=A. |title=On Families of Languages Defined by Time-Bounded Random Access Machines |journal=SIAM Journal on Computing |date=1976 |volume=5 |issue=2 |pages=217–230 |doi=10.1137/0205018}}</ref>
* A [[programming language]] model whose programs operate on a binary tree that is always accessed via its root. This model, introduced by [[Neil D. Jones]]<ref>{{cite book |last1=Jones |first1=Neil D. |title=Proceedings of the twenty-fifth annual ACM symposium on Theory of computing - STOC '93 |chapter=Constant time factors ''do'' matter |date=1993 |pages=602–611 |doi=10.1145/167088.167244|isbn=0-89791-591-7 |s2cid=7527905 }}</ref> is stronger than a deterministic Turing machine but weaker than a random-access machine.
For these models, the theorem has the following form:
<blockquote>If ''f''(''n'') is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time ''f''(''n'') but can be solved in worst-case time ''af''(''n'') for some constant ''a'' (dependent on ''f'').</blockquote>
Thus, a constant-factor increase in the time bound allows for solving more problems, in contrast with the situation for Turing machines (see [[Linear speedup theorem]]). Moreover, Ben-Amram proved<ref>{{cite journal |last1=Ben-Amram |first1=Amir M. |title=Tighter constant-factor time hierarchies |journal=Information Processing Letters |date=2003 |volume=87 |issue=1 |pages=39–44|doi=10.1016/S0020-0190(03)00253-9 }}</ref> that, in the above models, for ''f'' of polynomial growth rate (but more than linear), it is the case that for all <math>\varepsilon > 0</math>, there exists a decision problem which cannot be solved in worst-case deterministic time ''f''(''n'') but can be solved in worst-case time <math>(1+\varepsilon)f(n)</math>.
== See also ==
* [[Space hierarchy theorem]]
==References==
{{Reflist}}
==Further reading==
* {{Cite book | author = Michael Sipser | year = 1997 | title = Introduction to the Theory of Computation | publisher = PWS Publishing | isbn = 0-534-94728-X | url-access = registration | url = https://archive.org/details/introductiontoth00sips | author-link = Michael Sipser }} Pages 310–313 of section 9.1: Hierarchy theorems.
* {{Cite book|author = Christos Papadimitriou | year = 1993 | title = Computational Complexity | publisher = Addison Wesley | edition = 1st | isbn = 0-201-53082-1| author-link = Christos Papadimitriou }} Section 7.2: The Hierarchy Theorem, pp. 143–146.