Time hierarchy theorem

In computational complexity theory, the time hierarchy theorems are important statements that ensure the existence of certain "hard" problems which cannot be solved in a given amount of time. As a consequence, for every time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the run-time hierarchy of problems does not completely collapse. One theorem deals with deterministic computations and the other with non-deterministic ones. The analogous theorems for space are the space hierarchy theorems.

Both theorems use the notion of a time-constructible function. A function f : ℕ → ℕ is time-constructible if there exists a deterministic Turing machine such that for every n in ℕ, if the machine is started with an input of n ones, it will halt after precisely f(n) steps. All polynomials with non-negative integral coefficients are time-constructible, as are exponential functions such as 2^n.
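As a toy illustration of the definition (not an actual Turing machine, and with an assumed one-step-per-loop-iteration cost model), the following Python sketch plays the role of a machine that, started on an input of n ones, performs exactly f(n) = n^2 unit steps before halting:

```python
# Illustrative sketch only: a step-counted procedure standing in for a Turing
# machine.  Started on an input of n ones, it performs exactly f(n) = n**2
# "steps" (loop iterations) and then halts.
def clock_machine(ones: str) -> int:
    n = len(ones)
    steps = 0
    for _ in range(n):          # one pass over the input...
        for _ in range(n):      # ...for each input symbol
            steps += 1          # count one unit step
    return steps

for n in range(6):
    assert clock_machine("1" * n) == n ** 2   # halts after precisely f(n) steps
```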

Deterministic time hierarchy theorem

Statement

If f(n) is a time-constructible function with f(n) ≥ n, then there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case deterministic time f(n)^2. In other words, the complexity class DTIME(f(n)) is a strict subset of DTIME(f(n)^2).

Even more generally, it can be shown that if f(n) log f(n) grows slower asymptotically than g(n), then more problems can be solved in deterministic time g(n) than in time f(n).[1] For example, there are problems solvable in time n^2 but not time n, since n log n grows slower asymptotically than n^2.
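As a quick numerical sanity check of the asymptotic condition in this sharper statement (an illustration only, with f(n) = n and g(n) = n^2 chosen to match the example above), the ratio f(n) log f(n) / g(n) shrinks toward zero:

```python
import math

# With f(n) = n and g(n) = n**2, the ratio f(n)*log f(n) / g(n) = log(n)/n
# tends to 0, i.e. f(n) log f(n) grows slower asymptotically than g(n).
for n in [10, 100, 1000, 10**6]:
    f, g = n, n ** 2
    print(n, f * math.log(f) / g)   # values decrease toward 0
```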

Proof

We include here a proof that DTIME(f(n)) is a strict subset of DTIME(f(2n + 1)^3) as it is simpler. See the bottom of this section for information on how to extend the proof to f(n)^2.

To prove this, we first define a language as follows:

H_f = { ([M], x) : M accepts x in at most f(|x|) steps }

Here, M is a deterministic Turing machine, and x is its input (the initial contents of its tape). [M] denotes an input that encodes the Turing machine M. Let m be the size of the tuple ([M], x).

We know that we can decide membership of H_f by way of a deterministic Turing machine that first calculates f(|x|), then writes out a row of 0s of that length, and then uses this row of 0s as a "clock" or "counter" to simulate M for at most that many steps. At each step, the simulating machine needs to look through the definition of M to decide what the next action would be. It is safe to say that this takes at most f(m)^3 operations, so

H_f ∈ DTIME(f(m)^3).

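The clocked simulation just described can be sketched at a high level as follows. This is only an illustrative Python outline, not the article's machine: `decode`, `initial_config`, and `simulate_one_step` are hypothetical helpers standing in for the encoding of ([M], x) and for one step of a universal simulation, and the f(m)^3 bound concerns the Turing machine described in the text, not this code.

```python
# Sketch of a decider for H_f (illustrative assumptions throughout): `decode`
# recovers the machine description [M] and the input x from the encoded tuple,
# `initial_config` builds M's starting configuration, and `simulate_one_step`
# advances the simulation by one step, reporting whether M has halted/accepted.
def decides_Hf(encoded_tuple, f, decode, initial_config, simulate_one_step):
    M_desc, x = decode(encoded_tuple)
    clock = [0] * f(len(x))                  # the row of f(|x|) zeros used as a clock
    config = initial_config(M_desc, x)
    for _ in clock:                          # simulate M for at most f(|x|) steps
        config, halted, accepted = simulate_one_step(M_desc, config)
        if halted:
            return accepted                  # M decided within the step budget
    return False                             # budget exhausted: not in H_f
```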
The rest of the proof will show that

H_f ∉ DTIME(f(⌊m/2⌋)),

so that if we substitute 2n + 1 for m, we get the desired result. Let us assume that H_f is in this time complexity class, and we will attempt to reach a contradiction.

If H_f is in this time complexity class, it means we can construct some machine K which, given some machine description [M] and input x, decides whether the tuple ([M], x) is in H_f within f(⌊m/2⌋) steps.

Therefore we can use this K to construct another machine, N, which takes a machine description [M] and runs K on the tuple ([M], [M]), and then accepts only if K rejects, and rejects if K accepts. If now n is the length of the input to N, then m (the length of the input to K) is twice n plus some delimiter symbol, so m = 2n + 1. N's running time is thus f(⌊m/2⌋) = f(⌊(2n + 1)/2⌋) = f(n).
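The structure of N can be sketched as follows; all names here are illustrative, and K is the decider whose existence is only assumed for the sake of contradiction:

```python
# Sketch of the diagonalizing machine N, given the (assumed) decider K for H_f
# and a hypothetical `encode` that forms the tuple ([M], [M]).
def make_N(K, encode):
    def N(M_desc):
        # Run K on ([M], [M]) and flip its answer: accept iff K rejects.
        return not K(encode(M_desc, M_desc))
    return N

# Feeding N its own description, N([N]), then yields the contradiction spelled
# out in the two cases below.
```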

Now if we feed [N] as input into N itself (which makes n the length of [N]) and ask the question whether N accepts its own description as input, we get:

  • If N accepts [N] (which we know it does in at most f(n) operations), this means that K rejects ([N], [N]), so ([N], [N]) is not in H_f, and thus N does not accept [N] in f(n) steps. Contradiction!
  • If N rejects [N] (which we know it does in at most f(n) operations), this means that K accepts ([N], [N]), so ([N], [N]) is in H_f, and thus N does accept [N] in f(n) steps. Contradiction!

We thus conclude that the machine K does not exist, and so

H_f ∉ DTIME(f(⌊m/2⌋)).

Extension

The reader may have realised that the proof is simpler because we have chosen a simple Turing machine simulation for which we can be certain that

H_f ∈ DTIME(f(m)^3).

It has been shown [2] that a more efficient model of simulation exists which establishes that

H_f ∈ DTIME(f(m) log f(m)),

but since this model of simulation is rather involved, it is not included here.

Non-deterministic time hierarchy theorem

If g(n) is a time-constructible function, and f(n+1) = o(g(n)), then there exists a decision problem which cannot be solved in non-deterministic time f(n) but can be solved in non-deterministic time g(n). In other words, the complexity class NTIME(f(n)) is a strict subset of NTIME(g(n)).

Consequences

The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words P ⊂ EXPTIME ⊂ 2-EXP ⊂ ..., and NP ⊂ NEXPTIME ⊂ 2-NEXP ⊂ ...

The theorem also guarantees that there are problems in P requiring arbitrarily large exponents to solve; in other words, P does not collapse to DTIME(n^k) for any fixed k. For example, there are problems solvable in O(n^5000) time but not O(n^4999) time. This is one argument against considering P to be a practical class of algorithms. This is unfortunate, since if such a collapse did occur, we would know that P ≠ PSPACE, since it is a well-known theorem that DTIME(f(n)) is strictly contained in DSPACE(f(n)).

However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of complexity theory: whether P and NP, NP and PSPACE, PSPACE and EXPTIME, or EXPTIME and NEXPTIME are equal or not.