* "Measuring the size of the instance "in bits" is a misleading and unfounded generalization. The same paragraph goes on to give an example where the size is measured on the number of vertices (not the number of bits needed to represent the number of vertices)."
The original text was brought to the attention of the user by me in a discussion of a [https://codereview.stackexchange.com/a/194617/12462 Code Review Stack Exchange answer], and the user then changed this wiki text as described above.
The reason given for the change is nonsense.
James Aspnes says in [http://www.cs.yale.edu/homes/aspnes/classes/468/notes.pdf Notes on Computational Complexity Theory, CPSC 468/568: Spring 2017], page 7:
* The time complexity of an execution is the number of steps until the machine halts. Typically we will try to bound the time complexity as a function of the size n of the input, defined as the number of cells occupied by the input, excluding the infinite number of blanks that surround it.
So the size of the input depends on the alphabet used to encode the input of the Turing machine, but it is proportional to the number of bits of the input numbers.
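To illustrate the point (this is my own sketch, not taken from the cited notes): writing the same number over alphabets of different sizes only changes the length of the encoding by a constant factor, so any such length is proportional to the number of bits.

<syntaxhighlight lang="python">
# My own illustration: length of the encoding of n over a base-symbol
# alphabet, compared with its length in bits.
import math

def encoded_length(n, base):
    """Number of symbols needed to write n >= 1 in the given base."""
    length = 0
    while n > 0:
        n //= base
        length += 1
    return length

n = 10**12
bits = encoded_length(n, 2)             # size in bits
for base in (2, 4, 10, 256):
    cells = encoded_length(n, base)     # size in tape cells over a base-symbol alphabet
    # cells is roughly bits / log2(base), i.e. proportional to the bit length
    print(base, cells, round(bits / math.log2(base)))
</syntaxhighlight>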
Also the [
* For each problem X, it is also assumed that an appropriate notion of problem size is defined for its instances. Formally, this is a function |⋅| : X → N chosen so that the efficiency of a decision algorithm for X will vary uniformly in |x|. As we have seen, if X ⊆ N, it is standard to take |n| = log2(n), i.e. the number of digits (or length) of the binary numeral ⌜n⌝ representing n.
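A minimal check of this convention (again my own, assuming Python's standard integers): the number of binary digits of n is floor(log2(n)) + 1, which is exactly what int.bit_length() returns.

<syntaxhighlight lang="python">
import math

# My own illustration: the size |n| of a natural number n under the standard
# binary encoding is its number of binary digits, floor(log2(n)) + 1.
for n in (1, 5, 255, 256, 10**9):
    digits = math.floor(math.log2(n)) + 1
    assert digits == n.bit_length() == len(bin(n)) - 2  # bin(n) carries a '0b' prefix
    print(n, digits)
</syntaxhighlight>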