Talk:Computational complexity theory

* "Measuring the size of the instance "in bits" is a misleading and unfounded generalization. The same paragraph goes on to give an example where the size is measured on the number of vertices (not the number of bits needed to represent the number of vertices)."
 
I brought the original text to the user's attention in a discussion of a [https://codereview.stackexchange.com/a/194617/12462 post] at codereview.stackexchange.com. The user then told me:
 
* "Nobody thinks that way. That was a misleading and uncited statement on Wikipedia. Now it doesn't say that anymore." ([https://codereview.stackexchange.com/questions/194611/hackerrank-challenge-apple-and-orange#comment374709_194617 comment])
and changed this wiki text as described above.
The reason given for the change is nonsense.
James Aspnes says in [http://www.cs.yale.edu/homes/aspnes/classes/468/notes.pdf Notes on Computational Complexity Theory, CPSC 468/568: Spring 2017], page 7:
* "The time complexity of an execution is the number of steps until the machine halts. Typically we will try to bound the time complexity as a function of the size n of the input, defined as the number of cells occupied by the input, excluding the infinite number of blanks that surround it."
 
So the size of the input depends on the alphabet used to encode the input of the Turing machine, but it is always proportional to the number of bits of the input numbers.
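As a minimal sketch of that claim (illustrative Python, not from any of the cited sources): the number of symbols needed to write n over a b-symbol alphabet is ⌊log_b(n)⌋ + 1, which differs from the bit count only by the constant factor log₂(b).

```python
import math

def symbols_needed(n: int, base: int) -> int:
    """Number of base-`base` digits needed to write n (assumes n >= 1)."""
    return math.floor(math.log(n, base)) + 1

n = 10**12
bits = symbols_needed(n, 2)         # input size over the binary alphabet {0, 1}
hex_digits = symbols_needed(n, 16)  # input size over a 16-symbol alphabet

# The two sizes differ only by (roughly) the constant log2(16) = 4,
# independent of n -- changing the alphabet rescales the size by a constant.
print(bits, hex_digits)             # 40 10
```

So whether the tape alphabet is binary or larger, the input size stays within a constant factor of the bit count, which is why "measured in bits" is a sound convention up to constant factors.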
 
 
Also the [https://plato.stanford.edu/entries/computational-complexity/#BasCon Stanford Encyclopedia of Philosophy] says:
* "For each problem X, it is also assumed that an appropriate notion of problem size is defined for its instances. Formally, this is a function |⋅| : X → N chosen so that the efficiency of a decision algorithm for X will vary uniformly in |x|. As we have seen, if X ⊆ N, it is standard to take |n| = log₂(n), i.e. the number of digits (or length) of the binary numeral ⌜n⌝ representing n."
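This |n| = number of binary digits of n is exactly what Python's built-in int.bit_length() computes; a quick sketch (illustrative only):

```python
# |n| = number of digits of the binary numeral for n, i.e. floor(log2 n) + 1.
# Python's int.bit_length() returns exactly this measure.
for n in (1, 2, 255, 256, 10**6):
    assert n.bit_length() == len(bin(n)) - 2  # bin(n) prefixes "0b"
    print(n, bin(n), n.bit_length())
```

Note that measuring a graph problem by its number of vertices v does not contradict this convention either: a standard adjacency-matrix encoding takes v² bits, so v and the bit size are polynomially related, and statements about polynomial time are unaffected by the choice.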