Talk:Computational complexity theory: Difference between revisions

Line 522:
I will undo the change of 03:53, 19 May 2018 by 69.172.150.202. The user removed the bold text from the following paragraph.
 
* {{quote|Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of '''the size of the instance. This is usually taken to be''' the size of the input '''in bits'''.}}
 
So the text was changed to
 
* {{quote|Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the input.}}
 
The user added the following comment to justify the edit:
 
* "{{quote|Measuring the size of the instance "in bits" is a misleading and unfounded generalization. The same paragraph goes on to give an example where the size is measured on the number of vertices (not the number of bits needed to represent the number of vertices)."}}
 
I brought the original text to the attention of the user in a discussion of a [https://codereview.stackexchange.com/a/194617/12462 post] at codereview.stackexchange.com. The user then told me:
 
* {{quote|Nobody thinks that way. That was a misleading and uncited statement on Wikipedia. Now it doesn't say that anymore.}} ([https://codereview.stackexchange.com/questions/194611/hackerrank-challenge-apple-and-orange#comment374709_194617 comment])
and changed this wiki text as described above.
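
To make the point concrete (this is only my own sketch, not text from the article or from either source): measuring a graph by its number of vertices and measuring it by the bits of its encoding differ only polynomially, so the article's graph example is compatible with the "in bits" convention, whereas measuring a number by its value and by its bits differ exponentially.

<syntaxhighlight lang="python">
# Rough illustration (my own sketch, not from the article or the cited sources).

def adjacency_matrix_bits(num_vertices: int) -> int:
    """Bits needed to encode a simple undirected graph on num_vertices
    vertices as an adjacency matrix: one bit per unordered vertex pair."""
    return num_vertices * (num_vertices - 1) // 2

def binary_size(n: int) -> int:
    """Length of the binary representation of n, i.e. roughly log2(n) bits."""
    return n.bit_length()

# Graphs: vertex count and bit count differ only polynomially (n vs ~n^2/2), ...
for v in (10, 100, 1000):
    print(v, adjacency_matrix_bits(v))

# ... while a number's value and its bit length differ exponentially.
for n in (10, 1000, 10**6):
    print(n, binary_size(n))
</syntaxhighlight>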
Line 541:
James Aspnes says in [http://www.cs.yale.edu/homes/aspnes/classes/468/notes.pdf Notes on Computational Complexity Theory, CPSC 468/568: Spring 2017, page 7]
* {{quote|The time complexity of an execution is the number of steps until the machine halts. Typically we will try to bound the time complexity as a function of the size n of the input, defined as the number of cells occupied by the input, excluding the infinite number of blanks that surround it.}}
 
So the size of the input depends on the alphabet used to encode the input of the Turing machine, but it is proportional to the number of bits of the input numbers.
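
A small sketch of that proportionality (my own illustration, not taken from Aspnes' notes): a number n written over a fixed alphabet of b ≥ 2 symbols occupies about log_b(n) = log_2(n)/log_2(b) cells, i.e. within a constant factor of its bit length.

<syntaxhighlight lang="python">
# My own illustration of the proportionality claim above, not from Aspnes' notes.

def digits_in_base(n: int, base: int) -> int:
    """Number of digits of n (n >= 1) when written in the given base."""
    count = 0
    while n > 0:
        n //= base
        count += 1
    return count

# For any fixed alphabet size b >= 2, the encoding length log_b(n) is
# log_2(n) / log_2(b), i.e. within a constant factor of the bit length.
n = 12345
for base in (2, 10, 16):
    print(base, digits_in_base(n, base), n.bit_length())
</syntaxhighlight>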
Line 549:
Also the [https://plato.stanford.edu/entries/computational-complexity/#BasCon Stanford Encyclopedia of Philosophy] says
* {{quote|For each problem X, it is also assumed that an appropriate notion of problem size is defined for its instances. Formally, this is a function |⋅| : X → ℕ chosen so that the efficiency of a decision algorithm for X will vary uniformly in |x|. As we have seen, if X ⊆ ℕ, it is standard to take |n| = log<sub>2</sub>(n), i.e. the number of digits (or length) of the binary numeral ⌜n⌝ representing n.}}
These are two citations for the original statement.
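
To show why the convention |n| = log2(n) matters (again just my own sketch, not taken from either citation): an algorithm whose running time is polynomial in the value n is exponential in the input size |n|, which is exactly the distinction the removed sentence captured.

<syntaxhighlight lang="python">
# My own sketch, not from the Stanford Encyclopedia entry: trial division
# takes on the order of sqrt(n) steps, which is polynomial in the value n
# but exponential (about 2**(|n|/2)) in the input size |n| = n.bit_length().

def is_prime_trial_division(n: int) -> bool:
    """Primality test by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # roughly sqrt(n) iterations
        if n % d == 0:
            return False
        d += 1
    return True

n = 1_000_003
print(n, n.bit_length(), is_prime_trial_division(n))
</syntaxhighlight>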