== Algorithm or Procedure? ==
For a black-box problem, the implementation inside the black box should not matter. Computationally speaking, the function could just as well be implemented mechanically, using wooden gears, as long as it maps each input to a unique output. Using the same reasoning, I could design a classical setup where one of two computers is equipped with an X-ray camera and then define the problem such that the black box must not contain lead shielding—giving the X-ray-equipped computer an unfair advantage.
Shouldn't the concept of an algorithm be independent of such physically constrained advantages? <!-- Template:Unsigned IP --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/2001:1C06:2704:D500:DF17:1D45:B3B2:2107|2001:1C06:2704:D500:DF17:1D45:B3B2:2107]] ([[User talk:2001:1C06:2704:D500:DF17:1D45:B3B2:2107#top|talk]]) 12:37, 7 March 2025 (UTC)</small> <!--Autosigned by SineBot-->
:(1) regarding ''runs faster'': The statement "algorithm X runs exponentially faster than algorithm Y" is (to my understanding) the standard way to express that "there is an exponential separation between the (time-, query-, ...) complexity classes to which X and Y belong, respectively". I.e., if S(X,N) and S(Y,N) denote the number of steps needed to run the respective algorithm for an N-bit input, and for large enough N the ratio S(Y,N)/S(X,N) grows exponentially in N, one would say that "X is exponentially faster than Y". In the DJ case, we deal with a black-box algorithm, and S(X,N) is measured in terms of the number of queries needed (also known as "query complexity" - but here it's the same as time complexity in a setting where "query the oracle" is the only allowed gate). So it's only a statement about the actual run time ''for sufficiently large input'', when any effects of sub-leading terms and prefactors (due to specifics of the implementation such as programming language, gate set, ...) become unimportant.<br>
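As a toy illustration of the separation described above (my own sketch, not part of the original reply; it assumes the standard worst-case query counts for the Deutsch–Jozsa problem), the ratio S(Y,N)/S(X,N) between the deterministic classical and the quantum query counts grows exponentially:

```python
# Worst-case oracle-query counts for the Deutsch-Jozsa problem.
def classical_queries(n):
    # A deterministic classical algorithm may need to see half of the
    # 2**n function values plus one before the promise (constant or
    # balanced) resolves the answer.
    return 2 ** (n - 1) + 1

def quantum_queries(n):
    # The Deutsch-Jozsa algorithm needs a single oracle query,
    # independent of n.
    return 1

for n in (2, 4, 8, 16):
    ratio = classical_queries(n) / quantum_queries(n)
    print(n, ratio)  # the ratio grows like 2**(n-1), i.e. exponentially in n
```

Only the leading behaviour matters here: prefactors from a particular gate set or implementation would not change the exponential growth of the ratio.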
:(2) ''regarding the implementation of the black box'': It is correct that the implementation of the black box shouldn't matter (and it does not), but when comparing quantum and classical algorithms one needs to be precise about the black box: we need a "quantum black box", i.e., one that not only produces <math>\mathcal{B}_f(|j\rangle)=|f(j)\rangle</math> for all basis states of the register but also <math>\mathcal{B}_f\left(\textstyle\sum_j c_j |j\rangle\right)=\sum_j c_j |f(j)\rangle</math>. It doesn't matter whether this is built from (quantum-)mechanical oscillators, superconducting qubits or photons, as long as it realizes the above action on all possible input states. One can run both quantum and classical algorithms using this oracle, and the quantum algorithm will win because the classical one cannot query the black box with superposition states (they are not available to a classical algorithm). This implies that ''for the classical algorithm'' the black box <math>\mathcal{B}_f</math> is equivalent to a classical black box that just realizes <math>\mathcal{B}_f^{(c)}\left(\textstyle\sum_j u_j |j\rangle\right)=\sum_j |u_j|^2 |f(j)\rangle\langle f(j)|</math>. When probing the two black boxes only with basis states <math>|j\rangle</math> they seem to be identical, but they are not. In particular, one cannot run the DJ algorithm on the classical black box <math>\mathcal{B}_f^{(c)}</math> (one can, but it doesn't work). So that is the setting for which the quantum advantage is claimed: provided with the black box <math>\mathcal{B}_f</math> and the promise that ''f'' is either constant or balanced, we can determine its properties with exponentially fewer queries with a quantum algorithm than with a classical one. --[[User:Qcomp|Qcomp]] ([[User talk:Qcomp|talk]]) 13:37, 7 March 2025 (UTC)