Quantum optimization algorithms

{{Use American English|date=January 2019}}
 
'''Quantum optimization algorithms''' are [[quantum algorithms]] that are used to solve optimization problems.<ref>{{cite journal|last1=Moll|first1=Nikolaj|last2=Barkoutsos|first2=Panagiotis|last3=Bishop|first3=Lev S.|last4=Chow|first4=Jerry M.|last5=Cross|first5=Andrew|last6=Egger|first6=Daniel J.|last7=Filipp|first7=Stefan|last8=Fuhrer|first8=Andreas|last9=Gambetta|first9=Jay M.|last10=Ganzhorn|first10=Marc|last11=Kandala|first11=Abhinav|last12=Mezzacapo|first12=Antonio|last13=Müller|first13=Peter|last14=Riess|first14=Walter|last15=Salis|first15=Gian|last16=Smolin|first16=John|last17=Tavernelli|first17=Ivano|last18=Temme|first18=Kristan|title=Quantum optimization using variational algorithms on near-term quantum devices|journal=Quantum Science and Technology|date=2018|volume=3|issue=3|pages=030503|doi=10.1088/2058-9565/aab822|arxiv=1710.01022|bibcode=2018QS&T....3c0503M|s2cid=56376912}}</ref> [[Mathematical optimization]] deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. The optimization problem is typically formulated as a minimization problem, where one tries to minimize an error that depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as [[mechanics]], [[economics]] and [[engineering]], and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. [[Quantum computing]] may allow problems that are not practically feasible on classical computers to be solved, or may offer a considerable speedup over the best known classical algorithm.
 
==Quantum data fitting==
One of the most common types of data fitting is solving the [[least squares]] problem, minimizing the sum of the squares of differences between the data points and the fitted function.
 
The algorithm is given as input <math> N </math> data points <math> (x_1, y_1), (x_2, y_2), ... , (x_N, y_N)</math> and <math> M </math> [[continuous function]]s <math> f_{1}, f_{2}, ... , f_{M} </math>. The algorithm outputs a continuous function <math> f_{\vec{\lambda}} </math> that is a [[linear combination]] of the functions <math> f_j</math>:
 
:<math>
f_{\vec{\lambda}}(x) = \sum_{j=1}^{M} f_j(x)\lambda_j
</math>
In other words, the algorithm finds the [[complex number|complex]] coefficients <math>\lambda_j </math>, and thus finds the vector <math> \vec{\lambda} = (\lambda_1, \lambda_2, ..., \lambda_M) </math>.
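
For intuition, the sketch below (a minimal classical illustration, not part of the quantum algorithm) evaluates such a linear combination for a hypothetical choice of basis functions; the names <code>basis</code>, <code>f_lambda</code> and <code>coeffs</code> are illustrative only:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical basis functions f_1, f_2, f_3 (here M = 3: constant,
# linear and quadratic terms); any continuous functions would do.
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

def f_lambda(x, coeffs):
    """Evaluate f_lambda(x) = sum_j f_j(x) * lambda_j."""
    return sum(c * f(x) for f, c in zip(basis, coeffs))

# Example: f_lambda(x) = 1 + 0.5*x + 2*x^2 evaluated at x = 3.
print(f_lambda(np.array(3.0), [1.0, 0.5, 2.0]))  # 20.5
</syntaxhighlight>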
 
The algorithm is aimed at minimizing the error, which is given by:
:<math>
E = \sum_{i=1}^{N} \left\vert f_{\vec{\lambda}}(x_i) - y_i \right\vert^2 = \sum_{i=1}^{N} \left\vert \sum_{j=1}^{M} f_{j}(x_i)\lambda_{j}-y_i \right\vert^2 = \left\vert F\vec{\lambda}-\vec{y} \right\vert^2
</math>
where <math> F </math> is defined to be the following matrix:
 
:<math>{F} = \begin{pmatrix}
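
To make the objective concrete, the following sketch (a classical reference computation with NumPy and illustrative sample data, not the quantum procedure) builds <math>F</math> entrywise as <math>F_{ij}=f_j(x_i)</math> and minimizes <math>\left\vert F\vec{\lambda}-\vec{y} \right\vert^2</math> with an ordinary least-squares solver; the quantum algorithm instead obtains <math>\vec{\lambda}</math> through quantum linear-system techniques:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sample data (x_i, y_i), i = 1..N.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 18.8, 33.1])

# The same hypothetical basis f_1, ..., f_M as above.
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]

# Build F with entries F_ij = f_j(x_i): one row per data point,
# one column per basis function.
F = np.column_stack([f(x) for f in basis])

# Solve min over lambda of |F lambda - y|^2 classically.
lam, residual, rank, _ = np.linalg.lstsq(F, y, rcond=None)

print("coefficients:", lam)                 # the vector (lambda_1, ..., lambda_M)
print("error:", np.sum((F @ lam - y)**2))   # |F lambda - y|^2
</syntaxhighlight>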