{{short description|Quantum Mechanics in Neural Networks}}
[[File:Neural Network - basic scheme with legends.png|thumb|Sample model of a feed-forward neural network. For a deep learning network, increase the number of hidden layers.]]
'''Quantum neural networks''' are computational neural network models which are based on the principles of [[quantum mechanics]].
Most quantum neural networks are developed as [[Feedforward neural network|feed-forward]] networks. Similar to their classical counterparts, this structure takes input from one layer of qubits and passes it on to another layer of qubits. That layer evaluates the information and passes the output on to the next layer, until the final layer of qubits is reached.<ref name=":0">{{Cite journal|last1=Beer|first1=Kerstin|last2=Bondarenko|first2=Dmytro|last3=Farrelly|first3=Terry|last4=Osborne|first4=Tobias J.|last5=Salzmann|first5=Robert|last6=Scheiermann|first6=Daniel|last7=Wolf|first7=Ramona|date=2020-02-10|title=Training deep quantum neural networks|journal=Nature Communications|volume=11|issue=1|page=808|doi=10.1038/s41467-020-14454-2}}</ref>
== Examples ==
=== Quantum perceptrons ===
Many proposals attempt to find a quantum equivalent for the [[perceptron]] unit from which neural nets are constructed. A problem is that nonlinear activation functions do not immediately correspond to the mathematical structure of quantum theory, since a quantum evolution is described by linear operations and leads to probabilistic observation. Ideas to imitate the perceptron activation function with a quantum mechanical formalism range from special measurements to postulating non-linear quantum operators (a mathematical framework that is disputed).
=== Quantum networks ===
At a larger scale, researchers have attempted to generalize neural networks to the quantum setting. One way of constructing a quantum neuron is to first generalise classical neurons and then to generalise them further to make unitary gates. Interactions between neurons can be controlled quantumly, with [[unitary operator|unitary]] [[quantum logic gate|gates]], or classically, via [[measurement in quantum mechanics|measurement]] of the network states. This high-level theoretical technique can be applied broadly, by taking different types of networks and different implementations of quantum neurons, such as [[Integrated quantum photonics|photonically]] implemented neurons.<ref name="WanDKGK16">{{cite journal|last1=Wan|first1=Kwok-Ho|last2=Dahlsten|first2=Oscar|last3=Kristjansson|first3=Hler|last4=Gardner|first4=Robert|last5=Kim|first5=Myungshik|year=2017|title=Quantum generalisation of feedforward neural networks|journal=npj Quantum Information|arxiv=1612.01045}}</ref>
Quantum neural networks can be applied to algorithmic design: given [[qubits]] with tunable mutual interactions, one can attempt to learn interactions following the classical [[backpropagation]] rule from a [[training set]] of desired input-output relations, taken to be the desired output algorithm's behavior.<ref>{{cite journal |first1=J. |last1=Bang |display-authors=1 |first2=Junghee |last2=Ryu |first3=Seokwon |last3=Yoo |first4=Marcin |last4=Pawłowski |first5=Jinhyoung |last5=Lee |doi=10.1088/1367-2630/16/7/073017 |title=A strategy for quantum algorithm design assisted by machine learning |journal=New Journal of Physics |volume=16 |issue= 7|pages=073017 |year=2014 |arxiv=1301.1132 |bibcode=2014NJPh...16g3017B |s2cid=55377982 }}</ref><ref>{{cite journal |first1=E. C. |last1=Behrman |first2=J. E. |last2=Steck |first3=P. |last3=Kumar |first4=K. A. |last4=Walsh |arxiv=0808.1558 |title=Quantum Algorithm design using dynamic learning |journal=Quantum Information and Computation |volume=8 |issue=1–2 |pages=12–29 |year=2008 |doi=10.26421/QIC8.1-2-2 |s2cid=18587557 }}</ref> The quantum network thus ‘learns’ an algorithm.
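The idea can be illustrated with a short Python sketch (an illustrative toy example, not the procedure of the cited works): a small parameterised single-qubit circuit is adjusted by finite-difference gradient descent so that its action on a training set of input states matches the outputs of a target algorithm, here taken to be the Hadamard gate.

<syntaxhighlight lang="python">
import numpy as np

# Single-qubit rotation gates used as trainable building blocks.
def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def circuit(params):
    a, b, c = params
    return rz(c) @ ry(b) @ rz(a)          # trainable unitary U(a, b, c)

# Training set: desired input-output behaviour of the target "algorithm"
# (here, for illustration, the Hadamard gate).
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
inputs = [np.array([1, 0], dtype=complex),
          np.array([0, 1], dtype=complex),
          np.array([1, 1], dtype=complex) / np.sqrt(2)]
targets = [hadamard @ s for s in inputs]

def cost(params):
    """1 minus the average fidelity between produced and desired outputs."""
    u = circuit(params)
    fidelities = [abs(np.vdot(t, u @ s)) ** 2 for s, t in zip(inputs, targets)]
    return 1.0 - np.mean(fidelities)

# Plain finite-difference gradient descent on the circuit parameters.
params, lr, eps = np.array([0.1, 0.2, 0.3]), 0.5, 1e-4
for _ in range(500):
    grad = np.array([(cost(params + eps * np.eye(3)[k]) -
                      cost(params - eps * np.eye(3)[k])) / (2 * eps)
                     for k in range(3)])
    params -= lr * grad

print("final cost:", round(cost(params), 4))  # should be close to 0
</syntaxhighlight>

In this toy setting the cost typically drops close to zero, i.e. the learned unitary reproduces the target behaviour on the training set.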
=== Quantum associative memory ===
The first quantum associative memory algorithm was introduced by Dan Ventura and Tony Martinez in 1999.<ref>{{cite arXiv |last1=Ventura |first1=Dan |last2=Martinez |first2=Tony |title=Quantum Associative Memory |eprint=quant-ph/9807053}}</ref>
The first truly content-addressable quantum memory, which can retrieve patterns also from corrupted inputs, was proposed by Carlo A. Trugenberger.<ref>{{Cite journal |last=Trugenberger |first=C. A. |date=2001-07-18 |title=Probabilistic Quantum Memories |url=http://dx.doi.org/10.1103/physrevlett.87.067901 |journal=Physical Review Letters |volume=87 |issue=6 |article-number=067901 |doi=10.1103/physrevlett.87.067901 |pmid=11497863 |issn=0031-9007|arxiv=quant-ph/0012100 |bibcode=2001PhRvL..87f7901T |s2cid=23325931 }}</ref><ref name=":2">{{Cite journal |last=Trugenberger |first=Carlo A. |date=2002 |title=Quantum Pattern Recognition |journal=Quantum Information Processing |volume=1 |issue=6 |pages=471–493|doi=10.1023/A:1024022632303 |arxiv=quant-ph/0210176 |bibcode=2002QuIP....1..471T |s2cid=1928001 }}</ref><ref>{{Cite journal |last=Trugenberger |first=C. A. |date=2002-12-19 |title=Phase Transitions in Quantum Pattern Recognition |url=http://dx.doi.org/10.1103/physrevlett.89.277903 |journal=Physical Review Letters |volume=89 |issue=27 |article-number=277903 |doi=10.1103/physrevlett.89.277903 |pmid=12513243 |issn=0031-9007|arxiv=quant-ph/0204115 |bibcode=2002PhRvL..89A7903T |s2cid=33065081 }}</ref> Both memories can store an exponential (in terms of n qubits) number of patterns but can be used only once due to the no-cloning theorem and their destruction upon measurement.
Trugenberger,<ref name=":2" /> however, has shown that his probabilistic model of quantum associative memory can be efficiently implemented and re-used multiple times for any polynomial number of stored patterns, a significant advantage over classical associative memories.
=== Classical neural networks inspired by quantum theory ===
A substantial amount of interest has been given to a “quantum-inspired” model that uses ideas from quantum theory to implement a neural network based on [[fuzzy logic]].<ref>{{cite journal |first1=G. |last1=Purushothaman |first2=N. |last2=Karayiannis |url=https://pdfs.semanticscholar.org/fe11/93d386f42358e7cf9b1f71bf33e7ddd945b5.pdf |archive-url=https://web.archive.org/web/20170911115935/https://pdfs.semanticscholar.org/fe11/93d386f42358e7cf9b1f71bf33e7ddd945b5.pdf |url-status=dead |archive-date=2017-09-11 |title=Quantum Neural Networks (QNN's): Inherently Fuzzy Feedforward Neural Networks |journal=IEEE Transactions on Neural Networks |volume=8 |issue=3 |pages=679–93 |year=1997 |doi=10.1109/72.572106 |pmid=18255670 |s2cid=1634670 }}</ref>
== Training ==
Quantum neural networks can be theoretically trained similarly to classical [[artificial neural network]]s. A key difference lies in communication between the layers of the network. In a classical neural network, a perceptron simply copies its output to the next layer of perceptrons; in a quantum neural network, where each perceptron is a qubit, such copying would violate the [[no-cloning theorem]].<ref name=":0" /> A proposed solution is to replace the classical fan-out with an arbitrary [[unitary operator|unitary]] that spreads out, but does not copy, the output of one qubit onto the qubits of the next layer.
Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. A deep neural network is essentially a network with many hidden layers, as seen in the sample neural network above. Since the quantum neural network being discussed uses fan-out unitary operators, and each operator only acts on its respective input, only two layers of qubits are needed at any given time.<ref name=":0" /> In other words, no unitary operator acts on the entire network at once, so the number of qubits required for a given step depends only on the number of inputs in that layer. Because quantum computers can run many iterations in a short period of time, the efficiency of a quantum neural network depends solely on the number of qubits in any given layer, and not on the depth of the network.<ref name=":1" />
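The following numpy sketch illustrates this point (with random unitaries standing in for trained layer operators, and layer sizes chosen arbitrarily): each step combines the current layer with a freshly initialised next layer, applies a unitary to the pair, and traces out the previous layer, so only two adjacent layers are ever held in memory.

<syntaxhighlight lang="python">
import numpy as np

def random_unitary(dim, rng):
    """Haar-random unitary, standing in for a trained layer operator."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def propagate_layer(rho_in, n_in, n_out, rng):
    """Map the state of one layer (n_in qubits) to the next (n_out qubits)."""
    d_in, d_out = 2 ** n_in, 2 ** n_out
    fresh = np.zeros((d_out, d_out), dtype=complex)
    fresh[0, 0] = 1.0                          # next layer initialised to |0...0><0...0|
    rho = np.kron(rho_in, fresh)               # only two layers in memory here
    u = random_unitary(d_in * d_out, rng)      # stands in for the trained layer unitary
    rho = u @ rho @ u.conj().T
    rho = rho.reshape(d_in, d_out, d_in, d_out)
    return np.einsum('ijik->jk', rho)          # trace out the previous layer

rng = np.random.default_rng(seed=0)
layer_sizes = [2, 3, 2]                         # qubits per layer
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0                                 # input layer state |00><00|
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    rho = propagate_layer(rho, n_in, n_out, rng)

print("output layer density matrix trace:", np.trace(rho).real)  # approx. 1.0
</syntaxhighlight>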
To determine the effectiveness of a neural network, a cost function is used, which essentially measures the proximity of the network's output to the expected or desired output. In a classical neural network, the weights (<math>w</math>) and biases (<math>b</math>) at each step determine the outcome of the cost function <math>C(w,b)</math>.<ref name=":0" /> When training a classical neural network, the weights and biases are adjusted after each iteration, and given Equation 1 below, where <math>y(x)</math> is the desired output and <math>a^\text{out}(x)</math> is the actual output, the cost function is optimized when <math>C(w,b)=0</math>. For a quantum neural network, the cost function is determined by measuring the fidelity of the outcome state (<math>\rho^\text{out}</math>) with the desired outcome state (<math>\phi^\text{out}</math>), seen in Equation 2 below. In this case, the unitary operators are adjusted after each iteration, and the cost function is optimized when <math>C=1</math>.<ref name=":0" />
 Equation 1 <math>C(w,b)={1 \over N}\sum_{x}{||y(x)-a^\text{out}(x)|| \over 2}</math>
 Equation 2 <math>C ={1 \over N}\sum_{x}^N{\langle\phi^\text{out}|\rho^\text{out}|\phi^\text{out}\rangle}</math>
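The two cost functions can be written out directly; the following Python sketch uses hand-picked example outputs purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

# --- Equation 1: classical cost over N training pairs (y desired, a actual) ---
desired = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
actual  = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
classical_cost = np.mean([np.linalg.norm(y - a) / 2 for y, a in zip(desired, actual)])

# --- Equation 2: quantum cost, the mean fidelity <phi|rho|phi> over training pairs ---
phi_out = [np.array([1, 0], dtype=complex),                   # desired output states
           np.array([1, 1], dtype=complex) / np.sqrt(2)]
rho_out = [np.outer(p, p.conj()) for p in                     # actual (here: pure) output states
           (np.array([0.98, 0.2], dtype=complex) / np.linalg.norm([0.98, 0.2]),
            np.array([1.0, 0.9], dtype=complex) / np.linalg.norm([1.0, 0.9]))]
quantum_cost = np.mean([np.vdot(phi, rho @ phi).real for phi, rho in zip(phi_out, rho_out)])

print(f"classical cost C(w,b) = {classical_cost:.3f}  (0 is optimal)")
print(f"quantum cost  C       = {quantum_cost:.3f}  (1 is optimal)")
</syntaxhighlight>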
=== Barren plateaus ===
[[File:Barren_plateaus_of_VQA.webp|alt=The barren plateau problem becomes increasingly serious as the VQA expands|thumb|Barren plateaus of variational quantum algorithms (VQAs):<ref>{{Cite journal |last1=Wang |first1=Samson |last2=Fontana |first2=Enrico |last3=Cerezo |first3=M. |last4=Sharma |first4=Kunal |last5=Sone |first5=Akira |last6=Cincio |first6=Lukasz |last7=Coles |first7=Patrick J. |date=2021-11-29 |title=Noise-induced barren plateaus in variational quantum algorithms |journal=Nature Communications |language=en |volume=12 |issue=1 |page=6961 |arxiv=2007.14384 |bibcode=2021NatCo..12.6961W |doi=10.1038/s41467-021-27045-6 |issn=2041-1723 |pmc=8630047 |pmid=34845216}}</ref> the problem becomes increasingly serious as the circuit grows.]]
Gradient descent is widely used and successful in classical algorithms. However, although QNN architectures are structurally similar to classical neural networks such as [[Convolutional neural network|CNNs]], gradient-based training of QNNs performs considerably worse.
Because the quantum state space expands exponentially as the number of qubits grows, observables concentrate around their mean value at an exponential rate, and the corresponding gradients also become exponentially small.<ref name=":3">{{Cite journal |last1=McClean |first1=Jarrod R. |last2=Boixo |first2=Sergio |last3=Smelyanskiy |first3=Vadim N. |last4=Babbush |first4=Ryan |last5=Neven |first5=Hartmut |date=2018-11-16 |title=Barren plateaus in quantum neural network training landscapes |journal=Nature Communications |language=en |volume=9 |issue=1 |page=4812 |arxiv=1803.11173 |bibcode=2018NatCo...9.4812M |doi=10.1038/s41467-018-07090-4 |issn=2041-1723 |pmc=6240101 |pmid=30446662}}</ref>
This situation is known as a barren plateau: most randomly initialized parameters are trapped on a "plateau" of almost zero gradient, so training resembles a random walk<ref name=":3" /> rather than gradient descent, and the model becomes effectively untrainable.
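The effect can be illustrated numerically. The following sketch (in the spirit of the cited analysis, not a reproduction of it) estimates the variance of one gradient component of a simple cost function over randomly initialised layered circuits; the variance is expected to shrink rapidly as the number of qubits increases.

<syntaxhighlight lang="python">
import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def embed(gate, start, width, n):
    """Embed a gate acting on qubits [start, start+width) into an n-qubit operator."""
    out, q = np.eye(1), 0
    while q < n:
        if q == start:
            out, q = np.kron(out, gate), q + width
        else:
            out, q = np.kron(out, I2), q + 1
    return out

def cost(thetas, n, layers):
    """<psi(theta)| Z_0 |psi(theta)> for a layered Ry + CZ-chain circuit."""
    psi = np.zeros(2 ** n)
    psi[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):                      # one Ry rotation per qubit
            psi = embed(ry(thetas[k]), q, 1, n) @ psi
            k += 1
        for q in range(n - 1):                  # entangling CZ chain
            psi = embed(CZ, q, 2, n) @ psi
    return float(psi @ embed(Z, 0, 1, n) @ psi)

rng = np.random.default_rng(seed=0)
for n in (2, 4, 6):
    layers, samples, grads = n, 200, []
    for _ in range(samples):
        thetas = rng.uniform(0, 2 * np.pi, size=n * layers)
        shift = np.zeros_like(thetas)
        shift[0] = np.pi / 2
        # parameter-shift rule for the first Ry parameter
        grads.append((cost(thetas + shift, n, layers) - cost(thetas - shift, n, layers)) / 2)
    print(f"{n} qubits: Var[dC/dtheta_1] ~ {np.var(grads):.4f}")
</syntaxhighlight>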
This problem affects not only QNNs but almost all deep variational quantum algorithms (VQAs). In the present [[Noisy intermediate-scale quantum era|NISQ era]], it is one of the problems that must be solved if the various VQA algorithms, including QNNs, are to find wider application.
==See also==
*[[Differentiable programming]]
*[[Optical neural network]]
*[[Holographic associative memory]]
[[Category:Artificial neural networks]]
[[Category:Quantum information science]]
[[Category:Quantum programming]]