Quantum neural network
Quantum neural networks can in principle be trained similarly to classical artificial neural networks. A key difference lies in communication between the layers of a neural network. For classical neural networks, at the end of a given operation, the current [[perceptron]] copies its output to the next layer of perceptron(s) in the network. However, in a quantum neural network, where each perceptron is a qubit, this would violate the [[no-cloning theorem]].<ref name=":0" /><ref>{{Cite book|last1=Nielsen|first1=Michael A|url=https://www.worldcat.org/oclc/665137861|title=Quantum computation and quantum information|last2=Chuang|first2=Isaac L|date=2010|publisher=Cambridge University Press|isbn=978-1-107-00217-3|___location=Cambridge; New York|language=en|oclc=665137861}}</ref> A proposed generalized solution is to replace the classical [[Fan-out (software)|fan-out]] method with an arbitrary [[Unitary matrix|unitary]] that spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out unitary (<math>U_f</math>) with a dummy qubit in a known state (e.g. <math>|0\rangle</math> in the [[Qubit|computational basis]]), also known as an [[Ancilla bit|ancilla bit]], the information from one qubit can be transferred to the next layer of qubits.<ref name="WanDKGK16" /> This process adheres to the quantum operation requirement of [[Reversible computing|reversibility]].<ref name="WanDKGK16" /><ref name=":1">{{Cite journal|last=Feynman|first=Richard P.|date=1986-06-01|title=Quantum mechanical computers|url=https://doi.org/10.1007/BF01886518|journal=Foundations of Physics|language=en|volume=16|issue=6|pages=507–531|doi=10.1007/BF01886518|bibcode=1986FoPh...16..507F|s2cid=122076550|issn=1572-9516}}</ref>
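The no-cloning constraint and the ancilla-based workaround can be illustrated with the simplest fan-out unitary, a CNOT gate. This is a minimal sketch (the CNOT as fan-out is an illustrative choice, not the specific unitary from the cited work): it spreads computational-basis information to an ancilla prepared in <math>|0\rangle</math>, producing an entangled state rather than two independent copies.

```python
import numpy as np

# Single-qubit state to "fan out": alpha|0> + beta|1> (normalized).
alpha, beta = 0.6, 0.8
psi = np.array([alpha, beta])

# Ancilla qubit prepared in the known state |0>.
ancilla = np.array([1.0, 0.0])

# Joint state |psi> (x) |0>, amplitudes ordered |00>, |01>, |10>, |11>.
joint = np.kron(psi, ancilla)

# CNOT (control = data qubit, target = ancilla) acting as a minimal
# fan-out unitary: it is reversible and spreads basis-state information.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
out = cnot @ joint

# Result is alpha|00> + beta|11>: an entangled state, NOT the product
# state (alpha|0>+beta|1>)(alpha|0>+beta|1>) that cloning would require.
print(out)  # [0.6 0.  0.  0.8]
```

Note that the output agrees with a true copy only on the basis states <math>|0\rangle</math> and <math>|1\rangle</math>; for superpositions the ancilla becomes entangled with the data qubit, which is exactly what the no-cloning theorem demands.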
 
Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. A deep neural network is essentially a network with many hidden layers, as seen in the sample model neural network above. Since the quantum neural network being discussed uses fan-out unitary operators, and each operator acts only on its respective input, only two layers are used at any given time.<ref name=":0" /> In other words, no unitary operator acts on the entire network at any given time, meaning the number of qubits required for a given step depends on the number of inputs in a given layer. Because quantum computers can run many iterations in a short period of time, the efficiency of a quantum neural network depends solely on the number of qubits in any given layer, and not on the depth of the network.<ref name=":1" />
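The layer-by-layer execution described above implies that the qubit requirement is set by the widest pair of adjacent layers, not by the total depth. A small sketch (the layer widths here are hypothetical, chosen only for illustration):

```python
# Hypothetical deep network: widths of input, hidden, and output layers.
layer_widths = [4, 8, 8, 8, 2]

# Each fan-out unitary acts on one layer and the next, so a step needs
# qubits for exactly two adjacent layers at a time.
qubits_per_step = [layer_widths[i] + layer_widths[i + 1]
                   for i in range(len(layer_widths) - 1)]

# Peak requirement is set by the widest adjacent pair, not the depth:
# adding more hidden layers of the same width leaves the peak unchanged.
peak_qubits = max(qubits_per_step)

print(qubits_per_step)  # [12, 16, 16, 10]
print(peak_qubits)      # 16
```

Doubling the depth by repeating the width-8 hidden layers would add more steps but never raise `peak_qubits` above 16, which is the efficiency claim in the paragraph above.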
 
=== Cost functions ===
To determine the effectiveness of a neural network, a cost function is used, which essentially measures the proximity of the network's output to the expected or desired output. In a classical neural network, the weights (<math>w </math>) and biases (<math>b </math>) at each step determine the outcome of the cost function <math>C(w, b)</math>.<ref name=":0" /> When training a classical neural network, the weights and biases are adjusted after each iteration, and given equation 1 below, where <math>y(x)</math> is the desired output and <math>a^\text{out}(x)</math> is the actual output, the cost function is optimized when <math>C(w, b) = 0</math>. For a quantum neural network, the cost function is determined by measuring the fidelity of the outcome state (<math>\rho^\text{out}</math>) with the desired outcome state (<math>\phi^\text{out}</math>), seen in Equation 2 below. In this case, the unitary operators are adjusted after each iteration, and the cost function is optimized when <math>C = 1</math>.<ref name=":0" />
Equation 1: <math>C(w,b)={1 \over N}\sum_{x}{||y(x)-a^\text{out}(x)|| \over 2}</math>
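The two cost functions can be compared numerically. This is a minimal sketch with made-up outputs: the classical cost follows Equation 1 (a norm-based distance, optimal at 0), while the quantum cost uses the pure-state fidelity <math>|\langle\phi^\text{out}|\psi^\text{out}\rangle|^2</math> (optimal at 1); the specific output values below are assumptions for illustration.

```python
import numpy as np

# --- Classical cost (Equation 1): C = (1/N) * sum_x ||y(x) - a_out(x)|| / 2
y     = np.array([[1.0, 0.0], [0.0, 1.0]])  # desired outputs (hypothetical)
a_out = np.array([[0.9, 0.1], [0.2, 0.8]])  # actual outputs (hypothetical)
C_classical = np.mean(np.linalg.norm(y - a_out, axis=1) / 2)

# --- Quantum cost: fidelity of the output state with the target state.
# For pure states, fidelity reduces to |<phi_out|psi_out>|^2.
phi_out = np.array([1.0, 0.0])                      # desired state |0>
psi_out = np.array([np.sqrt(0.98), np.sqrt(0.02)])  # actual output state
C_quantum = abs(np.vdot(phi_out, psi_out)) ** 2

# Training pushes C_classical toward 0 and C_quantum toward 1.
print(round(C_classical, 4))  # 0.1061
print(round(C_quantum, 4))    # 0.98
```

A perfect classical network gives <math>C(w,b)=0</math>; a perfect quantum network outputs a state identical (up to phase) to the target, giving fidelity 1.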