Quantum neural network

 
Equation 2: <math>C = \frac{1}{N}\sum_{x=1}^{N}\langle\phi_x^\text{out}|\rho_x^\text{out}|\phi_x^\text{out}\rangle</math>
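Each term in this average is the fidelity between the network's output state <math>\rho_x^\text{out}</math> and the desired output <math>|\phi_x^\text{out}\rangle</math> for training pair <math>x</math>. Because a fidelity always lies between 0 and 1, the cost satisfies

<math>0 \le C \le 1,</math>

with <math>C = 1</math> attained only when every output state coincides with its target; in this fidelity-based formulation, training therefore aims to drive <math>C</math> as close to 1 as possible.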
 
=== Barren plateaus ===
[[File:Barren plateaus of VQA.webp|alt=The barren plateau problem becomes increasingly severe as the variational quantum algorithm grows|thumb|Barren plateaus become increasingly severe as the variational quantum algorithm (VQA) grows in size.<ref>{{Cite journal |last1=Wang |first1=Samson |last2=Fontana |first2=Enrico |last3=Cerezo |first3=M. |last4=Sharma |first4=Kunal |last5=Sone |first5=Akira |last6=Cincio |first6=Lukasz |last7=Coles |first7=Patrick J. |date=2021-11-29 |title=Noise-induced barren plateaus in variational quantum algorithms |url=https://www.nature.com/articles/s41467-021-27045-6 |journal=Nature Communications |language=en |volume=12 |issue=1 |doi=10.1038/s41467-021-27045-6 |issn=2041-1723 |pmc=8630047 |pmid=34845216}}</ref>]]
Gradient descent is widely used and successful in training classical neural networks. However, even though a QNN's layered structure closely resembles that of classical networks such as CNNs, gradient-based training works much less well for QNNs.
 
Because the dimension of the Hilbert space grows exponentially with the number of qubits, expectation values of observables concentrate around their mean at an exponential rate, and the gradients of the cost function are correspondingly exponentially small.<ref>{{Cite journal |last1=McClean |first1=Jarrod R. |last2=Boixo |first2=Sergio |last3=Smelyanskiy |first3=Vadim N. |last4=Babbush |first4=Ryan |last5=Neven |first5=Hartmut |date=2018-11-16 |title=Barren plateaus in quantum neural network training landscapes |url=https://www.nature.com/articles/s41467-018-07090-4 |journal=Nature Communications |language=en |volume=9 |issue=1 |doi=10.1038/s41467-018-07090-4 |issn=2041-1723 |pmc=6240101 |pmid=30446662}}</ref>
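This can be stated more formally. For randomly initialized parameterized circuits that are deep enough to approximate a unitary 2-design, the cited analysis finds that the partial derivative of the cost with respect to any circuit parameter <math>\theta_k</math> vanishes on average, while its variance decays exponentially with the number of qubits <math>n</math> (the exact rate depends on the circuit ansatz and the cost function):

<math>\mathbb{E}\!\left[\partial_{\theta_k} C\right] = 0, \qquad \operatorname{Var}\!\left[\partial_{\theta_k} C\right] \in O\!\left(b^{-n}\right)\ \text{for some}\ b > 1.</math>

By Chebyshev's inequality, the probability that a randomly chosen initialization yields a gradient component larger than any fixed threshold is therefore also exponentially small in <math>n</math>.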
 
This situation is known as a ''barren plateau'': for most random initializations the parameters lie on a "plateau" where the gradient is almost zero, so the optimization behaves more like a random walk than like gradient descent, and the model becomes effectively untrainable.
 
This problem affects not only QNNs but almost all sufficiently deep VQA algorithms. In the current NISQ era, it is one of the obstacles that must be overcome before the various VQA algorithms, including QNNs, can find wider application.
 
==See also==