Backpropagation training algorithms fall into three categories:
* [[Gradient descent|steepest descent]] (with variable [[learning rate]] and [[Gradient descent#Momentum or heavy ball method|momentum]], [[Rprop|resilient backpropagation]]);
* quasi-Newton ([[Broyden–Fletcher–Goldfarb–Shanno algorithm|Broyden–Fletcher–Goldfarb–Shanno]], [[Secant method|one step secant]]);
* [[Levenberg–Marquardt algorithm|Levenberg–Marquardt]] and [[Conjugate gradient method|conjugate gradient]] (Fletcher–Reeves update, Polak–Ribière update, Powell–Beale restart, scaled conjugate gradient).<ref>{{cite conference|author1=M. Forouzanfar|author2=H. R. Dajani|author3=V. Z. Groza|author4=M. Bolic|author5=S. Rajan|name-list-style=amp|date=July 2010|title=Comparison of Feed-Forward Neural Network Training Algorithms for Oscillometric Blood Pressure Estimation|url=https://www.researchgate.net/publication/224173336|conference=4th Int. Workshop Soft Computing Applications|___location=Arad, Romania|publisher=IEEE}}</ref>
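As an illustration of the first category, the steepest-descent-with-momentum update can be sketched as follows. This is a minimal illustrative example, not drawn from the cited reference: the one-dimensional quadratic loss, the learning rate, and the momentum coefficient are all assumed values chosen for demonstration.

```python
def grad(w):
    # Gradient of the toy loss f(w) = 0.5 * w**2, a stand-in for a
    # network's error surface.
    return w

def momentum_descent(w0, lr=0.1, beta=0.9, steps=500):
    """Heavy-ball (momentum) gradient descent on the toy loss."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)  # accumulate a decaying velocity
        w = w + v                    # step along the velocity
    return w

w_final = momentum_descent(5.0)
```

With these assumed settings the iterate spirals toward the minimizer at zero; the velocity term lets the method carry speed across flat regions, which plain steepest descent lacks.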