Levenberg–Marquardt algorithm

In each iteration step, the parameter vector '''p''' is replaced by a new estimate '''p'''+'''q'''. To determine '''q''', the functions ''f''<sub>''i''</sub>('''p'''+'''q''') are approximated by their linearizations, which in vector form read
:'''f'''('''p'''+'''q''') &asymp; '''f'''('''p''') + '''Jq''',
where '''J''' is the [[Jacobian]] of '''f''' at '''p'''.
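
For illustration only (this is not part of the algorithm's statement), the Jacobian entering this linearization can be approximated by forward differences when analytic derivatives are unavailable. The following Python sketch assumes [[NumPy]] and a user-supplied function <code>f</code> returning the residual vector at a parameter vector <code>p</code>; the step size <code>eps</code> is an arbitrary choice.

<syntaxhighlight lang="python">
import numpy as np

def numerical_jacobian(f, p, eps=1e-8):
    """Forward-difference approximation of the Jacobian of f at p.

    f : callable returning the residual vector f(p) of length m
    p : parameter vector of length n
    Returns an (m, n) array J with J[i, j] = d f_i / d p_j.
    """
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(f(p), dtype=float)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps  # perturb one parameter at a time
        J[:, j] = (np.asarray(f(p + dp), dtype=float) - f0) / eps
    return J
</syntaxhighlight>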
 
At a minimum of the sum of squares ''S'', the gradient of ''S'' with respect to '''q''' vanishes, &nabla;<sub>'''q'''</sub>''S'' = 0. With the above linearization, ''S''('''p'''+'''q''') &asymp; ('''f''' + '''Jq''')<sup>T</sup>('''f''' + '''Jq'''), and setting its gradient 2'''J'''<sup>T</sup>('''f''' + '''Jq''') to zero leads to the following equation
:('''J'''<sup>T</sup>'''J''')'''q''' = -'''J'''<sup>T</sup>'''f'''
from which '''q''' can be obtained by inverting '''J'''<sup>T</sup>'''J'''.
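
In floating-point practice the linear system is usually solved directly rather than '''J'''<sup>T</sup>'''J''' explicitly inverted. A minimal NumPy sketch, assuming arrays <code>J</code> and <code>f</code> holding the Jacobian and the residual vector already evaluated at '''p''':

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton_step(J, f):
    """Undamped (Gauss-Newton) step q solving (J^T J) q = -J^T f."""
    # Solving the linear system is numerically preferable to forming
    # the explicit inverse of J^T J.
    return np.linalg.solve(J.T @ J, -(J.T @ f))
</syntaxhighlight>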
The key to the LMA is to replace this equation by a 'damped version'
:('''J'''<sup>T</sup>'''J''' + &lambda;'''I''')'''q''' = -'''J'''<sup>T</sup>'''f''',
where '''I''' is the identity matrix.
The (non-negative) damping factor &lambda; is adjusted at each iteration. If the reduction of ''S'' is rapid, a smaller value can be used, bringing the algorithm closer to the GNA, whereas if an iteration gives insufficient reduction in the residual, &lambda; can be increased, giving a step closer to the gradient-descent direction. A similar damping factor appears in [[Tikhonov regularization]], which is used to solve linear ill-posed problems.
 
If either the length of the computed step '''q''' or the reduction of the sum of squares from the latest parameter vector '''p''' falls below predefined limits, the iteration is aborted, and the last parameter vector '''p''' is considered to be the solution.
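
The update strategy and stopping rule described above can be summarized in the following illustrative Python sketch. It is not a production solver: the factor of 10 used to raise or lower &lambda;, the tolerances, and the helper callables <code>f</code> and <code>jacobian</code> are assumptions made for the example, not part of the algorithm's definition.

<syntaxhighlight lang="python">
import numpy as np

def levenberg_marquardt(f, jacobian, p, lam=1e-3,
                        step_tol=1e-8, reduction_tol=1e-12, max_iter=100):
    """Illustrative Levenberg-Marquardt iteration (sketch, not a library routine).

    f        : callable returning the residual vector at p
    jacobian : callable returning the Jacobian matrix at p
    p        : initial parameter vector
    lam      : initial damping factor (lambda)
    """
    p = np.asarray(p, dtype=float)
    r = np.asarray(f(p), dtype=float)
    S = r @ r                          # current sum of squares
    for _ in range(max_iter):
        J = np.asarray(jacobian(p), dtype=float)
        # Damped normal equations: (J^T J + lambda I) q = -J^T r
        A = J.T @ J + lam * np.eye(p.size)
        q = np.linalg.solve(A, -(J.T @ r))
        r_new = np.asarray(f(p + q), dtype=float)
        S_new = r_new @ r_new
        if S_new < S:
            # Sufficient reduction: accept the step and move toward Gauss-Newton.
            reduction = S - S_new
            p, r, S = p + q, r_new, S_new
            lam /= 10.0
            # Stop when the step length or the reduction falls below the limits.
            if np.linalg.norm(q) < step_tol or reduction < reduction_tol:
                break
        else:
            # Insufficient reduction: reject the step, move toward gradient descent.
            lam *= 10.0
    return p
</syntaxhighlight>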