Levenberg's contribution is to replace the Gauss–Newton normal equations by a "damped version",
<math display="block">\left(\mathbf J^\text{T}\mathbf J + \lambda\mathbf I\right)\boldsymbol\delta = \mathbf J^\text{T}\left[\mathbf y - \mathbf f\left(\boldsymbol\beta\right)\right],</math>
where {{tmath|\mathbf I}} is the identity matrix; solving this system gives the increment {{tmath|\boldsymbol\delta}} to the estimated parameter vector {{tmath|\boldsymbol\beta}}.
The (non-negative) damping factor {{tmath|\lambda}} is adjusted at each iteration. If the reduction of {{tmath|S}} is rapid, a smaller value can be used, bringing the algorithm closer to the [[Gauss–Newton algorithm]], whereas if an iteration gives insufficient reduction in the residual, {{tmath|\lambda}} can be increased, giving a step closer to the gradient-descent direction. Note that the [[gradient]] of {{tmath|S}} with respect to {{tmath|\boldsymbol\beta}} equals {{tmath|-2\,\mathbf J^\text{T}\left[\mathbf y - \mathbf f\left(\boldsymbol\beta\right)\right]}}, so for large values of {{tmath|\lambda}} the step is taken approximately in the direction opposite to the gradient.
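A minimal sketch of one way this damping strategy can be implemented is given below. The model function <code>f(x, beta)</code>, its Jacobian <code>jac(x, beta)</code>, the growth factor <code>nu</code>, and the fixed iteration count are illustrative assumptions for the example, not part of Levenberg's original prescription; practical implementations add convergence tests and further safeguards.
<syntaxhighlight lang="python">
import numpy as np

def levenberg_step(f, jac, x, y, beta, lam):
    """Solve the damped normal equations (J^T J + lam*I) delta = J^T r
    for the increment delta and return the updated parameters."""
    r = y - f(x, beta)                      # residuals y - f(beta)
    J = jac(x, beta)                        # Jacobian of f w.r.t. beta
    A = J.T @ J + lam * np.eye(J.shape[1])  # damped matrix J^T J + lam*I
    delta = np.linalg.solve(A, J.T @ r)     # increment delta
    return beta + delta

def levenberg(f, jac, x, y, beta, lam=1e-2, nu=2.0, max_iter=100):
    """Adjust lambda at each iteration: decrease it after a step that
    reduces S (closer to Gauss-Newton), increase it after a failed step
    (closer to gradient descent)."""
    S = np.sum((y - f(x, beta)) ** 2)       # sum of squared residuals
    for _ in range(max_iter):
        beta_new = levenberg_step(f, jac, x, y, beta, lam)
        S_new = np.sum((y - f(x, beta_new)) ** 2)
        if S_new < S:                       # sufficient reduction: accept
            beta, S, lam = beta_new, S_new, lam / nu
        else:                               # reject the step, raise damping
            lam *= nu
    return beta
</syntaxhighlight>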
Levenberg's algorithm has the disadvantage that if the value of the damping factor {{tmath|\lambda}} is large, the term {{tmath|\lambda\mathbf I}} dominates the matrix {{tmath|\mathbf J^\text{T}\mathbf J + \lambda\mathbf I}} that is inverted, so the curvature information in {{tmath|\mathbf J^\text{T}\mathbf J}} is effectively not used at all. R. Fletcher provided the insight that each component of the gradient can be scaled according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in directions where the gradient is small. Therefore, Fletcher in his 1971 paper ''A modified Marquardt subroutine for non-linear least squares'' replaced the identity matrix {{tmath|\mathbf I}} with the diagonal matrix consisting of the diagonal elements of {{tmath|\mathbf J^\text{T}\mathbf J}}, thus making the solution scale invariant:
<math display="block">\left[\mathbf J^\text{T}\mathbf J + \lambda\operatorname{diag}\left(\mathbf J^\text{T}\mathbf J\right)\right]\boldsymbol\delta = \mathbf J^\text{T}\left[\mathbf y - \mathbf f\left(\boldsymbol\beta\right)\right].</math>
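In the sketch above, Fletcher's modification only changes how the damping term is formed; a corresponding step, using the same hypothetical <code>f</code> and <code>jac</code> as before, might look like this:
<syntaxhighlight lang="python">
import numpy as np

def fletcher_step(f, jac, x, y, beta, lam):
    """One increment with Fletcher's scaling: the damping term uses
    diag(J^T J) instead of the identity, so parameters with small
    curvature (small diagonal entries) take proportionally larger steps."""
    r = y - f(x, beta)
    J = jac(x, beta)
    JTJ = J.T @ J
    A = JTJ + lam * np.diag(np.diag(JTJ))   # scale-invariant damping term
    delta = np.linalg.solve(A, J.T @ r)
    return beta + delta
</syntaxhighlight>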