Mehrotra predictor–corrector method

{{Short description|Optimisation algorithm in linear programming}}
'''Mehrotra's predictor–corrector method''' in [[Optimization (mathematics)|optimization]] is a specific [[interior point method]] for [[linear programming]]. It was proposed in 1989 by Sanjay Mehrotra.<ref>{{cite journal|last=Mehrotra|first=S.|title=On the implementation of a primal–dual interior point method|journal=SIAM Journal on Optimization|volume=2|year=1992|issue=4|pages=575–601|doi=10.1137/0802028}}</ref>
 
The complete search direction is the sum of the predictor direction and the corrector direction.
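Writing the predictor (affine scaling) direction as <math>(\Delta x^\text{aff},\Delta\lambda^\text{aff},\Delta s^\text{aff})</math> and the corrector direction as <math>(\Delta x^\text{cor},\Delta\lambda^\text{cor},\Delta s^\text{cor})</math> (the superscript "cor" is used here purely for illustration), this means

<math>(\Delta x,\Delta\lambda,\Delta s) = (\Delta x^\text{aff},\Delta\lambda^\text{aff},\Delta s^\text{aff}) + (\Delta x^\text{cor},\Delta\lambda^\text{cor},\Delta s^\text{cor}).</math>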
 
Although no theoretical complexity bound for it has yet been established, Mehrotra's predictor–corrector method is widely used in practice.<ref>"In 1989, Mehrotra described a practical algorithm for linear programming that remains the basis of most current software; his work appeared in 1992."{{cite journal|last=Potra|first=Florian A.|author2=Stephen J. Wright|title=Interior-point methods|journal=Journal of Computational and Applied Mathematics|volume=124|year=2000|issue=1–2|pages=281–302|doi=10.1016/S0377-0427(00)00433-7|bibcode=2000JCoAM.124..281P}}</ref> Its corrector step reuses the [[Cholesky decomposition]] computed during the predictor step, so it is only marginally more expensive per iteration than a standard interior point algorithm, and this additional overhead is usually offset by a reduction in the number of iterations needed to reach an optimal solution. It also appears to converge very quickly when close to the optimum.
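The following is a minimal NumPy/SciPy sketch of this reuse, assuming the search directions are computed from normal equations of the form <math>A D^2 A^T \Delta\lambda = r</math> with <math>D^2 = X S^{-1}</math>; the function and variable names are illustrative only and are not taken from Mehrotra's paper.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def predictor_corrector_solves(A, x, s, rhs_predictor, rhs_corrector):
    # Normal-equations matrix A D^2 A^T with D^2 = X S^{-1}; it depends only
    # on the current iterate (x, s), not on the right-hand side.
    d2 = x / s
    M = (A * d2) @ A.T

    # Factor the matrix once ...
    factorization = cho_factor(M)

    # ... and reuse the same Cholesky factorization for both solves:
    # once for the predictor right-hand side, once for the corrector one.
    dlam_aff = cho_solve(factorization, rhs_predictor)
    dlam_cor = cho_solve(factorization, rhs_corrector)
    return dlam_aff, dlam_cor
</syntaxhighlight>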
 
== Derivation ==
The derivation in this section follows the outline given by Nocedal and Wright.<ref name=":0">{{Cite book|title=Numerical Optimization|last1=Nocedal|first1=Jorge|last2=Wright|first2=Stephen J.|publisher=Springer|year=2006|isbn=978-0-387-30303-1|___location=United States of America|pages=392–417, 448–496}}</ref>
 
=== Predictor step – Affine scaling direction ===
The first-order optimality (KKT) conditions of the standard-form linear program can be written as the root-finding problem

<math>F(x,\lambda,s) = \begin{bmatrix} A^T\lambda + s - c \\ Ax - b \\ XSe \end{bmatrix} = 0, \qquad (x,s) \geq 0,</math>

where <math>X = \operatorname{diag}(x)</math>, <math>S = \operatorname{diag}(s)</math> and <math>e = (1,\dots,1)^T</math>.
 
The predictor–corrector method then uses [[Newton's method]] to obtain the [[affine scaling]] direction, which is found by solving the following system of linear equations:
 
<math>J(x,\lambda,s) \begin{bmatrix} \Delta x^\text{aff}\\\Delta\lambda^\text{aff} \\\Delta s^\text{aff}\end{bmatrix} = -F(x,\lambda,s)</math>
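A small dense sketch of this Newton step, assuming the standard-form data <math>(A,b,c)</math> and the definition of <math>F</math> above; the function name and the explicit assembly of <math>J</math> are illustrative only (practical codes eliminate <math>\Delta s</math> and solve a reduced system instead).

<syntaxhighlight lang="python">
import numpy as np

def affine_scaling_direction(A, b, c, x, lam, s):
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)
    e = np.ones(n)

    # KKT residual F(x, lambda, s) of the standard-form linear program.
    F = np.concatenate([A.T @ lam + s - c, A @ x - b, X @ S @ e])

    # Jacobian J of F, with the zero blocks written out explicitly.
    J = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [S,                np.zeros((n, m)), X],
    ])

    # Newton step: solve J * delta = -F.
    delta = np.linalg.solve(J, -F)
    return delta[:n], delta[n:n + m], delta[n + m:]
</syntaxhighlight>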
 
== Step lengths ==
In practical implementations, a version of [[line search]] is performed to obtain the maximal step length that can be taken in the search direction without violating nonnegativity, <math>(x,s) \geq 0</math>.<ref name=":0" />
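A sketch of the usual ratio test behind this computation is given below; the damping factor 0.995 is a common implementation choice rather than part of the method's definition.

<syntaxhighlight lang="python">
import numpy as np

def max_step_lengths(x, s, dx, ds, damping=0.995):
    # Largest alpha in (0, 1] with x + alpha*dx >= 0: only components with
    # dx < 0 restrict the step.
    neg = dx < 0
    alpha_primal = min(1.0, damping * np.min(-x[neg] / dx[neg])) if neg.any() else 1.0

    # The same ratio test applied to the dual slack variables s.
    neg = ds < 0
    alpha_dual = min(1.0, damping * np.min(-s[neg] / ds[neg])) if neg.any() else 1.0

    return alpha_primal, alpha_dual
</syntaxhighlight>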
 
== Adaptation to quadratic programming ==
{{DEFAULTSORT:Mehrotra predictor-corrector method}}
[[Category:Optimization algorithms and methods]]
[[Category:Linear programming]]