In [[mathematical optimization]], the '''reduced gradient method''' of Frank and Wolfe is an [[iterative method]] for [[nonlinear programming]]. Also known as the '''Frank–Wolfe algorithm''' and the '''convex combination algorithm''', the reduced gradient method was proposed by Marguerite Frank and [[Philip Wolfe (mathematician)|Philip Wolfe]] in&nbsp;1956<ref>{{cite journal |last1=Frank |first1=M. |last2=Wolfe |first2=P. |title=An algorithm for quadratic programming |url=http://onlinelibrary.wiley.com/doi/10.1002/nav.3800030109/pdf |journal=Naval Research Logistics Quarterly |volume=3 |issue=1–2 |year=1956 |pages=95–110 |doi=10.1002/nav.3800030109}}</ref> as an [[algorithm]] for [[quadratic programming]]. In [[simplex algorithm|phase one]], the reduced gradient method finds a [[linear programming#feasible solution|feasible solution]] to the [[linear programming#feasible region|linear constraints]], if one exists. Thereafter, at each iteration, the method takes a [[gradient descent|descent step]] in the [[gradient descent|negative gradient direction]], thereby reducing the [[objective function]]; this [[gradient descent]] step is "reduced" so that the iterate remains in the [[polyhedron|polyhedral]] feasible region of the linear constraints. Because quadratic programming generalizes linear programming, the reduced gradient method generalizes Dantzig's [[simplex algorithm]] for [[linear programming]].
 
Although introduced for quadratic programming, the reduced gradient method is not restricted to it: it is an iterative method for general [[nonlinear programming]]. While the method is slower than competing methods and has been abandoned as a general-purpose method for nonlinear programming, it remains widely used for specially structured problems of large-scale optimization. In particular, the reduced gradient method remains popular and effective for finding approximate [[flow network|minimum-cost flow]]s in [[transportation network]]s, which often have enormous size.
 
==Algorithm==
 
Suppose <math>f</math> is a [[convex function|convex]], continuously [[differentiable function|differentiable]] function and <math>\mathbf{P}</math> is a bounded [[polyhedron]]; the problem is to minimize <math>f(x)</math> over <math>x \in \mathbf{P}</math>.

'''Step 1.''' ''Initialization.'' Let <math>k \leftarrow 0</math> and let <math>x_k</math> be any point in <math>\mathbf{P}</math>.

'''Step 2.''' ''Direction-finding subproblem.'' Find <math>\bar{x}_k</math> solving
: minimize <math>\bar{x}^{T} \nabla f(x_k)</math> subject to <math>\bar{x} \in \mathbf{P}</math>.
This is a linear program: it minimizes the first-order [[Taylor series|Taylor approximation]] of <math>f</math> around <math>x_k</math> over the feasible region.

'''Step 3.''' ''Step-size determination.'' Set <math>\gamma_k \leftarrow \tfrac{2}{k+2}</math>, or find the <math>\gamma_k \in [0, 1]</math> that minimizes <math>f(x_k + \gamma_k(\bar{x}_k - x_k))</math>.

'''Step 4.''' ''Update.'' Let <math>x_{k+1} \leftarrow x_k + \gamma_k(\bar{x}_k - x_k)</math>, let <math>k \leftarrow k + 1</math>, and return to Step 2.
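
The following minimal sketch in [[Python (programming language)|Python]] illustrates one way the steps can be implemented, assuming the polyhedron <math>\mathbf{P}</math> is described by linear inequalities <math>Ax \le b</math> and using <code>scipy.optimize.linprog</code> for the direction-finding subproblem; the names <code>frank_wolfe</code> and <code>grad_f</code> are illustrative rather than part of any standard library.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def frank_wolfe(grad_f, A, b, x0, num_iters=100):
    """Minimize a smooth convex f over P = {x : A @ x <= b}.

    grad_f    -- callable returning the gradient of f at a point
    A, b      -- linear inequalities defining the bounded polyhedron P
    x0        -- a feasible starting point (Step 1)
    num_iters -- number of iterations to run
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = grad_f(x)
        # Step 2: minimize the linearization g . s over P (a linear program).
        # bounds=(None, None) lifts linprog's default nonnegativity bounds;
        # P must be bounded for this subproblem to have a finite minimizer.
        s = linprog(c=g, A_ub=A, b_ub=b, bounds=(None, None)).x
        # Step 3: standard diminishing step size (a line search also works).
        gamma = 2.0 / (k + 2.0)
        # Step 4: move toward the vertex found in Step 2.
        x = x + gamma * (s - x)
    return x

# Example: minimize ||x - (2, 2)||^2 over the box 0 <= x <= 1.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_star = frank_wolfe(lambda x: 2 * (x - np.array([2.0, 2.0])), A, b, x0=[0.0, 0.0])
print(x_star)  # approaches (1, 1)
</syntaxhighlight>

Because each iterate is a convex combination of points of <math>\mathbf{P}</math>, no projection step is needed to maintain feasibility.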
 
==Comments==
 
The algorithm generally makes good progress toward the optimum during the first few iterations, but convergence often slows substantially near the minimum point. For this reason, the algorithm is perhaps best used to find an approximate solution. The worst-case convergence rate is [[rate of convergence|sublinear]]; in practice, however, the convergence rate has been observed to improve when the problem has many constraints.<ref>{{cite book |last=Bertsekas |first=Dimitri |title=Nonlinear Programming |publisher=Athena Scientific |year=2003 |page=222 |isbn=1-886529-00-0}}</ref>
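
For smooth convex objectives this sublinear behaviour can be stated precisely: with the step size <math>\gamma_k = 2/(k+2)</math> of Step 3, a standard worst-case guarantee is
: <math>f(x_k) - f(x^*) \le \frac{2C_f}{k+2},</math>
where <math>x^*</math> minimizes <math>f</math> over <math>\mathbf{P}</math> and <math>C_f</math> is a curvature constant that depends on <math>f</math> and the diameter of <math>\mathbf{P}</math>; the objective gap therefore shrinks only as <math>O(1/k)</math>.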
 
==References==
 
*[http://www.math.chalmers.se/Math/Grundutb/CTH/tma946/0203/fw_eng.pdf The Frank–Wolfe algorithm], a description of the method
*[http://swutc.tamu.edu/publications/technicalreports/472840-00074-1.pdf Combined Traffic Signal Control and Traffic Assignment: Algorithms, Implementation and Numerical Results], Chungwon Lee and [[Randy B. Machemehl]], University of Texas at Austin, March 2005
==Notes==
<references/>
 
{{Optimization algorithms|convex}}
 
[[Category:Optimization methods]]
 
[[nl:Frank–Wolfe algoritme]]
 