{{Short description|Optimization algorithm}}
The '''Frank–Wolfe algorithm''' is an [[iterative method|iterative]] [[First-order approximation|first-order]] [[Mathematical optimization|optimization]] [[algorithm]] for [[constrained optimization|constrained]] [[convex optimization]]. Also known as the '''conditional gradient method''',<ref>{{Cite journal | last1 = Levitin | first1 = E. S. | last2 = Polyak | first2 = B. T. | doi = 10.1016/0041-5553(66)90114-5 | title = Constrained minimization methods | journal = USSR Computational Mathematics and Mathematical Physics | volume = 6 | issue = 5 | pages = 1 | year = 1966 }}</ref> the '''reduced gradient algorithm''' and the '''convex combination algorithm''', the method was originally proposed by [[Marguerite Frank]] and [[Philip Wolfe (mathematician)|Philip Wolfe]] in 1956.<ref>{{Cite journal | last1 = Frank | first1 = M. | last2 = Wolfe | first2 = P. | doi = 10.1002/nav.3800030109 | title = An algorithm for quadratic programming | journal = Naval Research Logistics Quarterly | volume = 3 | issue = 1–2 | pages = 95–110 | year = 1956 }}</ref> In each iteration, the Frank–Wolfe algorithm considers a [[linear approximation]] of the objective function, and moves towards a minimizer of this linear function (taken over the same domain).
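
The iteration just described can be sketched in a few lines of code. The following is a minimal illustration only, assuming the caller supplies a gradient oracle <code>grad_f</code> and a linear-minimization oracle <code>lmo</code> that returns a minimizer of a linear function over the feasible set; the function names, the diminishing step size, and the simplex example are illustrative assumptions rather than part of the original description.

<syntaxhighlight lang="python">
import numpy as np

def frank_wolfe(grad_f, lmo, x0, num_iters=100):
    """Minimal sketch of the conditional gradient (Frank-Wolfe) method.

    grad_f : callable returning the gradient of the objective at a point
    lmo    : callable returning argmin over the feasible set of <g, s>
    x0     : feasible starting point
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = grad_f(x)              # gradient of the objective at the current iterate
        s = lmo(g)                 # minimizer of the linear approximation over the feasible set
        gamma = 2.0 / (k + 2.0)    # standard diminishing step size
        x = x + gamma * (s - x)    # convex combination, so the iterate stays feasible
    return x

# Hypothetical usage: minimize ||x - b||^2 over the probability simplex,
# whose linear-minimization oracle returns the best vertex (a basis vector).
b = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - b)
simplex_lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = frank_wolfe(grad, simplex_lmo, x0=np.ones(3) / 3)
</syntaxhighlight>

Because each update is a convex combination of the current iterate and a point of the feasible set, no projection step is needed, which is the practical appeal of the method.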
==Problem statement==