While competing methods such as [[gradient descent]] for constrained optimization require a [[Projection (mathematics)|projection step]] back to the feasible set in each iteration, the Frank–Wolfe algorithm only needs the solution of a linear problem over the same set in each iteration, and automatically stays in the feasible set.
The convergence of the Frank–Wolfe algorithm is sublinear in general: the error to the optimum is <math>O(1/k)</math> after <math>k</math> iterations.
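One common way this rate is stated, assuming <math>f</math> has a Lipschitz-continuous gradient with constant <math>L</math> and the feasible set has diameter <math>d</math>, is a bound of the form
:<math>f(\mathbf{x}_k) - f(\mathbf{x}^*) \le \frac{2 L d^2}{k+2},</math>
which follows for the step size <math>\gamma_k = \tfrac{2}{k+2}</math> used in the algorithm below.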
The iterates of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has contributed to the popularity of the algorithm for sparse greedy optimization in [[machine learning]] and [[signal processing]] problems,<ref>{{cite doi|10.1145/1824777.1824783}}</ref> as well as, for example, the optimization of [[flow network|minimum–cost flow]]s in [[transportation network]]s.
==Problem statement==
:Minimize <math>f(\mathbf{x})</math>
:subject to <math>\mathbf{x} \in \mathcal{D}</math>,
where the function <math>f \colon \mathcal{D} \to \mathbb{R}</math> is convex and differentiable, and the feasible set <math>\mathcal{D}</math> is a compact convex subset of a vector space.
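For example, if <math>\mathcal{D}</math> is the standard probability simplex in <math>\mathbb{R}^n</math>, then for any vector <math>\mathbf{c}</math> the linear problem
:<math>\min_{\mathbf{s} \in \mathcal{D}} \mathbf{c}^T \mathbf{s}</math>
is solved by the vertex <math>\mathbf{e}_i</math> with <math>i = \arg\min_j c_j</math>, so the direction-finding subproblem of each iteration (Step 1 below) reduces to finding a smallest coordinate of the gradient.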
==Algorithm==
[[File:Frank-Wolfe-Algorithm.png|thumbnail|right|A step of the Frank-Wolfe algorithm]]
'''Initialization.''' Let <math>k \leftarrow 0</math>, and let <math>\mathbf{x}_0 \in \mathcal{D}</math> be any point in the feasible set.
'''Step 1.''' Direction-finding subproblem. Find <math>\mathbf{s}_k</math> that minimizes the approximation of the problem obtained by replacing the function <math>f</math> with its first-order [[Taylor series|Taylor expansion]] around <math>\mathbf{x}_k</math>:
:Minimize <math>\mathbf{s}^T \nabla f(\mathbf{x}_k)</math>
:Subject to <math>\mathbf{s} \in \mathcal{D}</math>
'''Step 2.''' Step size determination. Set <math>\gamma \leftarrow \frac{2}{k+2}</math>, or alternatively find <math>\gamma</math> that minimizes <math>f(\mathbf{x}_k + \gamma(\mathbf{s}_k - \mathbf{x}_k))</math> subject to <math>0 \le \gamma \le 1</math>.
'''Step 3.''' Update. Let <math>\mathbf{x}_{k+1} \leftarrow \mathbf{x}_k + \gamma(\mathbf{s}_k - \mathbf{x}_k)</math>, let <math>k \leftarrow k+1</math>, and go to Step 1.
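A minimal sketch of these steps in Python, assuming the feasible set is the standard probability simplex so that Step 1 has the closed-form vertex solution described above; the function <code>frank_wolfe_simplex</code> and the quadratic test objective are illustrative, not part of the original algorithm statement:
<syntaxhighlight lang="python">
import numpy as np

def frank_wolfe_simplex(grad, x0, num_iters=100):
    """Frank-Wolfe iteration over the probability simplex (illustrative sketch).

    grad -- callable returning the gradient of f at a point
    x0   -- starting point, assumed to lie in the simplex
    """
    x = x0.copy()
    for k in range(num_iters):
        g = grad(x)
        # Step 1: the linear subproblem min_s <s, g> over the simplex
        # is solved by the vertex e_i with i = argmin_i g_i.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        # Step 2: the standard step size gamma = 2 / (k + 2).
        gamma = 2.0 / (k + 2.0)
        # Step 3: update; a convex combination, so x stays feasible.
        x = x + gamma * (s - x)
    return x

# Example: minimize ||x - c||^2 over the simplex for some target c.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
</syntaxhighlight>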
==Comments==
If the feasible set is given by a set of linear constraints, then the subproblem to be solved in each iteration becomes a [[linear programming|linear program]].
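For instance, when <math>\mathcal{D} = \{\mathbf{x} : A\mathbf{x} \le \mathbf{b}\}</math>, each iteration can hand the subproblem to an off-the-shelf LP solver. A sketch using SciPy, where the matrix <code>A</code>, vector <code>b</code>, and current gradient <code>g</code> are placeholder data:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def direction_finding_subproblem(g, A, b):
    """Solve min_s g^T s subject to A s <= b, i.e. the Frank-Wolfe
    direction-finding step when the feasible set is a polyhedron."""
    # Variables are constrained only through A s <= b; linprog's
    # default bounds are s >= 0, so they must be overridden here.
    res = linprog(c=g, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
    return res.x

# Placeholder polyhedron: the box -1 <= s_i <= 1 written as A s <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
s_k = direction_finding_subproblem(np.array([0.5, -2.0]), A, b)  # -> [-1, 1]
</syntaxhighlight>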
The algorithm generally makes good progress towards the optimum during the first few iterations, but convergence often slows down substantially close to the minimum point. For this reason the algorithm is perhaps best used to find an approximate solution. It can be shown that the worst-case convergence rate is [[sublinear]]; in practice, however, the convergence rate has been observed to improve when there are many constraints.<ref>{{cite book|first=Dimitri|last=Bertsekas|title=Nonlinear Programming|year=2003|page=222|publisher=Athena Scientific|isbn=1-886529-00-0}}</ref>
==References==
*[http://www.math.chalmers.se/Math/Grundutb/CTH/tma946/0203/fw_eng.pdf The Frank-Wolfe algorithm] description
*[http://swutc.tamu.edu/publications/technicalreports/472840-00074-1.pdf Combined Traffic Signal Control and Traffic Assignment: Algorithms, Implementation and Numerical Results], Chungwon Lee and [[Randy B. Machemehl]], University of Texas at Austin, March 2005