In mathematical optimization, the reduced gradient method of Frank and Wolfe is an iterative method for nonlinear programming. Also known as the Frank–Wolfe algorithm and the convex combination algorithm, the reduced gradient method was proposed by Marguerite Frank and Philip Wolfe in 1956[1] as an algorithm for quadratic programming. In phase one, the reduced gradient method finds a feasible solution to the linear constraints, if one exists. Thereafter, at each iteration, the method takes a descent step in the negative gradient direction, thereby reducing the objective function; this gradient descent step is "reduced" so that the iterate remains in the polyhedral feasible region defined by the linear constraints. Because quadratic programming generalizes linear programming, the reduced gradient method can be viewed as a generalization of Dantzig's simplex algorithm for linear programming.
The reduced gradient method is an iterative method for nonlinear programming that need not be restricted to quadratic programming. Although slower than competing methods and largely abandoned as a general-purpose method of nonlinear programming, it remains widely used for specially structured problems of large-scale optimization. In particular, the reduced gradient method remains popular and effective for finding approximate minimum-cost flows in transportation networks, which are often of enormous size.
Problem statement
- Minimize f(x) = ½ xᵀEx + hᵀx
- subject to x ∈ P,
where the n×n matrix E is positive semidefinite, h is an n×1 vector, and P is a feasible region defined by a mix of linear inequality and equality constraints (for example Ax ≤ b, Cx = d).
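For concreteness, the quadratic objective above can be evaluated directly; the following minimal sketch uses plain Python lists for E, h, and x (the function name is illustrative, not from the original sources):

```python
def quad_objective(E, h, x):
    """Evaluate f(x) = 1/2 x'Ex + h'x for the quadratic program above.

    E is an n-by-n positive-semidefinite matrix given as nested lists,
    h and x are length-n lists.
    """
    n = len(x)
    # Matrix-vector product Ex, computed row by row.
    Ex = [sum(E[i][j] * x[j] for j in range(n)) for i in range(n)]
    # Quadratic term 1/2 x'Ex plus linear term h'x.
    return 0.5 * sum(Ex[i] * x[i] for i in range(n)) + \
        sum(h[i] * x[i] for i in range(n))
```

For example, with E = [[2, 0], [0, 1]] and h = [-1, -1], the objective at x = (1, 0) is 0 and at x = (0, 1) is -0.5.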
Algorithm
Initialization. Let k = 0 and let x_0 be any point in P.
Step 1. Direction-finding subproblem. Replace the function f with its first-order Taylor expansion around x_k and solve the resulting linear program for s_k:
- Minimize sᵀ∇f(x_k)
- Subject to s ∈ P
(Minimizing the full linearization f(x_k) + ∇f(x_k)ᵀ(s − x_k) over s is equivalent, since the terms not involving s are constant.)
Step 2. Step-size determination. Set γ = 2/(k + 2), or alternatively find the γ ∈ [0, 1] that minimizes f(x_k + γ(s_k − x_k)).
Step 3. Update. Let x_{k+1} = x_k + γ(s_k − x_k), let k = k + 1, and go back to Step 1.
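The three steps above can be sketched in a few lines of Python. As an illustrative special case (not from the original sources), take P to be the probability simplex {x : x ≥ 0, Σx_i = 1}; there the direction-finding subproblem is solved exactly by the simplex vertex with the smallest gradient component, so no LP solver is needed:

```python
def frank_wolfe_simplex(E, h, n_iters=2000):
    """Minimize f(x) = 1/2 x'Ex + h'x over the probability simplex
    {x : x >= 0, sum(x) = 1} by the Frank-Wolfe iteration.

    Over the simplex, the linear subproblem min_s s . grad f(x_k)
    is attained at the vertex e_i with the smallest gradient
    component, which replaces the LP solver of the general method.
    """
    n = len(h)
    x = [1.0 / n] * n                  # start at the barycentre, which is feasible
    for k in range(n_iters):
        # Gradient of the quadratic objective: grad f(x) = Ex + h.
        g = [sum(E[i][j] * x[j] for j in range(n)) + h[i] for i in range(n)]
        # Step 1: direction-finding -- best vertex s = e_{i_best}.
        i_best = min(range(n), key=g.__getitem__)
        # Step 2: open-loop step size gamma = 2 / (k + 2).
        gamma = 2.0 / (k + 2)
        # Step 3: update x <- x + gamma * (s - x).
        x = [(1.0 - gamma) * xi for xi in x]
        x[i_best] += gamma
    return x
```

With E = [[2, 0], [0, 1]] and h = [-1, -1], the minimizer over the simplex is (1/3, 2/3), and the iterates approach it while staying feasible at every step.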
Comments
The algorithm generally makes good progress towards the optimum during the first few iterations, but convergence often slows substantially near the minimum point. For this reason the algorithm is perhaps best used to find an approximate solution. The worst-case convergence rate can be shown to be sublinear; in practice, however, the convergence rate has been observed to improve when the problem has many constraints.[2]
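When the method is used to find such an approximate solution, the linearization in Step 1 also yields a computable stopping criterion: by convexity of f, the quantity ∇f(x_k)ᵀ(x_k − s_k), often called the Frank–Wolfe duality gap, is an upper bound on f(x_k) − f(x*). A minimal sketch, assuming the feasible polytope is given by an explicit vertex list (an illustrative assumption; for large problems the gap is read off the Step 1 LP instead):

```python
def fw_gap(x, grad, vertices):
    """Frank-Wolfe duality gap  g(x) = max_{s feasible} grad . (x - s).

    For a polytope the maximum over s is attained at a vertex, so it
    suffices to scan an explicit vertex list.  Convexity of f gives
    f(x) - min f <= g(x), so iteration can stop once g(x) is small.
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return dot(grad, x) - min(dot(grad, s) for s in vertices)
```

At a minimizer the gap is zero; far from one it is large, so comparing it against a tolerance gives a certified approximate solution.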
References
- The Frank-Wolfe algorithm description
(Formulation (2) in that article should drop the constant term f(x_k) for the subsequent discussion to be valid and understandable.)
- Combined Traffic Signal Control and Traffic Assignment: Algorithms, Implementation and Numerical Results, Chungwon Lee and Randy B. Machemehl, University of Texas at Austin, March 2005
Notes
- ^ Frank, M.; Wolfe, P. (1956). "An algorithm for quadratic programming". Naval Research Logistics Quarterly. 3 (1–2): 95–110. doi:10.1002/nav.3800030109.
- ^ Bertsekas, D. (2003). Nonlinear Programming. Athena Scientific. p. 222. ISBN 1-886529-00-0.