In [[mathematical optimization]], the '''reduced gradient method''' of Frank and Wolfe is an [[iterative method]] for [[nonlinear programming]]. Also known as the '''Frank–Wolfe algorithm''' and the '''convex combination algorithm''', the reduced gradient method was proposed by [[Marguerite Frank]] and [[Philip Wolfe (mathematician)|Philip Wolfe]] in 1956 as an [[algorithm]] for solving [[quadratic programming]] problems. In [[simplex algorithm|phase one]], the reduced gradient method finds a [[linear programming#feasible solution|feasible solution]] to the [[linear programming#feasible region|linear constraints]], if one exists. Thereafter, at each iteration, the method takes a [[gradient descent|descent step]] in the direction of the negative [[gradient]], so reducing the [[objective function]]; this gradient descent step is "reduced" so that the iterate remains within the [[polyhedron|polyhedral]] feasible region of the linear constraints. Because quadratic programming is a generalization of linear programming, the reduced gradient method is a generalization of Dantzig's [[simplex algorithm]] for [[linear programming]].
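The iteration can be illustrated with a short sketch. The following Python code is a minimal illustration under stated assumptions, not Frank and Wolfe's original presentation: it assumes the polyhedral feasible region is bounded and given to <code>scipy.optimize.linprog</code>, and it uses the common step size 2/(''k''&nbsp;+&nbsp;2), an illustrative choice. The function <code>frank_wolfe</code> and its parameter names are hypothetical names introduced for this sketch.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def frank_wolfe(grad, lp_constraints, x0, iterations=200):
    """Minimize a differentiable function over a bounded polyhedron.

    grad           -- callable returning the gradient of f at x
    lp_constraints -- keywords (A_ub/b_ub, A_eq/b_eq, bounds) for
                      scipy.optimize.linprog, describing the polyhedron
    x0             -- a feasible starting point (phase one assumed done)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iterations):
        # Direction-finding step: minimize the linearization of f at x
        # over the polyhedron; the optimum s is a vertex of the region.
        s = linprog(c=grad(x), **lp_constraints).x
        # The "reduced" descent step: the next iterate is a convex
        # combination of two feasible points, so it stays feasible.
        gamma = 2.0 / (k + 2)
        x += gamma * (s - x)
    return x

# Example: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.2, 0.5, 0.3])
simplex = dict(A_eq=np.ones((1, 3)), b_eq=np.array([1.0]), bounds=(0, None))
print(frank_wolfe(lambda x: 2 * (x - c), simplex, np.array([1.0, 0.0, 0.0])))
</syntaxhighlight>

Because each iterate is a convex combination of feasible points, the method needs only a linear programming subproblem over the constraints at each step, which is one reason it suits large structured problems such as network flows.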
More generally, the reduced gradient method can be applied to [[nonlinear programming]] problems beyond quadratic programming. While the method is slower than competing methods and has been abandoned as a general-purpose method for nonlinear programming, it remains widely used for specially structured problems of large-scale optimization. In particular, the reduced gradient method remains popular and effective for finding approximate [[flow network|minimum-cost flow]]s in [[transportation network]]s, which often have hundreds of thousands of nodes.