In [[mathematical optimization]], the '''reduced gradient method''' of Frank and Wolfe is an iterative method for nonlinear programming. Also known as the ''Frank–Wolfe algorithm'' and the ''convex combination algorithm'', the reduced gradient method was originally proposed by Marguerite Frank and Philip Wolfe in 1956 as an [[algorithm]] for solving [[quadratic programming]] problems with [[linear programming#feasible region|linear constraints]]. The method is initialized with a [[linear programming#feasible solution|feasible solution]] to the linear constraints. At each iteration, the objective function is linearized and the method takes a [[gradient descent|descent step]] in the negative gradient direction, reducing the [[objective function]]; this [[gradient descent]] step is "reduced" so that the iterate remains in the [[polyhedron|polyhedral]] feasible region of the linear constraints. Because quadratic programming is a generalization of linear programming, the reduced gradient method can be viewed as a generalization of Dantzig's primal [[simplex algorithm]] for [[linear programming]].
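The following is a minimal sketch of the iteration in Python, not taken from the original sources: it assumes NumPy, a probability-simplex feasible region (over which the linearized subproblem is solved exactly by picking the vertex with the smallest gradient coordinate), and the common diminishing step size <math>\gamma_k = 2/(k+2)</math>; the function name <code>frank_wolfe_simplex</code> is an illustrative choice.

<syntaxhighlight lang="python">
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=100):
    """Frank-Wolfe on the probability simplex {x : x >= 0, sum(x) = 1}.

    grad: callable returning the gradient of the objective at x.
    Over the simplex, the linearized subproblem min_s <grad(x), s>
    is solved by the vertex e_i with i = argmin_i grad(x)_i.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        # Linear minimization oracle: the best vertex of the simplex.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        # Classic diminishing step size gamma_k = 2 / (k + 2).
        gamma = 2.0 / (k + 2.0)
        # Convex combination of x and s, so the iterate stays feasible.
        x = x + gamma * (s - x)
    return x

# Example: minimize f(x) = 0.5 * ||x - b||^2 over the simplex;
# since b already lies in the simplex, the minimizer is b itself.
b = np.array([0.1, 0.7, 0.2])
x_star = frank_wolfe_simplex(lambda x: x - b, np.ones(3) / 3)
</syntaxhighlight>

Because each update is a convex combination of the current iterate and a vertex of the feasible polyhedron, every iterate is itself a convex combination of vertices, which is the origin of the name ''convex combination algorithm''.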
More generally, the reduced gradient method can be used on nonlinear programming problems in addition to quadratic programming problems. While the method is slower than competing methods and has been abandoned as a general-purpose method of nonlinear programming, it remains widely used for specially structured problems of large-scale optimization. In particular, the reduced gradient method remains popular and effective for finding approximate [[flow network|minimum-cost flow]]s in [[transportation network]]s, which often have hundreds of thousands of nodes.