Revised simplex method

This is an old revision of this page, as edited by Kxx (talk | contribs) at 15:11, 12 March 2014 (Numerical example: new section).

In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming.

The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations.[1]

Problem formulation

For the rest of the discussion, it is assumed that a linear programming problem has been converted into the following standard form:

$$\min_x \; c^{\mathrm{T}} x \quad \text{subject to} \quad Ax = b, \; x \ge 0,$$

where $x, c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Without loss of generality, it is assumed that the constraint matrix $A$ has full row rank and that the problem is feasible, i.e., there is at least one $x \ge 0$ such that $Ax = b$. If $A$ is rank-deficient, either there are redundant constraints, or the problem is infeasible. Both situations can be handled by a presolve step.
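As a concrete illustration (a hypothetical example, not one from the original article), an inequality-constrained problem such as

```latex
\min_{x_1, x_2} \; -2x_1 - 3x_2
\quad \text{subject to} \quad
x_1 + x_2 \le 4, \quad x_1 + 2x_2 \le 5, \quad x_1, x_2 \ge 0
```

can be brought into standard form by introducing slack variables $x_3, x_4 \ge 0$, giving

```latex
A = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 2 & 0 & 1 \end{bmatrix}, \qquad
b = \begin{bmatrix} 4 \\ 5 \end{bmatrix}, \qquad
c = \begin{bmatrix} -2 \\ -3 \\ 0 \\ 0 \end{bmatrix}.
```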

Algorithmic description

Optimality conditions

For linear programming, the Karush–Kuhn–Tucker conditions are both necessary and sufficient for optimality. The KKT conditions of a linear programming problem in the standard form are

$$\begin{aligned} A^{\mathrm{T}} \lambda + s &= c, \\ Ax &= b, \\ x &\ge 0, \\ s &\ge 0, \\ s^{\mathrm{T}} x &= 0, \end{aligned}$$

where $\lambda$ and $s$ are the Lagrange multipliers associated with the constraints $Ax = b$ and $x \ge 0$, respectively.[2] The last condition, which is equivalent to $s_i x_i = 0$ for all $1 \le i \le n$, is called the complementary slackness condition.

By what is sometimes known as the fundamental theorem of linear programming, a vertex $x$ of the feasible polytope can be identified with a basis $B$ of $A$ chosen from the latter's columns.[a] Since $A$ has full row rank, $B$ is nonsingular. Without loss of generality, assume that $A = \begin{bmatrix} B & N \end{bmatrix}$. Then $x$ is given by

$$x = \begin{bmatrix} x_B \\ x_N \end{bmatrix} = \begin{bmatrix} B^{-1} b \\ 0 \end{bmatrix},$$

where $x_B \ge 0$. Partition $c$ and $s$ accordingly into

$$c = \begin{bmatrix} c_B \\ c_N \end{bmatrix}, \qquad s = \begin{bmatrix} s_B \\ s_N \end{bmatrix}.$$

To satisfy the complementary slackness condition, let $s_B = 0$. It follows that

$$B^{\mathrm{T}} \lambda = c_B, \qquad \lambda = \left(B^{\mathrm{T}}\right)^{-1} c_B,$$

which implies that

$$N^{\mathrm{T}} \lambda + s_N = c_N, \qquad s_N = c_N - N^{\mathrm{T}} \lambda.$$

If $s_N \ge 0$ at this point, the KKT conditions are satisfied, and thus $x$ is optimal.
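The optimality test above can be sketched numerically. The LP data and the dense NumPy implementation below are illustrative assumptions, not part of the article:

```python
import numpy as np

# Hypothetical standard-form LP (illustration only):
#   min -2*x1 - 3*x2   s.t.  x1 + x2 + x3 = 4,  x1 + 2*x2 + x4 = 5,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

basis = [2, 3]                        # start from the slack basis B = [A_3  A_4]
nonbasis = [0, 1]

B = A[:, basis]                       # basis matrix
N = A[:, nonbasis]

x_B = np.linalg.solve(B, b)           # x_B = B^{-1} b
lam = np.linalg.solve(B.T, c[basis])  # solves B^T lambda = c_B
s_N = c[nonbasis] - N.T @ lam         # reduced costs s_N = c_N - N^T lambda

print(s_N)                            # negative entries: s_N >= 0 fails, not optimal
```

Here $s_N = (-2, -3)$ has negative entries, so the slack basis is feasible but not optimal and a pivot is required.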

Pivot operation

If the KKT conditions are violated, a pivot operation consisting of introducing a column of $N$ into the basis at the expense of an existing column in $B$ is performed. In the absence of degeneracy, a pivot operation always results in a strict decrease in $c^{\mathrm{T}} x$. Therefore, if the problem is bounded, the revised simplex method must terminate at an optimal vertex after repeated pivot operations because there are only a finite number of vertices.[4]

Select an index $q$ such that $s_q < 0$ as the entering index. The corresponding column of $A$, $A_q$, will be moved into the basis, and $x_q$ will be allowed to increase from zero. It can be shown that

$$\frac{\partial \left(c^{\mathrm{T}} x\right)}{\partial x_q} = s_q,$$

i.e., every unit increase in $x_q$ results in a decrease of $-s_q$ in $c^{\mathrm{T}} x$.[5] Since

$$B x_B + A_q x_q = b, \qquad x_B = B^{-1} \left(b - A_q x_q\right),$$

$x_B$ must be correspondingly decreased by $\Delta x_B = B^{-1} A_q x_q$ subject to $x_B - \Delta x_B \ge 0$. Let $d = B^{-1} A_q$. If $d \le 0$, no matter how much $x_q$ is increased, $x_B - \Delta x_B$ will stay nonnegative. Hence, $c^{\mathrm{T}} x$ can be arbitrarily decreased, and thus the problem is unbounded. Otherwise, select an index $p = \operatorname{arg\,min}_{1 \le i \le m,\, d_i > 0} (x_B)_i / d_i$ as the leaving index. This choice effectively increases $x_q$ from zero until $(x_B)_p$ is reduced to zero while maintaining feasibility. The pivot operation concludes with replacing $A_p$ with $A_q$ in the basis.
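The ratio test and the basis exchange can be sketched as follows, continuing the same hypothetical LP (degeneracy and ties are ignored in this sketch):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])

basis = [2, 3]                       # current basis: the slack columns
q = 1                                # entering index, chosen because s_q < 0

B = A[:, basis]
x_B = np.linalg.solve(B, b)          # current basic solution
d = np.linalg.solve(B, A[:, q])      # d = B^{-1} A_q

if np.all(d <= 0):
    print("problem is unbounded")    # x_q could grow without bound
else:
    ratios = np.where(d > 0, x_B / d, np.inf)   # (x_B)_i / d_i over d_i > 0
    p = int(np.argmin(ratios))       # position of the leaving variable
    step = ratios[p]                 # largest feasible increase of x_q
    basis[p] = q                     # pivot: A_q replaces the leaving column
    print(p, step, basis)            # 1 2.5 [2, 1]
```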

Numerical example
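Putting the optimality test and the pivot operation together gives a minimal end-to-end sketch. The problem data and implementation details (dense solves, tolerance choices) are illustrative assumptions, not prescribed by the method:

```python
import numpy as np

def revised_simplex(A, b, c, basis, max_iters=100):
    """Minimal revised simplex sketch: assumes a feasible starting basis,
    nondegeneracy, and dense linear algebra throughout."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    basis = list(basis)
    for _ in range(max_iters):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)             # x_B = B^{-1} b
        lam = np.linalg.solve(B.T, c[basis])    # B^T lambda = c_B
        s = c - A.T @ lam                       # reduced costs; s_B = 0
        q = int(np.argmin(s))
        if s[q] >= -1e-9:                       # s >= 0: KKT holds, x optimal
            x = np.zeros_like(c)
            x[basis] = x_B
            return x
        d = np.linalg.solve(B, A[:, q])         # d = B^{-1} A_q
        if np.all(d <= 1e-9):
            raise ValueError("problem is unbounded")
        ratios = np.full_like(x_B, np.inf)
        mask = d > 1e-9
        ratios[mask] = x_B[mask] / d[mask]      # ratio test over d_i > 0
        basis[int(np.argmin(ratios))] = q       # pivot
    raise RuntimeError("iteration limit reached")

# Hypothetical LP:  min -2*x1 - 3*x2
#                   s.t. x1 + x2 + x3 = 4,  x1 + 2*x2 + x4 = 5,  x >= 0
A = [[1, 1, 1, 0],
     [1, 2, 0, 1]]
x = revised_simplex(A, [4, 5], [-2, -3, 0, 0], basis=[2, 3])
print(x)                                        # [3. 1. 0. 0.]
```

Starting from the slack basis, the sketch pivots twice (bringing in $x_2$, then $x_1$) before the reduced costs become nonnegative at the vertex $x = (3, 1, 0, 0)$ with objective value $-9$.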

Practical issues

Degeneracy

Because the revised simplex method is mathematically equivalent to the simplex method, it also suffers from degeneracy, where a pivot operation does not result in a decrease in $c^{\mathrm{T}} x$, and a chain of pivot operations causes the basis to cycle. A perturbation or lexicographic strategy can be used to prevent cycling and guarantee termination.[6]

Basis representation

Two linear systems involving $B$ are present in each iteration of the revised simplex method:

$$B\, d = A_q, \qquad B^{\mathrm{T}} \lambda = c_B.$$

Instead of refactorizing $B$, usually an LU factorization is directly updated after each pivot operation, for which purpose there exist several strategies such as the Bartels–Golub method. However, the data representing the updates, as well as the accumulated numerical error, builds up over time and makes periodic refactorization necessary.[1][7]
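Both systems can share a single factorization of $B$, since one involves $B$ and the other $B^{\mathrm{T}}$. A sketch using SciPy's dense LU (the basis and data are taken from the hypothetical example above; production codes update the factors, e.g. by Bartels–Golub, rather than refactorizing every iteration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical basis B = [A_3  A_2], entering column A_q = A_1, and c_B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
A_q = np.array([1.0, 1.0])            # entering column
c_B = np.array([0.0, -3.0])           # basic objective coefficients

lu, piv = lu_factor(B)                # factorize B once
d = lu_solve((lu, piv), A_q)          # solves B d = A_q
lam = lu_solve((lu, piv), c_B, trans=1)   # trans=1 solves B^T lam = c_B

print(d, lam)                         # [0.5 0.5] [ 0.  -1.5]
```

One `lu_factor` call thus serves both triangular-solve directions, which is exactly why factored representations of $B$ are preferred over explicit inverses.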

Notes and references

Notes

  1. ^ The same theorem also states that the feasible polytope has at least one vertex and that there is at least one vertex which is optimal.[3]

References

  1. ^ a b Morgan 1997, §2.
  2. ^ Nocedal & Wright 2006, p. 358, Eq. 13.4.
  3. ^ Nocedal & Wright 2006, p. 363, Theorem 13.2.
  4. ^ Nocedal & Wright 2006, p. 370, Theorem 13.4.
  5. ^ Nocedal & Wright 2006, p. 369, Eq. 13.24.
  6. ^ Nocedal & Wright 2006, p. 381, §13.5.
  7. ^ Nocedal & Wright 2006, p. 372, §13.4.

Bibliography

  • Morgan, S. S. (1997). A Comparison of Simplex Method Algorithms (MSc thesis). University of Florida.
  • Nocedal, J.; Wright, S. J. (2006). Mikosch, T. V.; Resnick, S. I.; Robinson, S. M. (eds.). Numerical Optimization. Springer Series in Operations Research and Financial Engineering (2nd ed.). New York, NY, USA: Springer. ISBN 978-0-387-30303-1.