Criss-cross algorithm


In mathematical optimization, the criss-cross algorithm denotes a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems,[1] quadratic-programming problems, and linear complementarity problems.[2]

Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube.[2][3]

Comparison with the simplex algorithm of Dantzig

In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm of George Dantzig. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in "phase two", the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution).[2][4]

The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm has only one phase. Its pivoting rules are similar to the least-index pivoting rule of Robert G. Bland. Bland's rule uses only signs of coefficients rather than their (real-number) order when deciding eligible pivots. Bland's rule selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots. Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.[2][4]
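The one-phase, least-index pivoting described above can be sketched in code. The following is a toy dense-tableau implementation of Terlaky's least-index criss-cross rule for a linear program max cᵀx subject to Ax ≤ b, x ≥ 0, using exact rational arithmetic to avoid rounding issues; it is an illustrative sketch (function and variable names are the author's own), not an efficient or definitive implementation:

```python
from fractions import Fraction

def criss_cross(A, b, c):
    """Toy least-index criss-cross method for: max c^T x, A x <= b, x >= 0.

    Returns ('optimal', x), ('infeasible',), or ('unbounded',) (the last
    meaning the dual is infeasible, so the primal is unbounded if feasible).
    """
    m, n = len(A), len(c)
    N = n + m                        # variables 0..n-1, slacks n..N-1
    # Tableau rows [A | I | b]; z holds reduced costs plus -objective.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == k else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    z = [Fraction(c[j]) for j in range(n)] + [Fraction(0)] * (m + 1)
    basis = list(range(n, N))        # slack basis; need NOT be feasible

    def pivot(r, s):
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        if z[s] != 0:
            f = z[s]
            z[:] = [v - f * w for v, w in zip(z, T[r])]
        basis[r] = s

    while True:
        row_of = {v: i for i, v in enumerate(basis)}
        # Least index that is primal infeasible (basic, negative value)
        # or dual infeasible (nonbasic, positive reduced cost).
        k = next((v for v in range(N)
                  if (v in row_of and T[row_of[v]][-1] < 0)
                  or (v not in row_of and z[v] > 0)), None)
        if k is None:                # no infeasible index left: optimal
            x = [Fraction(0)] * n
            for i, v in enumerate(basis):
                if v < n:
                    x[v] = T[i][-1]
            return ('optimal', x)
        if k in row_of:              # k leaves; least-index entering variable
            r = row_of[k]
            s = next((j for j in range(N)
                      if j not in row_of and T[r][j] < 0), None)
            if s is None:            # the row certifies primal infeasibility
                return ('infeasible',)
            pivot(r, s)
        else:                        # k enters; least-index leaving variable
            cand = [i for i in range(m) if T[i][k] > 0]
            if not cand:             # the column certifies dual infeasibility
                return ('unbounded',)
            pivot(min(cand, key=lambda i: basis[i]), k)
```

Note that neither kind of pivot maintains primal or dual feasibility, so the method can start from the slack basis even when some b_i < 0, with no phase-one problem; finiteness rests on the least-index tie-breaking.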

Properties

Like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. The original criss-cross algorithm of Terlaky visits all the 2^D corners of a (perturbed) cube in dimension D, according to a paper of Roos; Roos's paper modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^D steps.[2][5][3]
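One common statement of the Klee–Minty construction (the exact coefficients vary across the literature; this follows a widely cited textbook form) is the linear program in D variables:

```latex
\begin{aligned}
\text{maximize}\quad & \sum_{j=1}^{D} 2^{\,D-j}\, x_j \\
\text{subject to}\quad & \sum_{j=1}^{i-1} 2^{\,i-j+1}\, x_j + x_i \le 5^{i},
    \qquad i = 1, \dots, D,\\
& x_j \ge 0, \qquad j = 1, \dots, D.
\end{aligned}
```

Starting from x = 0, the simplex algorithm with Dantzig's most-negative-reduced-cost rule visits all 2^D vertices of this deformed cube, taking 2^D − 1 pivots before reaching the optimum at (0, …, 0, 5^D).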

The criss-cross algorithm was published independently by Tamás Terlaky[6] and by Zhe-Min Wang;[7] related algorithms appeared in unpublished reports by other authors. The criss-cross algorithm is often studied using the theory of oriented matroids (OMs), which is a combinatorial abstraction of linear-optimization theory.[8] Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. Much literature on the criss-cross algorithm concerns oriented matroids. Historically, OM algorithms for quadratic-programming problems and linear-complementarity problems had been published by Michael J. Todd before Terlaky and Wang published their criss-cross algorithms.[9] However, Todd's pivoting rule cycles on nonrealizable oriented matroids (and only on nonrealizable oriented matroids); such cycling does not stump the OM criss-cross algorithms of Terlaky and Wang. There are oriented-matroid variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem.[2][10]

Summary

The criss-cross algorithm is a simply stated algorithm for linear programming. It was the second earliest fully combinatorial algorithm for linear programming; unlike the first algorithm for oriented matroids by Todd, the criss-cross algorithm does not cycle on nonrealizable oriented-matroid problems. It is not a polynomial-time algorithm, however. Researchers have extended the criss-cross algorithm to many optimization problems, including linear-fractional programming.[1] The criss-cross algorithm can solve quadratic programming problems and linear complementarity problems in the setting of oriented matroids.

See also

  • Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda)

Notes

  1. ^ a b Illés, Szirmai & Terlaky (1999)
  2. ^ a b c d e f Fukuda & Terlaky (1997)
  3. ^ a b Klee, Victor; Minty, George J. (1972). "How good is the simplex algorithm?". In Shisha, Oved (ed.). Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin). New York-London: Academic Press. pp. 159–175. MR 0332165.
  4. ^ a b Terlaky & Zhang (1993)
  5. ^ Roos (1990)
  6. ^ Terlaky (1985) and Terlaky (1987)
  7. ^ Wang (1987)
  8. ^ The theory of oriented matroids was initiated by R. Tyrrell Rockafellar to describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired by Albert W. Tucker's studies of such sign patterns in "Tucker tableaux". (Rockafellar 1969)
  9. ^ Todd, Michael J. (1985). "Linear and quadratic programming in oriented matroids". Journal of Combinatorial Theory. Series B. 39 (2): 105–133. doi:10.1016/0095-8956(85)90042-5. MR 0811116.
  10. ^ Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). "10 Linear programming". Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 9780521777506. MR 1744046.

References