Criss-cross algorithm

This is an old revision of this page, as edited by Kiefer.Wolfowitz at 00:13, 22 March 2011.

In mathematical optimization, the criss-cross algorithm denotes a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems, quadratic-programming problems, and linear complementarity problems.[1]

Comparison with the simplex algorithm of Dantzig

In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm of George Dantzig. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in "phase two", the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution).[1]

The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm has only one phase. Its pivoting rules are similar to the least-index pivoting rule of Robert G. Bland. However, Bland's rule selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots. Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.[1]
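The one-phase, sign-based pivoting described above can be sketched in Python. This is a minimal illustrative sketch, not taken from the article's sources: it assumes a linear program of the form min c·x subject to Ax ≤ b, x ≥ 0, and the function name, tolerance, and dense full-tableau representation are choices made for exposition only.

```python
def criss_cross(A, b, c):
    """Least-index criss-cross sketch for: min c.x s.t. A x <= b, x >= 0.
    Illustrative only: dense tableau, no numerical safeguards."""
    EPS = 1e-9
    m, n = len(A), len(c)
    # Tableau rows [A | I | b]; the m slack variables (indices n..n+m-1)
    # form the starting basis, which need NOT be primal feasible.
    T = [[float(v) for v in A[i]]
         + [1.0 if k == i else 0.0 for k in range(m)]
         + [float(b[i])] for i in range(m)]
    # Reduced-cost row; the last entry tracks the negated objective value.
    z = [float(v) for v in c] + [0.0] * m + [0.0]
    basis = list(range(n, n + m))

    def pivot(i, j):
        p = T[i][j]
        T[i] = [v / p for v in T[i]]
        for k in range(m):
            if k != i and T[k][j] != 0.0:
                f = T[k][j]
                T[k] = [v - f * w for v, w in zip(T[k], T[i])]
        f = z[j]
        z[:] = [v - f * w for v, w in zip(z, T[i])]
        basis[i] = j

    while True:
        # Smallest-index variable that is primal- or dual-infeasible.
        cand = [(basis[i], 'basic', i) for i in range(m) if T[i][-1] < -EPS]
        cand += [(j, 'nonbasic', j) for j in range(n + m)
                 if j not in basis and z[j] < -EPS]
        if not cand:                       # no infeasibility left: optimal
            x = [0.0] * (n + m)
            for i in range(m):
                x[basis[i]] = T[i][-1]
            return x[:n], -z[-1]
        r, kind, pos = min(cand)
        if kind == 'basic':
            # r leaves; entering variable is the least-index nonbasic
            # column with a NEGATIVE entry in r's row (a sign test only).
            cols = [j for j in range(n + m)
                    if j not in basis and T[pos][j] < -EPS]
            if not cols:
                raise ValueError("problem is infeasible")
            pivot(pos, min(cols))
        else:
            # r enters; leaving variable is the least-index basic
            # variable with a POSITIVE entry in r's column.
            rows = [i for i in range(m) if T[i][pos] > EPS]
            if not rows:
                raise ValueError("problem is unbounded")
            pivot(min(rows, key=lambda i: basis[i]), pos)
```

Note that the sketch starts from the all-slack basis even when that basis is primal infeasible (for example, with the constraint -x1 - x2 ≤ -1): no phase-one problem is solved, and every pivot choice uses only least indices and the signs of tableau entries.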

Oriented matroids

The criss-cross algorithm was published independently by Tamás Terlaky[2] and by Zhe-Min Wang;[3] related algorithms appeared in unpublished reports by other authors. Their publications considered linear programming in the abstract setting of oriented matroids (OMs).[4] Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. Much of the literature on the criss-cross algorithm concerns oriented matroids. Historically, OM algorithms for quadratic-programming problems and linear-complementarity problems had been published by Michael J. Todd before Terlaky and Wang published their criss-cross algorithms. However, Todd's pivoting rule cycles on nonrealizable oriented matroids (and only on nonrealizable oriented matroids); such cycling does not stump the OM criss-cross algorithms of Terlaky and Wang. There are oriented-matroid variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem.[1][5]

Like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. The original criss-cross algorithm of Terlaky visits all 2^D corners of a (perturbed) cube in dimension D, according to a paper of Roos (which modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^D steps).[1][6]

Summary

The criss-cross algorithm is a simply stated algorithm for linear programming of great theoretical interest. It was the second fully combinatorial algorithm for linear programming; unlike the first such algorithm, Todd's algorithm for oriented matroids, the criss-cross algorithm does not cycle on nonrealizable oriented-matroid problems. Researchers have extended the criss-cross algorithm to many optimization problems, including linear-fractional programming. The criss-cross algorithm can solve quadratic programming problems and linear complementarity problems, even in the setting of oriented matroids. It is not a polynomial-time algorithm, however.

See also

  • Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda)

Notes

  1. ^ a b c d e Fukuda & Terlaky (1997)
  2. ^ Terlaky (1985) and Terlaky (1987)
  3. ^ Wang (1987)
  4. ^ The theory of oriented matroids was initiated by R. Tyrrell Rockafellar to describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired by Albert W. Tucker's studies of such sign patterns in "Tucker tableaux". (Rockafellar 1969)
  5. ^ Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). "10 Linear programming". Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 9780521777506. MR 1744046.
  6. ^ Roos (1990)

References