{{Short description|Method for mathematical optimization}}
{{About|an algorithm for mathematical optimization||Criss-cross (disambiguation){{!}}Criss-cross}}
{{Use dmy dates}}
<!-- {{Context|date=May 2012}} -->
[[File:Unitcube.svg|thumb|right|alt=A three-dimensional cube|The criss-cross algorithm visits all 8 corners of the [[Klee–Minty cube]] in the worst case. It visits 3 additional corners on average. The Klee–Minty cube is a perturbation of the cube shown here.]]
In [[optimization (mathematics)|mathematical optimization]], the '''criss-cross algorithm''' is any of a family of [[algorithm]]s for [[linear programming]]. Variants of the criss-cross algorithm also solve more general problems with [[linear programming|linear inequality constraints]] and [[nonlinear programming|nonlinear]] [[optimization (mathematics)|objective functions]]; there are criss-cross algorithms for [[linear-fractional programming]] problems,<ref name="LF99Hyperbolic">{{harvtxt|Illés|Szirmai|Terlaky|1999}}</ref><ref name="Bibl" >{{cite journal|first=I. M.|last=Stancu-Minasian|title=A sixth bibliography of fractional programming|journal=Optimization|volume=55|number=4|date=August 2006|pages=405–428|doi=10.1080/02331930600819613|mr=2258634|s2cid=62199788}}</ref> [[quadratic programming|quadratic-programming]] problems, and [[linear complementarity problem]]s.<ref name="FukudaTerlaky" >{{harvtxt|Fukuda|Terlaky|1997}}</ref>
Like the [[simplex algorithm]] of [[George Dantzig|George B. Dantzig]], the criss-cross algorithm is not a [[time complexity|polynomial-time algorithm]] for linear programming. Both algorithms visit all 2<sup>''D''</sup> corners of a (perturbed) [[unit cube|cube]] in dimension ''D'', the [[Klee–Minty cube]] (after [[Victor Klee]] and [[George J. Minty]]), in the [[worst-case complexity|worst case]].<ref name="Roos" >{{harvtxt|Roos|1990}}</ref><ref name="KleeMinty"/> However, when it is started at a random corner, the criss-cross algorithm [[Average-case complexity|on average]] visits only ''D'' additional corners.<ref name="FTNamiki"/><ref name="FukudaNamiki"/><ref name="Borgwardt">The simplex algorithm takes on average ''D'' steps for a cube. {{harvtxt|Borgwardt|1987}}: {{cite book|last=Borgwardt|first=Karl-Heinz|title=The simplex method: A probabilistic analysis|series=Algorithms and Combinatorics (Study and Research Texts)|volume=1|publisher=Springer-Verlag|___location=Berlin|year=1987|pages=xii+268|isbn=978-3-540-17096-9|mr=868467}}</ref>
==History==
{{See also|Linear programming|Simplex algorithm|Bland's rule}}
[[File:Simplex description.png|thumb|240px|In its second phase, the ''simplex algorithm'' crawls along the edges of the polytope until it finally reaches an optimum [[vertex (geometry)|vertex]]. The ''criss-cross algorithm'' considers bases that are not associated with vertices, so that some iterates can be in the ''interior ''of the feasible region, like interior-point algorithms; the criss-cross algorithm can also have ''infeasible'' iterates ''outside'' the feasible region.]]
In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the [[simplex algorithm]]. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in "phase two", it pivots between a sequence of basic ''feasible'' solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution. The criss-cross algorithm, by contrast, requires no feasible starting basis and may pivot through primal- or dual-infeasible bases before reaching an optimal one.
The criss-cross algorithm is simpler than the simplex algorithm because it has only one phase. Its pivoting rules are similar to the [[Bland's rule|least-index pivoting rule of Bland]].<ref name="Bland">
{{cite journal|title=New finite pivoting rules for the simplex method|first=Robert G.|last=Bland|journal=Mathematics of Operations Research|volume=2|number=2|date=May 1977|pages=103–107|doi=10.1287/moor.2.2.103|jstor=3689647|mr=459599}}</ref>
While most simplex variants are monotonic in the objective (strictly so in the non-degenerate case), most variants of the criss-cross algorithm lack a monotone merit function, which can be a disadvantage in practice.
==Description==
The criss-cross algorithm works on a standard pivot tableau (or on parts of a tableau calculated on the fly, if implemented like the revised simplex method). In a general step, if the tableau is primal or dual infeasible, it selects one of the infeasible rows or columns as the pivot row or column, using an index selection rule. An important property is that the selection is made on the union of the infeasible indices, and the standard version of the algorithm does not distinguish between column and row indices (that is, between the column indices and the indices of the variables basic in the rows). If a row is selected, the algorithm uses the index selection rule to identify a position for a dual-type pivot; if a column is selected, it uses the index selection rule to find a row position and carries out a primal-type pivot.
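The step described above can be sketched in code. The following is an illustrative implementation (not taken from the cited sources) of the least-index criss-cross method on a dense tableau, for a problem of the form maximize ''c''·''x'' subject to ''Ax'' ≤ ''b'', ''x'' ≥ 0; the function name and conventions are the editor's own.

```python
import numpy as np

def criss_cross(A, b, c):
    """Least-index criss-cross method for: maximize c.x  s.t.  A x <= b, x >= 0.

    Illustrative sketch only: dense tableau, a single fixed tolerance,
    and no refinements beyond the least-index rule itself.
    """
    m, n = A.shape
    eps = 1e-9
    # Tableau rows 0..m-1 hold [A | I | b]; row m holds [-c | 0 | 0],
    # so T[m, j] is the reduced cost of column j.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[m, :n] = -c
    basis = list(range(n, n + m))            # start from the slack basis

    def pivot(r, j):
        T[r] /= T[r, j]
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, j] * T[r]
        basis[r] = j

    while True:
        row_of = {v: r for r, v in enumerate(basis)}
        # Union of primal-infeasible basic indices and
        # dual-infeasible nonbasic indices, treated uniformly.
        bad = [basis[r] for r in range(m) if T[r, -1] < -eps]
        bad += [j for j in range(n + m)
                if j not in row_of and T[m, j] < -eps]
        if not bad:                          # primal and dual feasible: optimal
            x = np.zeros(n + m)
            for r, v in enumerate(basis):
                x[v] = T[r, -1]
            return "optimal", x[:n], T[m, -1]
        k = min(bad)                         # least-index selection
        if k in row_of:                      # k is basic: dual-type pivot
            r = row_of[k]
            cols = [j for j in range(n + m)
                    if j not in row_of and T[r, j] < -eps]
            if not cols:                     # no eligible column: primal infeasible
                return "infeasible", None, None
            pivot(r, min(cols))
        else:                                # k is nonbasic: primal-type pivot
            rows = [r for r in range(m) if T[r, k] > eps]
            if not rows:                     # dual infeasible (unbounded if feasible)
                return "unbounded", None, None
            pivot(min(rows, key=lambda r: basis[r]), k)
```

For the small example maximize 2''x''<sub>0</sub> + 3''x''<sub>1</sub> subject to ''x''<sub>0</sub> + ''x''<sub>1</sub> ≤ 4 and ''x''<sub>0</sub> + 3''x''<sub>1</sub> ≤ 6, this sketch passes through a primal-infeasible basis on its way to the optimum (3, 1) with value 9, illustrating that intermediate bases need not be feasible.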
==Computational complexity: Worst and average cases==
The [[time complexity]] of an [[algorithm]] counts the number of [[arithmetic operation]]s sufficient for the algorithm to solve the problem. For example, [[Gaussian elimination]] requires on the [[Big oh|order of]] ''D''<sup>3</sup> operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a [[cubic polynomial]]. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called [[Buchberger's algorithm]] has for its complexity an <!--doubly --> exponential function of the problem data (the [[degree of a polynomial|degree of the polynomial]]s and the number of variables of the [[multivariate polynomial]]s). Because exponential functions eventually grow much faster than polynomial functions, an<!-- attained rather than upper bound --> exponential complexity implies that an algorithm has slow performance on large problems.
Several algorithms for linear programming—[[Khachiyan]]'s [[ellipsoidal algorithm]], [[Karmarkar]]'s [[Karmarkar's algorithm|projective algorithm]], and [[interior-point method|central-path algorithm]]s—have polynomial time-complexity (in the [[worst case complexity|worst case]] and thus [[average case complexity|on average]]). The ellipsoidal and projective algorithms were published before the criss-cross algorithm.
However, like the simplex algorithm of Dantzig, the criss-cross algorithm is ''not'' a polynomial-time algorithm for linear programming. Terlaky's criss-cross algorithm visits all the 2<sup>''D''</sup> corners of a (perturbed) cube in dimension ''D'', according to a paper of Roos; Roos's paper modifies the [[Victor Klee|Klee]]–Minty construction of a [[unit cube|cube]] on which the simplex algorithm takes 2<sup>''D''</sup> steps.<ref name="FukudaTerlaky"/><ref name="Roos"/><ref name="KleeMinty">{{cite book|title=Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin)|editor-first=Oved|editor-last=Shisha|publisher=Academic Press|___location=New York-London|year=1972|mr=332165|last1=Klee|first1=Victor|last2=Minty|first2=George J.|chapter=How good is the simplex algorithm?|pages=159–175}}</ref>
However, when it is initialized at a random corner of the cube, the criss-cross algorithm visits on average only ''D'' additional corners, according to a 1994 paper by [[Komei Fukuda|Fukuda]] and Namiki.<ref name="FTNamiki" >{{harvtxt|Fukuda|Terlaky|1997|p=385}}</ref><ref name="FukudaNamiki" >{{harvtxt|Fukuda|Namiki|1994|p=367}}</ref> Trivially, the simplex algorithm takes on average ''D'' steps for a cube.<ref name="Borgwardt"/><ref>More generally, for the simplex algorithm, the expected number of steps is proportional to ''D'' for linear-programming problems that are randomly drawn from the [[Euclidean metric|Euclidean]] [[unit sphere]], as proved by Borgwardt and by [[Stephen Smale|Smale]].</ref> Like the simplex algorithm, the criss-cross algorithm visits exactly 3 additional corners of the three-dimensional cube on average.
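For concreteness, the unperturbed Klee–Minty family can be generated as follows. This is one common textbook formulation, not the specific perturbation used in Roos's construction, and the function name is the editor's own.

```python
import numpy as np

def klee_minty(D):
    """One common formulation of the Klee-Minty cube in dimension D:

        maximize    sum_j 2^(D-1-j) * x_j
        subject to  sum_{j<i} 2^(i-j+1) * x_j + x_i <= 5^(i+1)   (0 <= i < D)
                    x >= 0

    The feasible region is a combinatorial cube with 2^D vertices.
    """
    A = np.eye(D)                      # the x_i term on constraint i
    for i in range(D):
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)
    b = np.array([5.0 ** (i + 1) for i in range(D)])
    c = np.array([2.0 ** (D - 1 - j) for j in range(D)])
    return A, b, c
```

For ''D'' = 3 this yields the constraints ''x''<sub>0</sub> ≤ 5, 4''x''<sub>0</sub> + ''x''<sub>1</sub> ≤ 25, and 8''x''<sub>0</sub> + 4''x''<sub>1</sub> + ''x''<sub>2</sub> ≤ 125; the optimum lies at (0, …, 0, 5<sup>''D''</sup>) with value 5<sup>''D''</sup>.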
==Variants==
===Other optimization problems with linear constraints===
There are variants of the criss-cross algorithm for linear programming, for [[quadratic programming]], and for the [[linear complementarity problem|linear-complementarity problem]] with "sufficient matrices".<ref name="FukudaTerlaky"/><ref name="FTNamiki"/><ref name="FukudaNamikiLCP" >{{harvtxt|Fukuda|Namiki|1994}}</ref><ref name="OMBook" >{{cite book|last1=Björner|first1=Anders|last2=Las Vergnas|first2=Michel|last3=Sturmfels|first3=Bernd|last4=White|first4=Neil|last5=Ziegler|first5=Günter|title=Oriented Matroids|publisher=Cambridge University Press|year=1999}}</ref>
===Vertex enumeration===
The criss-cross algorithm was used in an algorithm for [[Vertex enumeration problem|enumerating all the vertices of a polytope]], which was published by [[David Avis]] and [[Komei Fukuda]] in 1992.<ref>{{harvtxt|Avis|Fukuda|1992|p=297}}</ref> Avis and Fukuda presented an algorithm which finds the ''v'' vertices of a [[polyhedron]] defined by a nondegenerate system of ''n'' [[linear inequality|linear inequalities]] in ''D'' [[dimension (vector space)|dimension]]s (or, dually, the ''v'' [[facet]]s of the [[convex hull]] of ''n'' points in ''D'' dimensions, where each facet contains exactly ''D'' given points) in time [[Big Oh notation|O]](''nDv'') and O(''nD'') [[space complexity|space]].<ref>The ''v'' vertices in a simple arrangement of ''n'' [[hyperplane]]s in ''D'' dimensions can be found in O(''n''<sup>2</sup>''Dv'') time and O(''nD'') [[space complexity]].</ref>
===Oriented matroids===
[[File:max-flow min-cut example.svg|frame|right|The [[max-flow min-cut theorem]] states that the maximum flow through a network is exactly the capacity of its minimum cut. This theorem can be proved using the criss-cross algorithm for oriented matroids.]]
The criss-cross algorithm is often studied using the theory of [[oriented matroid]]s (OMs), which is a [[combinatorics|combinatorial]] abstraction of linear-optimization theory.<ref name="OMBook"/><ref>The theory of [[oriented matroid]]s was initiated by [[R. Tyrrell Rockafellar]]. {{harv|Rockafellar|1969}}:<p>{{cite book|first=R. T.|last=Rockafellar|author-link=R. Tyrrell Rockafellar|chapter=The elementary vectors of a subspace of <math>R^N</math> (1967)|pages=104–127|editor=[[R. C. Bose]] and T. A. Dowling|year=1969|title=Combinatorial Mathematics and its Applications|publisher=University of North Carolina Press|___location=Chapel Hill|mr=278972|chapter-url=https://www.math.washington.edu/~rtr/papers/rtr-ElemVectors.pdf}}</p></ref> Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. However, Bland's rule exhibits cycling on some oriented-matroid linear-programming problems.<ref name="OMBook"/> The first purely combinatorial algorithm for linear programming was devised by [[Michael J. Todd (mathematician)|Michael J. Todd]].<ref name="OMBook"/><ref name="Todd"/> Todd's algorithm was developed not only for linear programming in the setting of oriented matroids, but also for [[quadratic programming|quadratic-programming problems]] and [[linear complementarity problem|linear-complementarity problem]]s.<ref name="OMBook"/><ref name="Todd" >{{cite journal|last=Todd|first=Michael J.|title=Linear and quadratic programming in oriented matroids|journal=Journal of Combinatorial Theory|series=Series B|volume=39|number=2|year=1985|pages=105–133}}</ref>
The criss-cross algorithm and its proof of finite termination can be simply stated, and they readily extend to the setting of oriented matroids. The algorithm can be further simplified for ''linear feasibility problems'', that is, for [[linear system]]s with [[linear inequality|nonnegative variable]]s; these problems can be formulated for oriented matroids.<ref name="KT91"/> The criss-cross algorithm has been adapted for problems that are more complicated than linear programming: there are oriented-matroid variants also for the quadratic-programming problem and for the linear-complementarity problem.<ref name="FukudaTerlaky"/><ref name="FukudaNamikiLCP"/><ref name="OMBook"/>
==Notes==
{{Reflist}}
==References==
* {{cite journal|first1=David|last1=Avis|author-link1=David Avis|first2=Komei|last2=Fukuda|author-link2=Komei Fukuda|title=A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra|journal=Discrete & Computational Geometry|volume=8|number=3|year=1992|pages=295–313|doi=10.1007/BF02293050}}
* {{cite journal|first1=Zsolt|last1=Csizmadia|first2=Tibor|last2=Illés|title=New criss-cross type algorithms for linear complementarity problems with sufficient matrices|journal=Optimization Methods and Software|volume=21|year=2006|number=2|pages=247–266|doi=10.1080/10556780500095009|url=http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf|format=pdf<!--|eprint=http://www.tandfonline.com/doi/pdf/10.1080/10556780500095009-->|mr=2195759|s2cid=24418835|access-date=30 August 2011|archive-date=23 September 2015|archive-url=https://web.archive.org/web/20150923211403/http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf|url-status=dead}}
* {{cite journal|last1=Fukuda|first1=Komei|author-link1=Komei Fukuda|last2=Namiki|first2=Makoto|title=On extremal behaviors of Murty's least index method|journal=Mathematical Programming|date=March 1994|pages=365–370|volume=64|number=1|doi=10.1007/BF01582581|mr=1286455|s2cid=21476636}}
* {{cite journal|first1=Komei|last1=Fukuda|author-link1=Komei Fukuda|first2=Tamás|last2=Terlaky|author-link2=Tamás Terlaky|title=Criss-cross methods: A fresh view on pivot algorithms|journal=Mathematical Programming|series=Series B|volume=79|number=1–3|year=1997|pages=369–395}}
* {{cite journal|first1=D.|last1=den Hertog|first2=C.|last2=Roos|first3=T.|last3=Terlaky|title=The linear complementarity problem, sufficient matrices, and the criss-cross method|journal=Linear Algebra and Its Applications|volume=187|date=1 July 1993|pages=1–14|url=http://core.ac.uk/download/pdf/6714737.pdf|doi=10.1016/0024-3795(93)90124-7|mr=1221693}}
* {{cite journal|title=The finite criss-cross method for hyperbolic programming|journal=European Journal of Operational Research|volume=114|number=1|pages=198–214|year=1999|first1=Tibor|last1=Illés|first2=Ákos|last2=Szirmai|first3=Tamás|last3=Terlaky|zbl=0953.90055|id=[http://www.cas.mcmaster.ca/~terlaky/files/dut-twi-96-103.ps.gz Postscript preprint]}}
* {{cite journal|first1=Emil|last1=Klafszky|first2=Tamás|last2=Terlaky|title=The role of pivoting in proving some fundamental theorems of linear algebra|journal=Linear Algebra and Its Applications|volume=151|date=June 1991|pages=97–118|doi=10.1016/0024-3795(91)90356-2}}
* {{cite journal|last=Roos|first=C.|title=An exponential example for Terlaky's pivoting rule for the criss-cross simplex method|journal=Mathematical Programming|volume=46|year=1990|number=1|series=Series A|pages=79–84}}
* {{cite journal|last=Terlaky|first=T.|title=A convergent criss-cross method|journal=Optimization: A Journal of Mathematical Programming and Operations Research|volume=16|year=1985|number=5|pages=683–690|issn=0233-1934|doi=10.1080/02331938508843067}}
* {{cite journal|last=Terlaky|first=Tamás|title=A finite crisscross method for oriented matroids|journal=Journal of Combinatorial Theory|series=Series B|volume=42|number=3|year=1987|pages=319–327}}
* {{cite journal|last1=Terlaky|first1=Tamás|last2=Zhang|first2=Shu Zhong|title=Pivot rules for linear programming: A survey on recent theoretical developments|journal=Annals of Operations Research|volume=46–47|number=1|year=1993|pages=203–233}}
* {{cite journal|last=Wang|first=Zhe|title=A conformal elimination-free algorithm for oriented matroid programming|journal=Chinese Annals of Mathematics (Series B)|volume=8|number=1|year=1987}}
==External links==
* [https://web.archive.org/web/20110728105602/http://www.ifor.math.ethz.ch/~fukuda/ Komei Fukuda (ETH Zentrum, Zurich)] with [https://web.archive.org/web/20110728105643/http://www.ifor.math.ethz.ch/~fukuda/publ/publ.html publications]
* [http://coral.ie.lehigh.edu/~terlaky/ Tamás Terlaky (Lehigh University)] with [http://coral.ie.lehigh.edu/~terlaky/publications publications] {{Webarchive|url=https://web.archive.org/web/20110928051231/http://coral.ie.lehigh.edu/~terlaky/publications |date=28 September 2011 }}
{{Mathematical programming|state=expanded}}