Criss-cross algorithm

{{Short description|Method for mathematical optimization}}
{{About|an algorithm for mathematical optimization||Criss-cross (disambiguation){{!}}Criss-cross}}
{{Use dmy dates|date=January 2024}}
<!-- {{Context|date=May 2012}} -->
[[File:Unitcube.svg|thumb|right|alt=A three-dimensional cube|The criss-cross algorithm visits all&nbsp;8 corners of the [[Klee–Minty cube]] in the worst case. It visits&nbsp;3 additional corners on&nbsp;average. The Klee–Minty cube is a perturbation of the cube shown here.]]
{{See also|Linear programming|Simplex algorithm|Bland's rule}}
[[File:Simplex description.png|thumb|240px|In its second phase, the ''simplex algorithm'' crawls along the edges of the polytope until it finally reaches an optimum [[vertex (geometry)|vertex]]. The ''criss-cross algorithm'' considers bases that are not associated with vertices, so that some iterates can be in the ''interior'' of the feasible region, like interior-point algorithms; the criss-cross algorithm can also have ''infeasible'' iterates ''outside'' the feasible region.]]
In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the [[simplex algorithm]] of [[George Dantzig]]. The simplex algorithm first finds a (primal-) feasible basis by solving a "''phase-one'' problem"; in "phase two", the simplex algorithm pivots between a sequence of basic ''feasible'' solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution).<ref name="FukudaTerlaky"/><ref name="TerlakyZhang">{{harvtxt|Terlaky|Zhang|1993}}</ref>
 
The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm only has one phase. Its pivoting rules are similar to the [[Bland's rule|least-index pivoting rule of Bland]].<ref name="Bland">
{{cite journal|title=New finite pivoting rules for the simplex method|first=Robert G.|last=Bland|journal=Mathematics of Operations Research|volume=2|number=2|date=May 1977|pages=103–107|doi=10.1287/moor.2.2.103|jstor=3689647|mr=459599}}</ref> Bland's rule uses only [[sign function|sign]]s of coefficients rather than their [[real number#Axiomatic approach|(real-number) order]] when deciding eligible pivots. Bland's rule selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots.<ref name="Bland"/><ref>Bland's rule is also related to an earlier least-index rule, which was proposed by Katta&nbsp;G. Murty for the [[linear complementarity problem]], according to {{harvtxt|Fukuda|Namiki|1994}}.</ref> Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.<ref name="FukudaTerlaky"/><ref name="TerlakyZhang"/> The criss-cross algorithm has been applied to furnish constructive proofs of basic results in [[linear algebra]], such as <!-- [[Steinitz's theorem|Steinitz's lemma]], --> the [[Farkas lemma|lemma of Farkas]]<!-- , [[Weyl's theorem]] on the finite generation of [[convex polytope]]s by linear inequalities ([[halfspace]]s), and the [[Krein–Milman theorem|Minkowski's theorem]] on [[extreme point]]s -->.<ref name="KT91" >{{harvtxt|Klafszky|Terlaky|1991}}</ref>
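
To make the contrast concrete, the following sketch (illustrative only; the tableau data and variable names are hypothetical, not taken from the cited sources) applies Bland's entering rule followed by the usual minimum-ratio test, which compares real-number ratios, and then applies the criss-cross selection, which inspects only the signs of the same data.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical simplex tableau data (illustrative only).
reduced_costs = np.array([0.0, -2.5, 0.0, -0.1])    # variables 1 and 3 are eligible to enter
basic_index = [0, 2]                                 # variables basic in rows 0 and 1
basic_values = np.array([3.0, 4.0])                  # primal feasible, as the simplex method requires
column = {1: np.array([1.0, 2.0]), 3: np.array([0.5, -1.0])}   # tableau columns of the eligible variables

# Bland's rule: enter the least-index eligible column, then run the usual
# minimum-ratio test, which compares real-number ratios to pick the leaving row.
entering = min(j for j, d in enumerate(reduced_costs) if d < 0)                    # 1
ratios = [(basic_values[r] / column[entering][r], r)
          for r in range(len(basic_index)) if column[entering][r] > 0]
leaving_row = min(ratios)[1]                                                       # row 1 (ratio 2 < 3)

# Criss-cross rule: no feasibility requirement and no ratio test; take the least
# index over the union of primal- and dual-infeasible indices, using only signs.
dual_infeasible = [j for j, d in enumerate(reduced_costs) if d < 0]                # [1, 3]
primal_infeasible = [basic_index[r] for r, v in enumerate(basic_values) if v < 0]  # []
criss_cross_index = min(dual_infeasible + primal_infeasible)                       # 1
</syntaxhighlight>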
 
While most simplex variants are monotonic in the objective (strictly so in the non-degenerate case), most variants of the criss-cross algorithm lack a monotone merit function, which can be a disadvantage in practice.
 
==Description==
{{Expand section|date=April 2011}}
The criss-cross algorithm works on a standard pivot tableau (or on parts of a tableau that are computed on the fly, as in implementations of the revised simplex method). In a general step, if the tableau is primal or dual infeasible, the algorithm selects one of the infeasible rows or columns as the pivot row or column, using an index selection rule. An important property is that the selection is made on the union of the infeasible indices, and the standard version of the algorithm does not distinguish between row and column indices (a row being identified with the index of the variable that is basic in it). If a row is selected, the algorithm uses the index selection rule to identify a column position for a dual-type pivot; if a column is selected, it uses the index selection rule to find a row position and carries out a primal-type pivot.
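
The index-selection logic above can be illustrated with a short, self-contained sketch of the least-index criss-cross method for a linear program in the equality form "minimize ''c''·''x'' subject to ''Ax''&nbsp;=&nbsp;''b'', ''x''&nbsp;≥&nbsp;0". The sketch is illustrative only: it recomputes the tableau with dense linear algebra at every step, uses a floating-point tolerance, and its function name and interface are not standard.

<syntaxhighlight lang="python">
import numpy as np

def criss_cross(c, A, b, basis, tol=1e-9, max_iter=1000):
    """Least-index criss-cross method for: minimize c.x subject to A x = b, x >= 0.

    `basis` is any list of m column indices with A[:, basis] invertible; it need
    not be primal or dual feasible.  Returns a status string and a solution (or None).
    """
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)                 # values of the basic variables
        y = np.linalg.solve(B.T, c[basis])          # simplex multipliers
        reduced = c - A.T @ y                       # reduced costs (zero on the basis)
        nonbasis = [j for j in range(n) if j not in basis]

        # Union of infeasible indices: basic variables with negative value
        # (primal infeasible) and nonbasic variables with negative reduced cost
        # (dual infeasible).  Only signs are inspected.
        infeasible = [basis[r] for r in range(m) if x_B[r] < -tol]
        infeasible += [j for j in nonbasis if reduced[j] < -tol]
        if not infeasible:
            x = np.zeros(n)
            x[basis] = x_B
            return "optimal", x

        k = min(infeasible)                         # least-index selection rule
        if k in basis:
            # k is a primal-infeasible basic variable: dual-type pivot in its row.
            r = basis.index(k)
            row = np.linalg.solve(B, A)[r, :]
            eligible = [j for j in nonbasis if row[j] < -tol]
            if not eligible:
                return "primal infeasible", None
            basis[r] = min(eligible)                # entering variable, least index
        else:
            # k is a dual-infeasible nonbasic variable: primal-type pivot in its column.
            col = np.linalg.solve(B, A[:, k])
            eligible = [r for r in range(m) if col[r] > tol]
            if not eligible:
                return "dual infeasible", None
            r = min(eligible, key=lambda i: basis[i])   # leaving variable, least index
            basis[r] = k
    return "iteration limit reached", None
</syntaxhighlight>

For example, on the problem "minimize −''x''<sub>1</sub>&nbsp;−&nbsp;''x''<sub>2</sub> subject to ''x''<sub>1</sub>&nbsp;+&nbsp;''x''<sub>2</sub>&nbsp;+&nbsp;''x''<sub>3</sub>&nbsp;=&nbsp;1, ''x''&nbsp;≥&nbsp;0" (with ''x''<sub>3</sub> a slack variable), the sketch can be started from the slack basis, which happens to be feasible here but need not be in general:

<syntaxhighlight lang="python">
c = np.array([-1.0, -1.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
print(criss_cross(c, A, b, basis=[2]))   # ('optimal', array([1., 0., 0.]))
</syntaxhighlight>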
 
==Computational complexity: Worst and average cases==
[[File:Ellipsoid 2.png|thumb|right<!-- 400px -->|The worst-case computational complexity of Khachiyan's ''ellipsoidal algorithm'' is a polynomial. The ''criss-cross algorithm'' has exponential complexity.]]
The [[time complexity]] of an [[algorithm]] counts the number of [[arithmetic operation]]s sufficient for the algorithm to solve the problem. For example, [[Gaussian elimination]] requires on the [[Big oh|order&nbsp;of]]''&nbsp;D''<sup>3</sup> operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a [[cubic polynomial]]. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called [[Buchberger's algorithm]] has for its complexity an <!--doubly --> exponential function of the problem data (the [[degree of a polynomial|degree of the polynomial]]s and the number of variables of the [[multivariate polynomial]]s). Because exponential functions eventually grow much faster than polynomial functions, an<!-- attained rather than upper bound --> exponential complexity implies that an algorithm has slow performance on large problems.
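
As a toy illustration of such operation counting (not taken from the sources cited in this article), the multiplications performed by naive Gaussian elimination on a ''D''&nbsp;×&nbsp;''D'' system can be counted directly; the total grows like ''D''<sup>3</sup>/3, so doubling ''D'' multiplies the work by roughly eight.

<syntaxhighlight lang="python">
import numpy as np

def elimination_multiplications(D, seed=0):
    """Count the multiplications used by naive Gaussian elimination (no pivoting)
    on a random, diagonally dominated D x D system; the count grows like D**3 / 3."""
    rng = np.random.default_rng(seed)
    A = rng.random((D, D)) + D * np.eye(D)   # keeps the pivots away from zero
    count = 0
    for k in range(D - 1):
        for i in range(k + 1, D):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            count += D - k                   # one multiplication per updated entry
    return count

# Roughly an eight-fold increase each time D doubles (cubic growth):
print([elimination_multiplications(D) for D in (10, 20, 40)])   # [330, 2660, 21320]
</syntaxhighlight>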
 
 
===Other optimization problems with linear constraints===
There are variants of the criss-cross algorithm for linear programming, for [[quadratic programming]], and for the [[linear complementarity problem|linear-complementarity problem]] with "sufficient matrices";<ref name="FukudaTerlaky"/><ref name="FTNamiki"/><ref name="FukudaNamikiLCP" >{{harvtxt|Fukuda|Namiki|1994}}</ref><ref name="OMBook" >{{cite book|last1=Björner|first1=Anders|last2=Las Vergnas|first2=Michel|author2-link=Michel Las Vergnas|last3=Sturmfels|first3=Bernd|author-link3=Bernd Sturmfels|last4=White|first4=Neil|last5=Ziegler|first5=Günter|author-link5=Günter M. Ziegler|title=Oriented Matroids|chapter=10 Linear programming|publisher=Cambridge University Press|year=1999|isbn=978-0-521-77750-6|pages=417–479|doi=10.1017/CBO9780511586507|mr=1744046}}</ref><ref name="HRT">{{cite journal|first1=D. |last1=den Hertog|first2=C.|last2=Roos|first3=T.|last3=Terlaky|title=The linear complementarity problem, sufficient matrices, and the criss-cross method|journal=Linear Algebra and Its Applications|volume=187|date=1 July 1993|pages=1–14|url=https://core.ac.uk/download/pdf/6714737.pdf|doi=10.1016/0024-3795(93)90124-7|doi-access=free}}</ref><ref name="CIsufficient">{{cite journal|first1=Zsolt|last1=Csizmadia|first2=Tibor|last2=Illés|title=New criss-cross type algorithms for linear complementarity problems with sufficient matrices|journal=Optimization Methods and Software|volume=21|year=2006|number=2|pages=247–266|doi=10.1080/10556780500095009 |url=http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf|format=pdf<!--|eprint=http://www.tandfonline.com/doi/pdf/10.1080/10556780500095009-->|mr=2195759|s2cid=24418835|access-date=30 August 2011|archive-date=23 September 2015|archive-url=https://web.archive.org/web/20150923211403/http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf|url-status=dead}}</ref> conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.<ref name="HRT"/><ref name="CIsufficient"/> A [[sufficient&nbsp;matrix]] is a generalization both of a [[positive-definite matrix]] and of a [[P-matrix]], whose [[principal&nbsp;minor]]s are each positive.<ref name="HRT"/><ref name="CIsufficient"/><ref>{{cite journal|last1=Cottle|first1=R. W.|author-link1=Richard W. Cottle|last2=Pang|first2=J.-S.|last3=Venkateswaran|first3=V.|title=Sufficient matrices and the linear complementarity problem|journal=Linear Algebra and Its Applications|volume=114–115|date=March–April 1989|pages=231–249|doi=10.1016/0024-3795(89)90463-1|mr=986877|doi-access=free}}</ref> The criss-cross algorithm has also been adapted for [[linear-fractional programming]].<ref name="LF99Hyperbolic"/><ref name="Bibl"/>
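
The positivity condition on the principal minors of a [[P-matrix]] mentioned above can be checked by brute force for small matrices. The sketch below is illustrative only; it tests the P-matrix property, not the weaker sufficiency condition, and it enumerates exponentially many minors, so it is impractical beyond small dimensions.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def is_p_matrix(M, tol=1e-12):
    """Return True if every principal minor of the square matrix M is positive."""
    n = M.shape[0]
    for size in range(1, n + 1):
        for idx in itertools.combinations(range(n), size):
            # Principal submatrix on the same row and column index set.
            if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                return False
    return True

print(is_p_matrix(np.array([[2.0, 1.0], [0.0, 3.0]])))    # True: minors 2, 3 and 6
print(is_p_matrix(np.array([[0.0, 1.0], [-1.0, 2.0]])))   # False: the 1-by-1 minor 0 is not positive
</syntaxhighlight>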
 
===Vertex enumeration===
|mr=278972
|chapter-url=http://www.math.washington.edu/~rtr/papers/rtr-ElemVectors.pdf |id=[http://www.math.washington.edu/~rtr/papers/rtr-ElemVectors.pdf PDF reprint]}}</p><p>Rockafellar was influenced by the earlier studies of [[Albert W. Tucker]] and [[George J. Minty]]. Tucker and Minty had studied the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm.</p>
</ref> Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. However, Bland's rule exhibits cycling on some oriented-matroid linear-programming problems.<ref name="OMBook"/> The first purely combinatorial algorithm for linear programming was devised by [[Michael J. Todd (mathematician)|Michael&nbsp;J. Todd]].<ref name="OMBook"/><ref name="Todd"/> Todd's algorithm was developed not only for linear programming in the setting of oriented matroids, but also for [[quadratic programming|quadratic-programming problems]] and [[linear complementarity problem|linear-complementarity problem]]s.<ref name="OMBook"/><ref name="Todd" >{{cite journal|last=Todd|first=Michael J.|author-link=Michael J. Todd (mathematician)|title=Linear and quadratic programming in oriented matroids|journal=Journal of Combinatorial Theory|series=Series B|volume=39|year=1985|number=2|pages=105–133|mr=811116|doi=10.1016/0095-8956(85)90042-5|doi-access=free}}</ref> Unfortunately, Todd's algorithm is complicated even to state, and its proofs of finite convergence are also somewhat involved.<ref name="OMBook"/>
 
The criss-cross algorithm and its proof of finite termination can be simply stated and readily extend to the setting of oriented matroids. The algorithm can be further simplified for ''linear feasibility problems'', that is, for [[linear system]]s with [[linear inequality|nonnegative variable]]s; these problems can be formulated for oriented matroids.<ref name="KT91"/> The criss-cross algorithm has been adapted for problems that are more complicated than linear programming: there are oriented-matroid variants also for the quadratic-programming problem and for the linear-complementarity problem.<ref name="FukudaTerlaky"/><ref name="FukudaNamikiLCP"/><ref name="OMBook"/>
 
==Notes==
{{Reflist}}
 
==References==
* {{cite journal |first1=David |last1=Avis |first2=Komei |last2=Fukuda |author-link2=Komei Fukuda |author-link1=David Avis |title=A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra|journal=[[Discrete and Computational Geometry]] |volume=8 |date=December 1992 |pages=295–313 |doi=10.1007/BF02293050 |issue=ACM Symposium on Computational Geometry (North Conway, NH, 1991) number 1 |mr=1174359|doi-access=free }}
* {{cite journal|first1=Zsolt|last1=Csizmadia|first2=Tibor|last2=Illés|title=New criss-cross type algorithms for linear complementarity problems with sufficient matrices|journal=Optimization Methods and Software|volume=21|year=2006|number=2|pages=247–266|doi=10.1080/10556780500095009|url=http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf|format=pdf<!--|eprint=http://www.tandfonline.com/doi/pdf/10.1080/10556780500095009-->|mr=2195759|s2cid=24418835|access-date=30 August 2011|archive-date=23 September 2015|archive-url=https://web.archive.org/web/20150923211403/http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf|url-status=dead}}
* {{cite journal|last1=Fukuda|first1=Komei|author-link1=Komei Fukuda|last2=Namiki|first2=Makoto|title=On extremal behaviors of Murty's least index method|journal=Mathematical Programming|date=March 1994|pages=365–370|volume=64|number=1|doi=10.1007/BF01582581|mr=1286455|s2cid=21476636}}
* {{cite journal|first1=Komei|last1=Fukuda| author-link1=Komei Fukuda |first2=Tamás|last2=Terlaky| author-link2=Tamás Terlaky |title=Criss-cross methods: A fresh view on pivot algorithms |journal=Mathematical Programming, Series B|volume=79|pages=369–395|issue=Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997, number 1–3 |editor1-first=Thomas M.|editor1-last=Liebling|editor2-first=Dominique|editor2-last=de Werra|year=1997|doi=10.1007/BF02614325|mr=1464775|id=[http://www.cas.mcmaster.ca/~terlaky/files/crisscross.ps Postscript preprint]|citeseerx=10.1.1.36.9373|s2cid=2794181}}
* {{cite journal|first1=D.|last1=den Hertog|first2=C.|last2=Roos|first3=T.|last3=Terlaky|title=The linear complementarity problem, sufficient matrices, and the criss-cross method|journal=Linear Algebra and Its Applications|volume=187|date=1 July 1993|pages=1–14|url=https://core.ac.uk/download/pdf/6714737.pdf|doi=10.1016/0024-3795(93)90124-7|mr=1221693|doi-access=free}}
* {{<!-- citation -->cite journal|title=The finite criss-cross method for hyperbolic programming|journal=European Journal of Operational Research|volume=114|number=1|
pages=198–214|year=1999<!-- |issn=0377-2217 -->|doi=10.1016/S0377-2217(98)00049-6|url=http://www.sciencedirect.com/science/article/B6VCT-3W3DFHB-M/2/4b0e2fcfc2a71e8c14c61640b32e805a
* {{cite journal|last=Roos|first=C.|title=An exponential example for Terlaky's pivoting rule for the criss-cross simplex method|journal=Mathematical Programming|volume=46|year=1990|number=1|series=Series A|pages=79–84|doi=10.1007/BF01585729|mr=1045573|s2cid=33463483}}<!-- Google scholar reported no free versions -->
* {{cite journal|last=Terlaky|first=T.|title=A convergent criss-cross method|journal=Optimization: A Journal of Mathematical Programming and Operations Research|volume=16|year=1985|number=5|pages=683–690|issn=0233-1934|doi=10.1080/02331938508843067|mr=798939}}<!-- Google scholar reported no free versions -->
* {{cite journal|last=Terlaky|first=Tamás|author-link=Tamás Terlaky|title=A finite crisscross method for oriented matroids|volume=42|year=1987|number=3|pages=319–327|journal=Journal of Combinatorial Theory|series=Series B|issn=0095-8956|doi=10.1016/0095-8956(87)90049-9|mr=888684|doi-access=free}}<!-- Google scholar reported no free versions -->
* {{cite journal|last1=Terlaky|first1=Tamás| author-link1=Tamás Terlaky |last2=Zhang|first2=Shu Zhong|title=Pivot rules for linear programming: A Survey on recent theoretical developments|issue=Degeneracy in optimization problems, number 1 |journal=Annals of Operations Research|volume=46–47|year=1993|pages=203–233 |doi=10.1007/BF02096264|mr=1260019 |citeseerx = 10.1.1.36.7658 |s2cid=6058077| orig-year = 1991 |issn=0254-5330}}
* {{cite journal|last=Wang|first=Zhe Min|title=A finite conformal-elimination free algorithm over oriented matroid programming|journal=Chinese Annals of Mathematics (Shuxue Niankan B Ji)|series=Series B|volume=8|year=1987|number=1|pages=120–125|issn=0252-9599|mr=886756}}<!-- Google scholar reported no free versions -->
==External links==
* [https://web.archive.org/web/20110728105602/http://www.ifor.math.ethz.ch/~fukuda/ Komei Fukuda (ETH Zentrum, Zurich)] with [https://web.archive.org/web/20110728105643/http://www.ifor.math.ethz.ch/~fukuda/publ/publ.html publications]
* [http://coral.ie.lehigh.edu/~terlaky/ Tamás Terlaky (Lehigh University)] with [http://coral.ie.lehigh.edu/~terlaky/publications publications] {{Webarchive|url=https://web.archive.org/web/20110928051231/http://coral.ie.lehigh.edu/~terlaky/publications |date=28 September 2011 }}
 
{{Mathematical programming|state=expanded}}