Criss-cross algorithm

==Comparison with the simplex algorithm of Dantzig==
{{See also|Linear programming|Simplex algorithm|Bland's rule}}
In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the [[simplex algorithm]] of [[George Dantzig]]. The simplex algorithm first finds a (primal-) feasible basis by solving a "''phase-one'' problem"; in "phase two", the simplex algorithm pivots between a sequence of basic ''feasible'' solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution).<ref name="FukudaTerlaky"/><ref name="TerlakyZhang">{{harvtxt|Terlaky|Zhang|1993}}</ref>
 
The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm has only one phase. Its pivoting rules are similar to the [[Bland's rule|least-index pivoting rule]] of [[Robert&nbsp;G. Bland]]. Bland's rule uses only [[sign function|sign]]s of coefficients rather than their [[real number|(real-number) order]] when deciding eligible pivots; however, Bland's rule selects an entering variable by comparing the values of reduced costs, using the real-number ordering of the eligible pivots. Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.<ref name="FukudaTerlaky"/><ref name="TerlakyZhang"/>
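As an illustration of the sign-based, least-index pivoting described above, the following is a minimal sketch of a least-index criss-cross method on a dictionary (tableau) for a linear program in the form max ''c''<sup>T</sup>''x'' subject to ''Ax'' ≤ ''b'', ''x'' ≥ 0. The function and variable names are illustrative, not from the cited sources; exact rational arithmetic is assumed so that sign tests are reliable:

```python
from fractions import Fraction

def criss_cross(A, b, c):
    """Least-index criss-cross sketch for  max c^T x  s.t.  A x <= b, x >= 0,
    kept as the dictionary  x_B = b - A_N x_N,  z = z0 + c^T x_N.
    Decision variables carry indices 1..n, slack variables n+1..n+m."""
    m, n = len(A), len(c)
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    c = [Fraction(v) for v in c]
    z0 = Fraction(0)
    N = list(range(1, n + 1))            # nonbasic variable indices
    B = list(range(n + 1, n + m + 1))    # basic (slack) variable indices

    def pivot(r, s):
        nonlocal z0
        a = A[r][s]
        # Solve row r for the entering variable, then substitute elsewhere.
        A[r] = [Fraction(1) / a if j == s else A[r][j] / a for j in range(n)]
        b[r] /= a
        for i in range(m):
            if i == r:
                continue
            f = A[i][s]
            b[i] -= f * b[r]
            A[i] = [-f * A[r][s] if j == s else A[i][j] - f * A[r][j]
                    for j in range(n)]
        f = c[s]
        z0 += f * b[r]
        c[:] = [-f * A[r][s] if j == s else c[j] - f * A[r][j] for j in range(n)]
        B[r], N[s] = N[s], B[r]          # exchange basic and nonbasic variable

    while True:
        # Least-index rule: smallest-index variable violating primal or dual
        # feasibility; only the SIGNS of b, c, and A are inspected below.
        k = next((v for v in range(1, n + m + 1)
                  if (v in B and b[B.index(v)] < 0)
                  or (v in N and c[N.index(v)] > 0)), None)
        if k is None:                    # primal and dual feasible: optimal
            x = [Fraction(0)] * n
            for i, v in enumerate(B):
                if v <= n:
                    x[v - 1] = b[i]
            return 'optimal', z0, x
        if k in B:                       # primal-infeasible basic variable leaves
            r = B.index(k)
            cols = [j for j in range(n) if A[r][j] < 0]
            if not cols:
                return 'primal infeasible', None, None
            s = min(cols, key=lambda j: N[j])   # least-index entering variable
        else:                            # dual-infeasible nonbasic variable enters
            s = N.index(k)
            rows = [i for i in range(m) if A[i][s] > 0]
            if not rows:
                return 'dual infeasible', None, None
            r = min(rows, key=lambda i: B[i])   # least-index leaving variable
        pivot(r, s)
```

On the example max 2''x''₁ + ''x''₂ with ''x''₁ ≤ 3, ''x''₂ ≤ 2, ''x''₁ + ''x''₂ ≤ 4, this sketch reaches the optimum ''x'' = (3, 1) with objective value 7 after finitely many sign-based pivots; unlike the simplex method, the intermediate bases visited need not be primal feasible and the objective value need not be monotone.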
 
==Oriented matroids==