{{Short description|Algorithm for linear programming}}
{{about|the linear programming algorithm|the non-linear optimization heuristic|Nelder–Mead method}}
<!-- {{Context|date=March 2012}} -->
In [[optimization (mathematics)|mathematical optimization]], [[George Dantzig|Dantzig]]'s '''simplex algorithm''' (or '''simplex method''') is a popular [[algorithm]] for [[linear programming]].<ref name="Murty">{{cite book |last=Murty |first=Katta G. |title=Linear programming |publisher=John Wiley & Sons |___location=New York |year=1983 |isbn=978-0-471-09725-9 |mr=720547}}</ref>
The name of the algorithm is derived from the concept of a [[simplex]] and was suggested by [[Theodore Motzkin|T. S. Motzkin]].<ref name="Murty22" >{{harvtxt|Murty|1983|loc=Comment 2.2}}</ref> Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial ''[[cone (geometry)|cone]]s'', and these become proper simplices with an additional constraint.<ref name="Murty39">{{harvtxt|Murty|1983|loc=Note 3.9}}</ref><ref name="StoneTovey">{{cite journal|last1=Stone|first1=Richard E.|last2=Tovey|first2=Craig A.|title=The simplex and projective scaling algorithms as iteratively reweighted least squares methods|journal=SIAM Review|volume=33|year=1991|issue=2|pages=220–237
|mr=1124362|doi=10.1137/1033100|jstor=2031443}}</ref>
==History==
{{further2|[[Linear programming]]}}
[[George Dantzig]] worked on planning methods for the US Army Air Force during World War II using a [[Mechanical_calculator#1900s_to_1970s|desk calculator]]. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of [[Wassily Leontief]]; however, at that time he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.<ref>{{Cite journal|url = https://apps.dtic.mil/sti/pdfs/ADA112060.pdf|archive-url = https://web.archive.org/web/20150520183722/http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA112060|url-status = live|archive-date = May 20, 2015|title = Reminiscences about the origins of linear programming|date = April 1982|journal = Operations Research Letters|doi = 10.1016/0167-6377(82)90043-8|volume = 1|issue = 2 |pages=43–48|last1 = Dantzig|first1 = George B.}}</ref> Development of the simplex method was evolutionary and happened over a period of about a year.<ref>{{Cite journal|url = http://www.phpsimplex.com/en/Dantzig_interview.htm|title = An Interview with George B. Dantzig: The Father of Linear Programming|last = Albers and Reid|date = 1986|journal = College Mathematics Journal|volume = 17|issue = 4|doi = 10.1080/07468342.1986.11972971|pages = 292–314}}</ref>
After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that [[George Dantzig#Education|he had mistaken]] as homework in his professor [[Jerzy Neyman]]'s class (and actually later solved), was applicable to finding an algorithm for linear programs. This problem involved finding the existence of [[Lagrange multipliers]] for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of [[Lebesgue integral]]s. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the Simplex method would be very efficient.<ref>{{Cite encyclopedia|url = http://apps.dtic.mil/dtic/tr/fulltext/u2/a182708.pdf|archive-url = https://web.archive.org/web/20150529003047/http://www.dtic.mil/dtic/tr/fulltext/u2/a182708.pdf|url-status = live|archive-date = May 29, 2015|title = Origins of the simplex method|last = Dantzig|first = George|date = May 1987|encyclopedia = A History of Scientific Computing|editor-last=Nash|editor-first=Stephen G.|publisher=Association for Computing Machinery|pages = 141–151|doi = 10.1145/87252.88081|isbn = 978-0-201-50814-7}}</ref>
==Overview==
[[Image:Simplex-description-en.svg|thumb|240px|A [[system of linear inequalities]] defines a [[polytope]] as a feasible region. The simplex algorithm begins at a starting [[vertex (geometry)|vertex]] and moves along the edges of the polytope until it reaches the vertex of the optimal solution.]]
[[Image:Simplex-method-3-dimensions.png|thumb|240px|Polyhedron of simplex algorithm in 3D]]
The simplex algorithm operates on linear programs in the standard form
:maximize
::<math>\mathbf{c}^T \cdot \mathbf{x}</math>
:subject to
::<math>\mathbf{A}\mathbf{x} = \mathbf{b},\, x_i \ge 0</math>
with <math>\scriptstyle x \;=\; (x_1,\, \dots,\, x_n)</math> the variables of the problem, <math>\scriptstyle c \;=\; (c_1,\, \dots,\, c_n)</math> the coefficients of the objective function, ''A'' a ''p×n'' matrix, and <math>\scriptstyle b \;=\; (b_1,\, \dots,\, b_p)</math> constants with <math>\scriptstyle b_j\geq 0</math>. There is a straightforward process to convert any linear program into one in standard form, so this results in no loss of generality.
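To make the standard form concrete, a program of this kind can be handed directly to an off-the-shelf solver. The following minimal sketch uses SciPy's <code>linprog</code> routine, which minimizes by convention (a maximization problem is solved by negating '''c'''); the problem data here is arbitrary and only illustrates the interface.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

# Illustrative standard-form data: minimize c^T x subject to Ax = b, x >= 0.
c = np.array([1.0, 2.0, 3.0])      # objective coefficients
A = np.array([[1.0, 1.0, 1.0]])    # p-by-n constraint matrix
b = np.array([3.0])                # right-hand side with b_j >= 0

# bounds=(0, None) encodes x_i >= 0; method="highs" selects the HiGHS
# solver, which includes a modern implementation of the simplex method.
result = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(result.x, result.fun)        # -> [3. 0. 0.] 3.0
</syntaxhighlight>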
It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points. This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.
It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point. If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value; otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution).
The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called ''infeasible''. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.<ref name="DantzigThapa1">{{cite book |first1=George B. |last1=Dantzig |author-link=George Dantzig |first2=Mukund N. |last2=Thapa |year=1997 |title=Linear programming 1: Introduction |publisher=Springer-Verlag}}</ref><ref name="NeringTucker">{{cite book |first1=Evar D. |last1=Nering |first2=Albert W. |last2=Tucker |year=1993 |title=Linear Programs and Related Problems |publisher=Academic Press}}</ref><ref name="Padberg">{{cite book |first=M. |last=Padberg |title=Linear Optimization and Extensions |edition=Second |publisher=Springer-Verlag |year=1999}}</ref>
==Standard form==
The transformation of a linear program to one in standard form may be accomplished as follows. First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint
:<math>x_1 \ge 5</math>
a new variable, <math>y_1</math>, is introduced with
:<math> \begin{align} y_1 = x_1 - 5\\x_1 = y_1 + 5 \end{align}</math>
The second equation may be used to eliminate <math>x_1</math> from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions.
Second, for each remaining inequality constraint, a new variable, called a ''[[slack variable]]'', is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be nonnegative. For example, the inequalities
:<math> \begin{align}
x_2 + 2x_3 &\le 3\\
-x_4 + 3x_5 &\ge 2
\end{align}</math>
are replaced with
:<math> \begin{align}
x_2 + 2x_3 + s_1 &= 3\\
-x_4 + 3x_5 - s_2 &= 2\\
s_1,\, s_2 &\ge 0
\end{align}</math>
It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where ≥ appears such as the second one, some authors refer to the variable introduced as a {{anchor|Surplus variable}}''surplus variable''.
Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways, one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution. The other is to replace the variable with the difference of two restricted variables. For example, if <math>z_1</math> is unrestricted then write
:<math>\begin{align}
&z_1 = z_1^+ - z_1^-\\
&z_1^+,\, z_1^- \ge 0
\end{align}</math>
The equation may be used to eliminate <math>z_1</math> from the linear program.
When this process is complete the feasible region will be in the form
:<math>\mathbf{A}\mathbf{x} = \mathbf{b},\, \forall \ x_i \ge 0</math>
It is also useful to assume that the rank of <math>\mathbf{A}</math> is the number of rows. This results in no loss of generality since otherwise either the system <math>\mathbf{A}\mathbf{x} = \mathbf{b}</math> has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.
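These three steps are mechanical and can be carried out programmatically. The following sketch (illustrative only; the helper name and interface are hypothetical) converts a program whose variables are unrestricted in sign and whose constraints are inequalities into standard form:
<syntaxhighlight lang="python">
import numpy as np

def to_standard_form(c, A_ub, b_ub):
    """Convert  minimize c^T x  s.t.  A_ub x <= b_ub  with x free (no sign
    restriction) into standard form  minimize c'^T y  s.t.  A y = b, y >= 0.

    Each free x_i is written as x_i^+ - x_i^-, and one slack variable is
    appended per row; rows with negative right-hand sides are then negated
    (turning their slacks into surplus variables) so that b >= 0.
    """
    m, n = A_ub.shape
    A = np.hstack([A_ub, -A_ub, np.eye(m)])   # columns for x^+, x^-, slacks
    c_std = np.concatenate([c, -c, np.zeros(m)])
    neg = b_ub < 0
    A[neg, :] *= -1
    b_std = np.where(neg, -b_ub, b_ub)
    return c_std, A, b_std
</syntaxhighlight>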
==Simplex tableau==
A linear program in standard form can be represented as a ''tableau'' of the form
:<math>
\begin{bmatrix}
1 & -\mathbf{c}^T & 0 \\
\mathbf{0} & \mathbf{A} & \mathbf{b}
\end{bmatrix}
</math>
The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as the vector '''b''' (different authors use different conventions as to the exact layout). If the columns of '''A''' can be rearranged so that it contains the [[identity matrix]] of order ''p'' (the number of rows in '''A''') then the tableau is said to be in ''canonical form''. The variables corresponding to the columns of the identity matrix are called ''basic variables'' while the remaining variables are called ''nonbasic'' or ''free variables''. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries in '''b''' and this solution is a basic feasible solution.
Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.<ref>{{harvtxt|Murty|1983|loc=section 3.12}}</ref>
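For concreteness, a tableau with this layout can be assembled in a few lines of NumPy; the helper below is an illustrative sketch (the name is hypothetical), used by the further sketches in this article:
<syntaxhighlight lang="python">
import numpy as np

def build_tableau(c, A, b):
    """Assemble the (p+1)-by-(n+2) simplex tableau  [[1, -c^T, 0], [0, A, b]]."""
    p, n = A.shape
    T = np.zeros((p + 1, n + 2))
    T[0, 0] = 1.0           # column for the objective value z
    T[0, 1:n + 1] = -c      # objective row encodes z - c^T x = 0
    T[1:, 1:n + 1] = A
    T[1:, n + 1] = b        # right-hand side in the last column
    return T
</syntaxhighlight>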
==Pivot operations==
The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a ''pivot operation''. First, a nonzero ''pivot element'' is selected in a nonbasic column. The row containing this element is [[Elementary matrix#Row-multiplying transformations|multiplied]] by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in a row ''r'', then the column becomes the ''r''-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the ''r''-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called the ''entering variable'', and the variable being replaced leaves the set of basic variables and is called the ''leaving variable''. The tableau is still in canonical form but with the set of basic variables changed by one element.<ref name="DantzigThapa1"/><ref name="NeringTucker"/>
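A pivot operation is a short sequence of elementary row operations. The following sketch implements it on the NumPy tableau produced by the <code>build_tableau</code> helper above (row 0 is the objective row, so pivots use row indices greater than 0):
<syntaxhighlight lang="python">
def pivot(T, r, col):
    """Pivot the tableau T in place on the entry in row r, column col.

    Row r is scaled so the pivot entry becomes 1; multiples of it are then
    subtracted from every other row, so column col becomes a column of the
    identity matrix and its variable enters the set of basic variables.
    """
    T[r] = T[r] / T[r, col]
    for i in range(T.shape[0]):
        if i != r:
            T[i] = T[i] - T[i, col] * T[r]
</syntaxhighlight>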
==Algorithm==
Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations, each of which gives an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution.
===Entering variable selection===
Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is decreased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive.
If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and several ''entering variable choice rules''<ref name="Murty66">{{harvtxt|Murty|1983|p=66}}</ref> such as the [[Devex algorithm]]<ref>Harris, Paula MJ. "Pivot selection methods of the Devex LP code." Mathematical Programming 5.1 (1973): 1–28.</ref> have been developed.
If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. It is easily seen to be optimal since the objective row now corresponds to an equation of the form
:<math display="block">z(\mathbf{x})=z_B+\text{nonnegative terms corresponding to nonbasic variables}</math>
By changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the maximum of the objective function rather than the minimum.
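One concrete choice, Dantzig's classical largest-coefficient rule, is sketched below for the tableau representation used above (an illustrative helper, not a library routine); it returns <code>None</code> when no objective-row entry is positive, i.e. at optimality:
<syntaxhighlight lang="python">
import numpy as np

def select_entering(T, tol=1e-9):
    """Dantzig's rule: pick the column with the largest positive entry in
    the objective row (row 0); return None when no entry is positive."""
    obj = T[0, 1:-1]          # skip the leading z-column and the RHS column
    j = int(np.argmax(obj))
    return j + 1 if obj[j] > tol else None
</syntaxhighlight>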
===Leaving variable selection===
Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that the resulting solution be feasible. First, only positive entries in the pivot column are considered since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any nonnegative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum.
Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum. In other words, if the pivot column is ''c'', then the pivot row ''r'' is chosen so that
:<math>b_r / a_{rc}</math>
is the minimum over all ''r'' so that ''a''<sub>''rc''</sub> > 0. This is called the ''minimum ratio test''.<ref name="DantzigThapa1"/><ref name="NeringTucker"/> If there is more than one row for which the minimum is achieved then a ''dropping variable choice rule'' can be used to make the determination.
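The minimum ratio test can be written directly; as before, this is an illustrative sketch operating on the NumPy tableau built earlier:
<syntaxhighlight lang="python">
def select_leaving(T, col, tol=1e-9):
    """Minimum ratio test: among constraint rows r with a positive entry in
    the pivot column, choose the one minimizing b_r / a_rc. Returns None
    when no entry is positive, i.e. the objective is unbounded."""
    candidates = [(T[r, -1] / T[r, col], r)
                  for r in range(1, T.shape[0]) if T[r, col] > tol]
    return min(candidates)[1] if candidates else None
</syntaxhighlight>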
===Example===
Consider the linear program
:Minimize
::<math>Z = -2x - 3y - 4z\,</math>
:Subject to
::<math>\begin{align}
3x + 2y + z &\le 10\\
2x + 5y + 3z &\le 15\\
x,\,y,\,z &\ge 0
\end{align}</math>
With the addition of slack variables ''s'' and ''t'' the problem is represented by the canonical tableau
:<math>
\begin{bmatrix}
1 & 2 & 3 & 4 & 0 & 0 & 0 \\
0 & 3 & 2 & 1 & 1 & 0 & 10 \\
0 & 2 & 5 & 3 & 0 & 1 & 15
\end{bmatrix}
</math>
where columns 5 and 6 represent the basic variables ''s'' and ''t'' and the corresponding basic feasible solution is
:<math>x=y=z=0,\,s=10,\,t=15.</math>
Columns 2, 3, and 4 can be selected as pivot columns, for this example column 4 is selected. The values of ''z'' resulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. Performing the pivot produces
:<math>
\begin{bmatrix}
1 & -\tfrac{2}{3} & -\tfrac{11}{3} & 0 & 0 & -\tfrac{4}{3} & -20 \\
0 & \tfrac{7}{3} & \tfrac{1}{3} & 0 & 1 & -\tfrac{1}{3} & 5 \\
0 & \tfrac{2}{3} & \tfrac{5}{3} & 1 & 0 & \tfrac{1}{3} & 5
\end{bmatrix}
</math>
Now columns 4 and 5 represent the basic variables ''z'' and ''s'' and the corresponding basic feasible solution is
:<math>x=y=t=0,\, z=5,\, s=5.</math>
For the next step, there are no positive entries in the objective row and in fact
:<math display="block">Z = -20 + \tfrac{2}{3}x + \tfrac{11}{3}y + \tfrac{4}{3}t</math>
so the minimum value of ''Z'' is −20.
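Assembling the sketches above (<code>build_tableau</code>, <code>select_entering</code>, <code>select_leaving</code>, <code>pivot</code>) reproduces this result:
<syntaxhighlight lang="python">
import numpy as np

c = np.array([-2.0, -3.0, -4.0, 0.0, 0.0])    # objective; slacks cost 0
A = np.array([[3.0, 2.0, 1.0, 1.0, 0.0],      # 3x + 2y +  z + s     = 10
              [2.0, 5.0, 3.0, 0.0, 1.0]])     # 2x + 5y + 3z     + t = 15
b = np.array([10.0, 15.0])

T = build_tableau(c, A, b)
while (col := select_entering(T)) is not None:
    row = select_leaving(T, col)
    if row is None:
        raise ValueError("objective unbounded below")
    pivot(T, row, col)
print(T[0, -1])                               # -> -20.0
</syntaxhighlight>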
==Finding an initial canonical tableau==
In general, a linear program will not be given in the canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction of ''artificial variables''. Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the slack variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called the ''Phase I'' problem.<ref>{{harvtxt|Murty|1983|p=60}}</ref>
The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is called ''Phase II''. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero. This implies that the feasible region for the original problem is empty, and so the original problem has no solution.<ref name="DantzigThapa1"/><ref name="NeringTucker"/><ref name="Padberg"/>
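The Phase I construction can be sketched with the tableau helpers above (again illustrative, with a hypothetical helper name); pricing out the artificial variables mirrors the row additions performed in the example below. Running the simplex loop shown earlier on this tableau drives the entry in the upper-right corner to zero exactly when the original program is feasible.
<syntaxhighlight lang="python">
import numpy as np

def phase_one_tableau(A, b):
    """Build the Phase I tableau: minimize the sum of artificial variables
    subject to A x + I a = b with x, a >= 0."""
    A = A.copy(); b = b.copy()
    neg = b < 0
    A[neg] *= -1                 # negate rows with negative right-hand sides
    b[neg] *= -1
    p, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(p)])   # cost 1 per artificial
    T = build_tableau(c, np.hstack([A, np.eye(p)]), b)
    # Price out the artificials by adding every constraint row to the
    # objective row, so the basic (identity) columns have zero entries there.
    T[0, 1:] += T[1:, 1:].sum(axis=0)
    return T
</syntaxhighlight>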
For example, consider the linear program
:Minimize
::<math>Z = -2x - 3y - 4z\,</math>
:Subject to
::<math>\begin{align}
3x + 2y + z &= 10\\
2x + 5y + 3z &= 15\\
x,\,y,\,z &\ge 0
\end{align}</math>
It differs from the previous example by having equality instead of inequality constraints. The previous solution <math>x = y = 0,\, z = 5</math> violates the first constraint.
This new problem is represented by the (non-canonical) tableau
:<math>
\begin{bmatrix}
1 & 2 & 3 & 4 & 0 \\
0 & 3 & 2 & 1 & 10 \\
0 & 2 & 5 & 3 & 15
\end{bmatrix}
</math>
Introduce artificial variables ''u'' and ''v'' and objective function ''W'' = ''u'' + ''v'', giving a new tableau
:<math>
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & -1 & -1 & 0 \\
0 & 1 & 2 & 3 & 4 & 0 & 0 & 0 \\
0 & 0 & 3 & 2 & 1 & 1 & 0 & 10 \\
0 & 0 & 2 & 5 & 3 & 0 & 1 & 15
\end{bmatrix}
</math>
Note that the equation defining the original objective function is retained in anticipation of Phase II. By construction, ''u'' and ''v'' are both basic variables since they are part of the initial identity matrix. However, the objective function ''W'' currently assumes that ''u'' and ''v'' are both 0. In order to adjust the objective function to be the correct value where ''u'' = 10 and ''v'' = 15, add the third and fourth rows to the first row giving
:<math>
\begin{bmatrix}
1 & 0 & 5 & 7 & 4 & 0 & 0 & 25 \\
0 & 1 & 2 & 3 & 4 & 0 & 0 & 0 \\
0 & 0 & 3 & 2 & 1 & 1 & 0 & 10 \\
0 & 0 & 2 & 5 & 3 & 0 & 1 & 15
\end{bmatrix}
</math>
Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is
:<math>
\begin{bmatrix}
1 & 0 & \tfrac{7}{3} & \tfrac{1}{3} & 0 & 0 & -\tfrac{4}{3} & 5 \\
0 & 1 & -\tfrac{2}{3} & -\tfrac{11}{3} & 0 & 0 & -\tfrac{4}{3} & -20 \\
0 & 0 & \tfrac{7}{3} & \tfrac{1}{3} & 0 & 1 & -\tfrac{1}{3} & 5 \\
0 & 0 & \tfrac{2}{3} & \tfrac{5}{3} & 1 & 0 & \tfrac{1}{3} & 5
\end{bmatrix}
</math>
Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get
:<math>
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & -1 & -1 & 0 \\
0 & 1 & 0 & -\tfrac{25}{7} & 0 & \tfrac{2}{7} & -\tfrac{10}{7} & -\tfrac{130}{7} \\
0 & 0 & 1 & \tfrac{1}{7} & 0 & \tfrac{3}{7} & -\tfrac{1}{7} & \tfrac{15}{7} \\
0 & 0 & 0 & \tfrac{11}{7} & 1 & -\tfrac{2}{7} & \tfrac{3}{7} & \tfrac{25}{7}
\end{bmatrix}
</math>
The artificial variables are now 0 and they may be dropped, giving a canonical tableau equivalent to the original problem:
:<math>
\begin{bmatrix}
1 & 0 & -\tfrac{25}{7} & 0 & -\tfrac{130}{7} \\
0 & 1 & \tfrac{1}{7} & 0 & \tfrac{15}{7} \\
0 & 0 & \tfrac{11}{7} & 1 & \tfrac{25}{7}
\end{bmatrix}
</math>
This is, fortuitously, already optimal and the optimum value for the original linear program is −130/7. This value is "worse" than −20, which is to be expected for a problem that is more constrained.
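The optimum can be checked independently with an off-the-shelf solver, which performs an equivalent Phase I internally; a minimal check with SciPy:
<syntaxhighlight lang="python">
from scipy.optimize import linprog

# The same data as the equality-constrained example above.
res = linprog(c=[-2, -3, -4],
              A_eq=[[3, 2, 1], [2, 5, 3]], b_eq=[10, 15],
              bounds=(0, None), method="highs")
print(res.fun)        # -> -18.571428...  ==  -130/7
</syntaxhighlight>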
==Advanced topics==
===Implementation===
{{main|Revised simplex algorithm}}
The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (''m'' + 1)-by-(''m'' + ''n'' + 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue of '''B''' being a subset of the columns of ['''A''', '''I''']. This implementation is referred to as the "''standard'' simplex algorithm". The storage and computation overhead is such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems.
In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable and the right-hand-side. The latter can be updated using the pivotal column and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrix '''B''' and a matrix-vector product using '''A'''. These observations motivate the "[[Revised simplex algorithm|''revised'' simplex algorithm]]", for which implementations are distinguished by their invertible representation of '''B'''.<ref name="DantzigThapa2" >{{cite book |first1=George B. |last1=Dantzig |authorlink=George B. Dantzig |first2=Mukund N. |last2=Thapa |year=2003 |title=Linear Programming 2: Theory and Extensions |publisher=Springer-Verlag }}</ref>
In large linear-programming problems '''A''' is typically a [[sparse matrix]] and, when the resulting sparsity of '''B''' is exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method. Commercial simplex solvers are based on the revised simplex algorithm.
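A single iteration of the revised method can be sketched in dense linear algebra as follows (illustrative only; the function name is hypothetical, and production implementations maintain and update a sparse factorization of '''B''' rather than solving each system from scratch):
<syntaxhighlight lang="python">
import numpy as np

def revised_simplex_step(A, b, c, basis, tol=1e-9):
    """One iteration of the revised simplex method (dense sketch).

    Only linear systems with the basis matrix B = A[:, basis] are solved,
    rather than updating a full tableau.
    """
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)             # values of the basic variables
    y = np.linalg.solve(B.T, c[basis])      # simplex multipliers
    reduced = c - A.T @ y                   # reduced costs (zero on the basis)
    q = int(np.argmin(reduced))
    if reduced[q] >= -tol:
        return None                         # current basis is optimal
    d = np.linalg.solve(B, A[:, q])         # pivotal column in the basis frame
    ratios = [(x_B[i] / d[i], i) for i in range(len(d)) if d[i] > tol]
    if not ratios:
        raise ValueError("objective unbounded below")
    basis[min(ratios)[1]] = q               # leaving variable is replaced by q
    return basis
</syntaxhighlight>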
===Degeneracy: stalling and cycling===
If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the ''basic'' variables is zero are called ''degenerate'' and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "''stalling''" is notable.
Worse than stalling is the possibility the same set of basic variables occurs twice, in which case, the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare in practice. A discussion of an example of practical cycling occurs in [[Manfred W. Padberg|Padberg]].<ref name="Padberg"/> [[Bland's rule]] prevents cycling and thus guarantees that the simplex algorithm always terminates.<ref>{{cite journal |title=New finite pivoting rules for the simplex method |first1=Robert G. |last1=Bland |author-link1=Robert G. Bland |journal=[[Mathematics of Operations Research]] |volume=2 |issue=2 |date=May 1977 |pages=103–107 |doi=10.1287/moor.2.2.103 |jstor=3689647 |mr=459599}}</ref><ref>{{harvtxt|Murty|1983|p=79}}</ref> Another pivoting algorithm, the [[criss-cross algorithm]], never cycles on linear programs.<ref>There are abstract optimization problems, called [[oriented matroid]] programs, on which Bland's rule cycles (incorrectly) while the [[criss-cross algorithm]] terminates correctly.</ref>
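In the tableau representation used earlier, Bland's entering-variable choice simply replaces the largest-coefficient rule with a lowest-index rule; a sketch:
<syntaxhighlight lang="python">
def blands_entering(T, tol=1e-9):
    """Bland's rule: pick the lowest-indexed column whose objective-row
    entry is positive (minimization tableau as above). Paired with a
    lowest-index tie-break in the ratio test, this provably prevents
    cycling, at the cost of slower typical progress."""
    for j in range(1, T.shape[1] - 1):   # skip the z-column and the RHS
        if T[0, j] > tol:
            return j
    return None
</syntaxhighlight>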
History-based pivot rules such as [[Zadeh's rule]] and [[Cunningham's rule]] also try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used, and then favoring variables that have been used least often.
===Efficiency in the worst case===
The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as [[Fourier–Motzkin elimination]]. However, in 1972, [[Victor Klee|Klee]] and [[George J. Minty|Minty]]<ref name="KleeMinty">{{cite book|title=Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin)|editor-first=Oved|editor-last=Shisha|publisher=Academic Press|___location=New York-London|year=1972|mr=332165|last1=Klee|first1=Victor|author-link1=Victor Klee|last2=Minty|first2=George J.|author-link2=George J. Minty|chapter=How good is the simplex algorithm?|pages=159–175}}</ref> gave an example, the [[Klee–Minty cube]], showing that the worst-case complexity of the simplex method as formulated by Dantzig is [[exponential time]]. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly.<ref name="PapSte">[[Christos H. Papadimitriou]] and Kenneth Steiglitz, ''Combinatorial Optimization: Algorithms and Complexity'', Corrected republication with a new preface, Dover. (computer science)</ref><ref name="Schrijver"/> It is an open question if there is a variation with [[polynomial time]], although sub-exponential pivot rules are known.<ref>{{Citation
| last1 = Hansen
| first1 = Thomas
| last2 = Zwick
| first2 = Uri
| title = Proceedings of the forty-seventh annual ACM symposium on Theory of Computing
| chapter = An Improved Version of the Random-Facet Pivoting Rule for the Simplex Algorithm
| author2-link = Uri Zwick
| pages = 209–218
| year = 2015
| doi = 10.1145/2746539.2746557
| citeseerx = 10.1.1.697.2526
| isbn = 9781450335362
| s2cid = 1980659
}}
</ref>
In 2014, it was proved{{citation-needed|date=January 2024}} that a particular variant of the simplex method is [[NP-mighty]], i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both [[NP-hardness|NP-hard]] problems.<ref>{{Cite journal|last1=Disser|first1=Yann|last2=Skutella|first2=Martin|date=2018-11-01|title=The Simplex Algorithm Is NP-Mighty|journal=ACM Trans. Algorithms|volume=15|issue=1|pages=5:1–5:19|doi=10.1145/3280847|issn=1549-6325|arxiv=1311.5935|s2cid=54445546}}</ref> At about the same time it was shown that there exists an artificial pivot rule for which computing its output is [[PSPACE-complete]].<ref>{{Citation | last1 = Adler | first1 = Ilan|author1-link=Ilan Adler | last2 = Christos | first2 = Papadimitriou | author2-link = Christos Papadimitriou | last3 = Rubinstein | first3 = Aviad | title = Integer Programming and Combinatorial Optimization | chapter = On Simplex Pivoting Rules and Complexity Theory | volume = 17 | pages = 13–24 | year = 2014 | arxiv = 1404.3320 | doi = 10.1007/978-3-319-07557-0_2| series = Lecture Notes in Computer Science | isbn = 978-3-319-07556-3 | s2cid = 891022 }}</ref> In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule is [[PSPACE-complete]].<ref>{{Citation | last1 = Fearnly | first1 = John | last2 = Savani | first2 = Rahul | title = Proceedings of the forty-seventh annual ACM symposium on Theory of Computing | chapter = The Complexity of the Simplex Method | pages = 201–208 | year = 2015 | arxiv = 1404.0605 | doi = 10.1145/2746539.2746558| isbn = 9781450335362 | s2cid = 2116116 }}</ref>
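One common textbook form of the Klee–Minty construction can be generated programmatically; the sketch below is illustrative (the exact coefficients vary between presentations):
<syntaxhighlight lang="python">
import numpy as np

def klee_minty(n):
    """Data for an n-dimensional Klee-Minty cube in one common form:

        maximize   2^(n-1) x_1 + 2^(n-2) x_2 + ... + x_n
        subject to x_1 <= 5
                   4 x_1 + x_2 <= 25
                   8 x_1 + 4 x_2 + x_3 <= 125
                   ...,  x >= 0

    On this family, Dantzig's largest-coefficient rule visits all 2^n
    vertices of the distorted cube before reaching the optimum.
    """
    c = np.array([2.0 ** (n - 1 - i) for i in range(n)])
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = 1.0
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)
        b[i] = 5.0 ** (i + 1)
    return c, A, b    # maximize c^T x  subject to  A x <= b, x >= 0
</syntaxhighlight>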
===Efficiency in practice===
Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity. The simplex algorithm has polynomial-time [[Best, worst and average case|average-case complexity]] under various [[probability distribution]]s, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the [[random matrix|random matrices]].<ref name="Schrijver">[[Alexander Schrijver]], ''Theory of Linear and Integer Programming''. John Wiley & sons, 1998, {{isbn|0-471-98232-6}} (mathematical)</ref><ref name="Borgwardt">The simplex algorithm takes on average ''D'' steps for a cube. {{harvtxt|Borgwardt|1987}}: {{cite book|last=Borgwardt|first=Karl-Heinz|title=The simplex method: A probabilistic analysis|series=Algorithms and Combinatorics (Study and Research Texts)|volume=1|publisher=Springer-Verlag|___location=Berlin|year=1987|pages=xii+268|isbn=978-3-540-17096-9|mr=868467}}</ref> Another approach to studying "[[porous set|typical phenomena]]" uses [[Baire category theory]] from [[general topology]] to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps.{{Citation needed|date=June 2019}}
==Other algorithms==
Other algorithms for solving linear-programming problems are described in the [[linear programming|linear-programming]] article. Another basis-exchange pivoting algorithm is the [[criss-cross algorithm]].<ref>{{cite journal|last1=Terlaky|first1=Tamás|last2=Zhang|first2=Shu Zhong|title=Pivot rules for linear programming: A Survey on recent theoretical developments|issue=1|journal=Annals of Operations Research|volume=46–47|year=1993|pages=203–233|doi=10.1007/BF02096264|mr=1260019}}</ref>
==Linear-fractional programming==
{{Main|Linear-fractional programming}}
[[Linear-fractional programming|Linear–fractional programming]] (LFP) is a generalization of [[linear programming]] (LP). In LP the objective function is a [[linear function]], while the objective function of a linear–fractional program is a ratio of two linear functions. In other words, a linear program is a fractional–linear program in which the denominator is the constant function having the value one everywhere. A linear–fractional program can be solved by a variant of the simplex algorithm<ref>{{harvtxt|Murty|1983|loc=Chapter 3.20 (pp. 160–164) and pp. 168 and 179}}</ref><ref>{{cite journal|last1=Mathis|first1=Frank H.|last2=Mathis|first2=Lenora Jane|title=A nonlinear programming algorithm for hospital management|journal=[[SIAM Review]]|volume=37 |year=1995 |issue=2 |pages=230–234|mr=1343214|jstor=2132826|doi=10.1137/1037046|s2cid=120626738}}</ref> or by the [[criss-cross algorithm]].<ref>{{cite journal|last1=Illés|first1=Tibor|last2=Szirmai|first2=Ákos|last3=Terlaky|first3=Tamás|title=The finite criss-cross method for hyperbolic programming|journal=European Journal of Operational Research|volume=114|issue=1|pages=198–214|year=1999|issn=0377-2217|doi=10.1016/S0377-2217(98)00049-6}}</ref>
==See also==
{{div col}}
* [[Criss-cross algorithm]]
* [[Cutting-plane method]]
* [[Devex algorithm]]
* [[Fourier–Motzkin elimination]]
* [[Gradient descent]]
* [[Karmarkar's algorithm]]
* [[Nelder–Mead method|Nelder–Mead simplicial heuristic]]
* [[Loss function]] – a type of objective function
* [[Bland's rule|Pivoting rule of Bland, which avoids cycling]]
{{colend}}
==Notes==
{{reflist|2}}
==References==
* {{cite book|last=Murty|first=Katta G.|title=Linear programming|publisher=John Wiley & Sons|___location=New York|year=1983|isbn=978-0-471-09725-9|mr=720547}}
==Further reading==
These introductions are written for students of [[computer science]] and [[operations research]]:
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001.
* Frederick S. Hillier and Gerald J. Lieberman: ''Introduction to Operations Research'', 8th edition. McGraw-Hill.
* {{cite book|title=Optimization in operations research|first=Ronald L.|last=Rardin|year=1997|publisher=Prentice Hall|pages=919|isbn=978-0-02-398415-0}}
==External links==
{{wikibooks|Operations Research|The Simplex Method}}
* [http://www.isye.gatech.edu/~spyros/LP/LP.html An Introduction to Linear Programming and the Simplex Algorithm] by Spyros Reveliotis of the Georgia Institute of Technology.
* Greenberg, Harvey J., ''Klee–Minty Polytope Shows Exponential Time Complexity of Simplex Method'', University of Colorado at Denver (1997)
* [https://www.mathstools.com/section/main/simplex_online_calculator Mathstools] Simplex Calculator from www.mathstools.com
* [http://www.phpsimplex.com/simplex/simplex.htm?l=en PHPSimplex: online tool to solve Linear Programming Problems] by Daniel Izquierdo and Juan José Ruiz of the University of Málaga (UMA, Spain)
* [https://web.archive.org/web/20180513100217/http://simplex-m.com/ simplex-m] Online Simplex Solver
{{Optimization algorithms|convex}}
{{Mathematical programming}}
{{Authority control}}
{{DEFAULTSORT:Simplex Algorithm}}
[[Category:Optimization algorithms and methods]]
[[Category:Exchange algorithms]]
[[Category:Linear programming]]
[[Category:Computer-related introductions in 1947]]