Fundamental theorem of linear programming

 
==Proof==
Suppose, for the sake of contradiction, that <math>x^\ast \in \mathrm{int}(P)</math>. Then there exists some <math>\epsilon > 0</math> such that the ball of radius <math>\epsilon</math> centered at <math>x^\ast</math> is contained in <math>P</math>; that is, <math>B_{\epsilon}(x^\ast) \subset P</math>. Therefore,
 
:<math>x^\ast - \frac{\epsilon}{2} \frac{c}{||c||} \in P</math> and
 
:<math>c^T\left( x^\ast - \frac{\epsilon}{2} \frac{c}{||c||}\right) = c^T x^\ast - \frac{\epsilon}{2} \frac{c^T c}{||c||} = c^T x^\ast - \frac{\epsilon}{2} ||c|| < c^T x^\ast.</math>
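This strict decrease can be checked numerically. A minimal sketch, assuming a hypothetical cost vector <math>c</math>, interior point <math>x^\ast</math>, and radius <math>\epsilon</math> (none of which are fixed by the proof), verifies that stepping a distance <math>\epsilon/2</math> against <math>c</math> lowers the objective by exactly <math>\tfrac{\epsilon}{2}||c||</math>:

```python
import math

# Hypothetical data for illustration only: a cost vector c, a point
# x_star assumed to lie in the interior of P, and a ball radius eps
# small enough that B_eps(x_star) is contained in P.
c = [3.0, 1.0]
x_star = [0.5, 0.5]
eps = 0.2

norm_c = math.hypot(*c)                      # ||c||
# The perturbed point x_star - (eps/2) * c / ||c|| from the proof.
x_new = [x - (eps / 2) * ci / norm_c for x, ci in zip(x_star, c)]

def obj(x):
    """Linear objective c^T x."""
    return sum(ci * xi for ci, xi in zip(c, x))

# c^T x_new = c^T x_star - (eps/2) * ||c||, strictly below c^T x_star.
assert math.isclose(obj(x_new), obj(x_star) - (eps / 2) * norm_c)
assert obj(x_new) < obj(x_star)
```

The step direction <math>-c/||c||</math> is the steepest-descent direction for a linear objective, which is why the decrease is exactly <math>\tfrac{\epsilon}{2}||c||</math> regardless of the choice of interior point.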
 
Hence <math>x^\ast</math> is not an optimal solution, a contradiction. Therefore, <math>x^\ast</math> must live on the boundary of <math>P</math>. If <math>x^\ast</math> is not a vertex itself, it must be a convex combination of vertices of <math>P</math>, say <math>x_1, ..., x_t</math>. Then <math>x^\ast = \sum_{i=1}^t \lambda_i x_i</math> with <math>\lambda_i \geq 0</math> and <math>\sum_{i=1}^t \lambda_i = 1</math>. Observe that
 
:<math>0=c^{T}\left(x^{\ast}-\left(\sum_{i=1}^{t}\lambda_{i}x_{i}\right)\right)=c^{T}\left(\sum_{i=1}^{t}\lambda_{i}(x^{\ast}-x_{i})\right)=\sum_{i=1}^{t}\lambda_{i}(c^{T}x^{\ast}-c^{T}x_{i}).</math>

Since <math>x^{\ast}</math> is an optimal solution, all terms in the sum are nonpositive. Since the sum is equal to zero, each individual term must be equal to zero. Hence, <math>c^{T}x^{\ast}=c^{T}x_{i}</math> for each <math>x_i</math>, so every <math>x_i</math> is also optimal, and therefore all points on the face whose vertices are <math>x_1, ..., x_t</math> are optimal solutions.
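As a concrete illustration of this last step (with a hypothetical polytope and cost vector, not taken from the article), consider the unit square with cost <math>c = (1, 0)</math>: two vertices attain the minimum, and any convex combination of them, i.e. any point of the left edge, attains it as well:

```python
# Hypothetical example: P = conv{(0,0),(1,0),(1,1),(0,1)}, c = (1,0).
# The vertices (0,0) and (0,1) both minimize c^T x over P, and every
# convex combination of them (the face they span) is also optimal.
c = (1.0, 0.0)
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def obj(x):
    """Linear objective c^T x."""
    return sum(ci * xi for ci, xi in zip(c, x))

best = min(obj(v) for v in vertices)
optimal = [v for v in vertices if obj(v) == best]   # the optimal vertices

lam = 0.3   # any lambda in [0, 1] works
x_comb = tuple(lam * a + (1 - lam) * b
               for a, b in zip(optimal[0], optimal[1]))
assert obj(x_comb) == best   # the combination is optimal too
```

By linearity, <math>c^{T}x = \lambda\, c^{T}x_{1} + (1-\lambda)\, c^{T}x_{2}</math>, so equal vertex values force equality on the whole face, exactly as in the argument above.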
 
==References==