Method of conditional probabilities
Given any undirected [[Graph (mathematics)|graph]] ''G'' = (''V'', ''E''), the [[Max cut]] problem is to color each vertex of the graph with one of two colors (say black or white) so as to maximize the number of edges whose endpoints have different colors. (Say such an edge is ''cut''.)
 
'''Max-Cut Lemma:''' ''In any graph ''G'' = (''V'', ''E''), at least |''E''|/2 edges can be cut.''
 
<blockquote>'''Probabilistic proof.''' Color each vertex black or white by flipping a fair coin. For any edge ''e'' in ''E'', the probability that it is cut is 1/2 (its two endpoints receive different colors with probability 1/2). Thus, by [[Expected value#Linearity|linearity of expectation]], the expected number of edges cut is |''E''|/2. Thus, there exists a coloring that cuts at least |''E''|/2 edges. ''QED''</blockquote>
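The expectation is easy to check numerically. Below is a minimal sketch in Python (the example graph and the function name are illustrative choices, not from the lemma itself): it colors each vertex by a fair coin flip and averages the cut size over many trials.

<syntaxhighlight lang="python">
import random

def random_cut_size(vertices, edges):
    # Color each vertex black (0) or white (1) by a fair coin flip,
    # then count the edges whose endpoints got different colors.
    color = {v: random.randrange(2) for v in vertices}
    return sum(1 for u, v in edges if color[u] != color[v])

# Example: a 4-cycle, so |E| = 4 and the expected cut size is |E|/2 = 2.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
trials = 100_000
average = sum(random_cut_size(V, E) for _ in range(trials)) / trials
print(average)  # approaches 2.0 = |E|/2
</syntaxhighlight>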
 
=== The method of conditional probabilities with conditional expectations ===
The following randomized process constructs an independent set ''S'' in ''G'':
1. Initialize ''S'' to be the empty set.
2. For each vertex ''u'' in ''V'', in a random order:
3. If no neighbors of ''u'' are in ''S'', add ''u'' to ''S''.
4. Return ''S''.
Clearly the process computes an independent set. Any vertex ''u'' that is considered before all of its neighbors will be added to ''S''. Thus, letting ''d''(''u'') denote the degree of ''u'', the probability that ''u'' is added to ''S'' is at least 1/(''d''(''u'')+1). By [[Expected value#Linearity|linearity of expectation]], the expected size of ''S'' is at least
 
: <math>\sum_{u\in V} \frac{1}{d(u)+1} ~\ge~\frac{|V|}{D+1}.</math>
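For concreteness, here is a sketch of this randomized process in Python, assuming the graph is given as a hypothetical <code>adj</code> mapping from each vertex to the set of its neighbors.

<syntaxhighlight lang="python">
import random

def random_greedy_independent_set(adj):
    """Consider the vertices in a uniformly random order, adding a vertex
    to S whenever none of its neighbors is already in S.  By the argument
    above, E[|S|] >= sum over u of 1/(d(u)+1)."""
    order = list(adj)
    random.shuffle(order)                      # random consideration order
    S = set()
    for u in order:
        if not any(w in S for w in adj[u]):    # no neighbor of u is in S
            S.add(u)
    return S

# Example: a path on 4 vertices.
print(random_greedy_independent_set({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
</syntaxhighlight>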
We will replace each random step by a deterministic step that keeps the conditional expectation of ''Q'' at or above |''V''|/(''D''+1). This will ensure a successful outcome, that is, one in which the independent set ''S'' has size at least |''V''|/(''D''+1), realizing the bound in Turán's theorem.
 
Given that the first ''t'' steps have been taken, let ''S''<sup>(''t'')</sup> denote the vertices added so far. Let ''R''<sup>(''t'')</sup> denote those vertices that have not yet been considered, and that have no neighbors in ''S''<sup>(''t'')</sup>. Given the first ''t'' steps, following the reasoning in the original proof, any given vertex ''w'' in ''R''<sup>(''t'')</sup> has conditional probability at least 1/(''d''(''w'')+1) of being added to ''S'', so the conditional expectation of ''Q'' is ''at least''
 
: <math>|S^{(t)}| ~+~ \sum_{w\in R^{(t)}} \frac{1}{d(w)+1}. </math>
 
Let ''Q''<sup>(''t'')</sup> denote the above quantity, which is called a '''pessimistic estimator''' for the conditional expectation.
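The pessimistic estimator is straightforward to compute from the algorithm's current state. The helper below is an illustrative sketch, again assuming the hypothetical <code>adj</code> neighbor-set representation, with <code>S</code> the current independent set and <code>considered</code> the set of vertices examined so far.

<syntaxhighlight lang="python">
def pessimistic_estimator(adj, S, considered):
    """Q^(t) = |S| + sum over w in R^(t) of 1/(d(w)+1), where R^(t) is the
    set of not-yet-considered vertices with no neighbor in S."""
    R = [w for w in adj
         if w not in considered and not any(x in S for x in adj[w])]
    return len(S) + sum(1.0 / (len(adj[w]) + 1) for w in R)
</syntaxhighlight>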
 
The proof showed that the pessimistic estimator is initially at least |''V''|/(''D''+1). (That is, ''Q''<sup>(0)</sup> ≥ |''V''|/(''D''+1).) The algorithm will make each choice to keep the pessimistic estimator from decreasing, that is, so that ''Q''<sup>(''t''+1)</sup> ≥ ''Q''<sup>(''t'')</sup> for each ''t''. Since the pessimistic estimator is a lower bound on the conditional expectation, this will ensure that the conditional expectation stays at or above |''V''|/(''D''+1), which in turn will ensure that the conditional probability of failure stays below 1.
 
Let ''u'' be the vertex considered by the algorithm in the next ((''t''+1)-st) step.
 
If ''u'' already has a neighbor in ''S'', then ''u'' is not added to ''S'' and (by inspection of ''Q''<sup>(''t'')</sup>) the pessimistic estimator is unchanged. If ''u'' does ''not'' have a neighbor in ''S'', then ''u'' is added to ''S''.
 
By calculation, if ''u'' is chosen randomly from the remaining vertices, the expected increase in the pessimistic estimator is non-negative. [The calculation: Conditioned on choosing a vertex in ''R''<sup>(''t'')</sup>, the probability that a given term 1/(''d''(''w'')+1) is dropped from the sum in the pessimistic estimator is at most (''d''(''w'')+1)/|''R''<sup>(''t'')</sup>|, so the expected decrease in each term in the sum is at most 1/|''R''<sup>(''t'')</sup>|. There are |''R''<sup>(''t'')</sup>| terms in the sum. Thus, the expected decrease in the sum is at most 1. Meanwhile, the size of ''S'' increases by 1.]
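This calculation can be sanity-checked numerically. The snippet below (an illustrative sketch on a small hypothetical graph, repeating the helper above so it is self-contained) averages the change in the estimator over a uniformly random choice of vertex from ''R''<sup>(0)</sup>.

<syntaxhighlight lang="python">
def pessimistic_estimator(adj, S, considered):
    R = [w for w in adj
         if w not in considered and not any(x in S for x in adj[w])]
    return len(S) + sum(1.0 / (len(adj[w]) + 1) for w in R)

# Hypothetical example graph (adjacency sets); no steps taken yet (t = 0).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
S, considered = set(), set()
q0 = pessimistic_estimator(adj, S, considered)
eligible = [u for u in adj
            if u not in considered and not any(x in S for x in adj[u])]
changes = [pessimistic_estimator(adj, S | {u}, considered | {u}) - q0
           for u in eligible]
# The average change is non-negative; on this graph it is exactly 0,
# up to floating-point rounding.
print(sum(changes) / len(changes))
</syntaxhighlight>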
 
Thus, there must exist some choice of ''u'' that keeps the pessimistic estimator from decreasing.
The algorithm below chooses each vertex ''u'' to maximize the resulting pessimistic estimator. By the previous considerations, this keeps the pessimistic estimator from decreasing and guarantees a successful outcome.
 
Below, ''N''<sup>(''t'')</sup>(''u'') denotes the neighbors of ''u'' in ''R''<sup>(''t'')</sup> (that is, neighbors of ''u'' that are neither in ''S'' nor have a neighbor in ''S'').
1. Initialize ''S'' to be the empty set.
2. While there exists a not-yet-considered vertex ''u'' with no neighbor in ''S'':
3. Among such vertices, add to ''S'' a vertex ''u'' that maximizes the resulting pessimistic estimator (see the sketch below).
4. Return ''S''.
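A compact Python sketch of the whole derandomized algorithm (function names are illustrative; <code>adj</code> as before) simply re-evaluates the pessimistic estimator for each eligible vertex and keeps a maximizer.

<syntaxhighlight lang="python">
def derandomized_independent_set(adj):
    """Greedy derandomization by the method of conditional probabilities:
    at each step, pick the eligible vertex whose addition maximizes the
    pessimistic estimator Q, so Q never decreases and the final S has
    size at least Q^(0) = sum over u of 1/(d(u)+1)."""
    def estimator(S, considered):
        # Q = |S| + sum over not-yet-considered vertices w with no
        # neighbor in S of 1/(d(w)+1).
        R = [w for w in adj
             if w not in considered and not any(x in S for x in adj[w])]
        return len(S) + sum(1.0 / (len(adj[w]) + 1) for w in R)

    S, considered = set(), set()
    while True:
        eligible = [u for u in adj
                    if u not in considered
                    and not any(x in S for x in adj[u])]
        if not eligible:
            break
        u = max(eligible, key=lambda v: estimator(S | {v}, considered | {v}))
        S.add(u)
        considered.add(u)
    return S
</syntaxhighlight>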
A second, simpler algorithm achieves the same guarantee by always choosing a vertex of minimum degree in the remaining graph (a Python sketch follows the listing):
1. Initialize ''S'' to be the empty set.
2. While the remaining graph is not empty:
3. Add a vertex ''u'' to ''S'', where ''u'' has minimum degree in the ''remaining'' graph.
4. Delete ''u'' and all of its neighbors from the graph.
5. Return ''S''.
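This form is easy to implement directly; the sketch below (illustrative names, with <code>adj</code> as before) maintains the remaining graph explicitly.

<syntaxhighlight lang="python">
def min_degree_greedy(adj):
    """Repeatedly take a minimum-degree vertex of the remaining graph,
    then delete it and all of its neighbors from the graph."""
    remaining = {u: set(adj[u]) for u in adj}   # mutable copy of the graph
    S = set()
    while remaining:
        # Vertex of minimum degree in the remaining graph.
        u = min(remaining, key=lambda v: len(remaining[v]))
        S.add(u)
        to_delete = remaining[u] | {u}          # u and all of its neighbors
        for w in to_delete:
            for x in remaining[w]:
                if x not in to_delete:
                    remaining[x].discard(w)     # detach w from survivors
            del remaining[w]
    return S
</syntaxhighlight>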
 
In either algorithm, when a vertex ''u'' is added to ''S'', the net increase in the pessimistic estimator is
: <math>1 - \sum_{w\in N^{(t)}(u)\cup\{u\}} \frac{1}{d(w)+1},</math>
 
where ''N''<sup>(''t'')</sup>(''u'') denotes the neighbors of ''u'' in the remaining graph (that is, in ''R''<sup>(''t'')</sup>).
 
For the first algorithm, the net increase is non-negative because, by the choice of ''u'', the dropped terms sum to at most 1: a random choice of ''u'' would decrease the sum by at most 1 in expectation (by the calculation above), so the vertex that maximizes the resulting pessimistic estimator drops at most 1 as well.

For the second algorithm, the net increase is non-negative because, by the choice of ''u'',
: <math>\sum_{w\in N^{(t)}(u)\cup\{u\}} \frac{1}{d(w)+1} \le (d'(u)+1) \frac{1}{d'(u)+1} = 1 </math>,
 
where ''d''′(''u'') is the degree of ''u'' in the remaining graph.
 
== See also ==