Method of conditional probabilities: Difference between revisions

Often, the [[probabilistic method]] is used to prove the existence of mathematical objects with some desired combinatorial properties. The proofs in that method work by showing that a random object, chosen from some probability distribution, has the desired properties with positive probability. Consequently, they are [[nonconstructive proof|nonconstructive]] — they don't explicitly describe an efficient method for computing the desired objects.
 
The method of conditional probabilities converts such a proof, in a "very precise sense", into an efficient [[deterministic algorithm]], one that is guaranteed to compute an object with the desired properties. That is, the method [[derandomization|derandomizes]] the proof. The basic idea is to replace each random choice in a random experiment by a deterministic choice, so as to keep the conditional probability of failure, given the choices so far, below 1.
 
The method is particularly relevant in the context of [[randomized rounding]] (which uses the probabilistic method to design [[approximation algorithm]]s).
In this case, the conditional probability of failure is not easy to calculate. Indeed, the original proof did not calculate the probability of failure directly; instead, the proof worked by showing that the expected number of cut edges was at least |''E''|/2.
 
Let random variable ''Q'' be the number of edges cut. To keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of ''Q'' at or above the threshold |''E''|/2. (This is because, as long as the conditional expectation of ''Q'' is at least |''E''|/2, there must be some still-reachable outcome where ''Q'' is at least |''E''|/2, so the conditional probability of reaching such an outcome is positive.) To keep the conditional expectation of ''Q'' at |''E''|/2 or above, the algorithm will, at each step, color the vertex under consideration so as to ''maximize'' the resulting conditional expectation of ''Q''. This suffices, because there must be some child whose conditional expectation is at least the current state's conditional expectation (and thus at least |''E''|/2).
 
Given that some of the vertices are colored already, what is this conditional expectation? Following the logic of the original proof, the conditional expectation of the number of cut edges is
4. Otherwise, color ''u'' black.
 
Because of its derivation, this deterministic algorithm is guaranteed to cut at least half the edges of the given graph. (This makes it a [[Maximum cut#Approximation algorithms|0.5-approximation algorithm for Max-cut]].)
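The coloring rule above can be sketched in code. This is a minimal illustrative implementation, not taken from the article: since each edge with an uncolored endpoint contributes 1/2 to the conditional expectation regardless of the current choice, maximizing the conditional expectation of ''Q'' reduces to coloring each vertex opposite to the majority color among its already-colored neighbors. The function names and graph representation here are assumptions for the sketch.

```python
def derandomized_max_cut(vertices, edges):
    """Deterministically 2-color vertices so that at least half the edges are cut.

    Sketch of the method of conditional probabilities for Max-cut:
    each vertex is colored so as to maximize the conditional expectation
    of the number of cut edges. Edges to uncolored neighbors contribute
    1/2 either way, so it suffices to disagree with the majority color
    among the already-colored neighbors.
    """
    neighbors = {u: set() for u in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    color = {}  # vertex -> "white" or "black"
    for u in vertices:
        white = sum(1 for v in neighbors[u] if color.get(v) == "white")
        black = sum(1 for v in neighbors[u] if color.get(v) == "black")
        # Taking the minority side cuts max(white, black) of the colored edges,
        # which keeps the conditional expectation at or above |E|/2.
        color[u] = "black" if white >= black else "white"
    return color

def cut_size(color, edges):
    """Number of edges whose endpoints received different colors."""
    return sum(1 for u, v in edges if color[u] != color[v])
```

By the derivation in the text, `cut_size(derandomized_max_cut(V, E), E)` is at least `len(E) / 2` for every input graph.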
 
== Example using pessimistic estimators ==
Clearly the process computes an independent set. Any vertex ''u'' that is considered before all of its neighbors will be added to ''S''. Thus, letting ''d''(''u'') denote the degree of ''u'', the probability that ''u'' is added to ''S'' is at least 1/(''d''(''u'')+1). By [[Expected value#Linearity|linearity of expectation]], the expected size of ''S'' is at least
 
: <math>\sum_{u\in V} \frac{1}{d(u)+1} ~\ge~ \frac{|V|}{D+1}.</math>
 
(The inequality above follows because 1/(''x''+1) is [[Convex function|convex]] in ''x'', so the left-hand side is minimized, subject to the sum of the degrees being fixed at 2|''E''|, when each ''d''(''u'') = ''D'' = 2|''E''|/|''V''|.) ''QED''
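The bound above can be met deterministically. As a hedged sketch (the article's own derandomization proceeds via a pessimistic estimator; the variant below is the standard minimum-degree greedy, which achieves the same Σ 1/(''d''(''u'')+1) guarantee): repeatedly take a vertex of minimum degree in the remaining graph and delete it together with its neighbors. The names here are illustrative assumptions.

```python
def greedy_independent_set(vertices, edges):
    """Return an independent set of size at least sum over u of 1/(d(u)+1).

    Minimum-degree greedy sketch: add a vertex of minimum degree in the
    remaining graph to the independent set, then remove it and all its
    neighbors, and repeat until no vertices remain.
    """
    neighbors = {u: set() for u in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    remaining = set(vertices)
    independent = set()
    while remaining:
        # Vertex of minimum degree within the remaining graph.
        u = min(remaining, key=lambda x: len(neighbors[x] & remaining))
        independent.add(u)
        remaining -= neighbors[u] | {u}
    return independent
```

Each greedy step deletes a minimum-degree vertex ''u'' and its remaining neighbors, at most ''d''(''u'')+1 vertices in all, while adding one vertex to the set, which is how the 1/(''d''(''u'')+1) accounting arises.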
 
== Further reading ==
The method of conditional probabilities is explained in several textbooks:
 
* {{Cite book |first1=Noga |last1= Alon |authorlink1=Noga Alon