In [[mathematics]] and [[computer science]], the '''method of conditional probabilities'''<ref name='spencer'>{{Citation
| title=Ten lectures on the probabilistic method
| last=Spencer|first=Joel H.|authorlink=Joel Spencer
| url=https://books.google.com/books?id=Kz0B0KkwfVAC
| year=1987
| publisher=SIAM
| isbn=978-0-89871-325-1}}</ref><ref name='raghavan'>{{Citation
| title=Probabilistic construction of deterministic algorithms: approximating packing integer programs
| first=Prabhakar | last=Raghavan | authorlink=Prabhakar Raghavan
| journal=[[Journal of Computer and System Sciences]]
| volume=37
| issue=2
| pages=130–143
| year=1988
| doi=10.1016/0022-0000(88)90003-7 | doi-access=free
}}</ref> is a systematic method for converting [[non-constructive]] probabilistic existence proofs into efficient [[Deterministic algorithm|deterministic algorithms]] that explicitly construct the desired object.<ref>[http://algnotes.info/on/background/probabilistic-method/method-of-conditional-probabilities/ The probabilistic method — method of conditional probabilities], blog entry by Neal E. Young, accessed 19/04/2012 and 14/09/2023.</ref>
The method is particularly relevant in the context of [[randomized rounding]] (which uses the probabilistic method to design [[approximation algorithm]]s).
== Overview ==
: ''We first show the existence of a provably good approximate solution using the [[probabilistic method]]... [We then] show that the probabilistic existence proof can be converted, in a very precise sense, into a deterministic approximation algorithm.''
[[File:Method of conditional probabilities.png|thumb|450px|right|The method of conditional probabilities]]
Unfortunately, in most applications, the conditional probability of failure is not easy to compute efficiently. There are two standard and related techniques for dealing with this:
=== Using a conditional expectation ===
In this case, to keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of ''Q'' below (or above) the threshold. To do this, instead of computing the conditional probability of failure, the algorithm computes the conditional expectation of ''Q'' and proceeds accordingly: at each interior node, there is some child whose conditional expectation is at most (at least) the node's conditional expectation; the algorithm moves from the current node to such a child, thus keeping the conditional expectation below (above) the threshold.
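The walk down the tree of partial assignments can be sketched as follows. This is a minimal illustration, not code from any cited source: it assumes the random choices are independent fair bits and that the conditional expectation can be evaluated exactly (here by brute force over the unset bits, so it is only feasible for tiny instances). The objective <code>score</code> is illustrative.

```python
from itertools import product

def cond_exp(score, prefix, n):
    """Brute-force conditional expectation of score, given the fixed bits in
    prefix, with the remaining n - len(prefix) bits uniform and independent."""
    rest = n - len(prefix)
    vals = [score(prefix + list(tail)) for tail in product((0, 1), repeat=rest)]
    return sum(vals) / len(vals)

def derandomize(score, n):
    """Fix the bits one at a time, always stepping to a child whose
    conditional expectation is at least the current node's; the final
    deterministic value is therefore >= the unconditional expectation."""
    prefix = []
    for _ in range(n):
        prefix.append(max((0, 1), key=lambda b: cond_exp(score, prefix + [b], n)))
    return prefix

# Illustrative objective: the number of unequal pairs of bits
# (the cut value of a 2-coloring of the complete graph on n vertices).
score = lambda x: sum(x[i] != x[j] for i in range(len(x)) for j in range(i + 1, len(x)))
n = 4
result = derandomize(score, n)
assert score(result) >= cond_exp(score, [], n)  # meets the expectation 3
```

In applications the brute-force <code>cond_exp</code> is replaced by a closed-form conditional expectation, as in the max-cut example below.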
=== Using a pessimistic estimator ===
== Example using conditional expectations ==
In this case, the conditional probability of failure is not easy to calculate. Indeed, the original proof did not calculate the probability of failure directly; instead, the proof worked by showing that the expected number of cut edges was at least |''E''|/2.
Let random variable ''Q'' be the number of edges cut. To keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of ''Q'' at or above the threshold |''E''|/2.
Given that some of the vertices are colored already, what is this conditional expectation? Following the logic of the original proof, the conditional expectation of the number of cut edges is
4. Otherwise, color ''u'' black.
Because of its derivation, this deterministic algorithm is guaranteed to cut at least half the edges of the given graph.
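The greedy coloring can be sketched as follows; this is a minimal illustration assuming the graph is given as a vertex list and an edge list (the demo graph is made up for the example, not taken from the article).

```python
def derandomized_max_cut(vertices, edges):
    """Color vertices one at a time, giving each vertex the color that cuts
    more edges to its already-colored neighbors.  This keeps the conditional
    expected number of cut edges at or above |E|/2."""
    color = {}
    for u in vertices:
        # count the already-colored neighbors of u of each color
        white = sum(1 for a, b in edges
                    if (a == u and color.get(b) == 'white')
                    or (b == u and color.get(a) == 'white'))
        black = sum(1 for a, b in edges
                    if (a == u and color.get(b) == 'black')
                    or (b == u and color.get(a) == 'black'))
        # coloring u opposite to the larger group cuts more edges,
        # which maximizes the conditional expectation
        color[u] = 'black' if white >= black else 'white'
    return color

# Illustrative graph: a triangle plus a pendant edge.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
coloring = derandomized_max_cut([1, 2, 3, 4], edges)
cut = sum(1 for a, b in edges if coloring[a] != coloring[b])
assert 2 * cut >= len(edges)  # at least half the edges are cut
```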
== Example using pessimistic estimators ==
Clearly the process computes an independent set. Any vertex ''u'' that is considered before all of its neighbors will be added to ''S''. Thus, letting ''d''(''u'') denote the degree of ''u'', the probability that ''u'' is added to ''S'' is at least 1/(''d''(''u'')+1). By [[Expected value#Linearity|linearity of expectation]], the expected size of ''S'' is at least
: <math>\sum_{u\in V} \frac{1}{d(u)+1} ~\ge~ \frac{|V|}{D+1}.</math>
(The inequality above follows because 1/(''x''+1) is [[Convex function|convex]] in ''x'', so the left-hand side is minimized, subject to the sum of the degrees being fixed at 2|''E''|, when each ''d''(''u'') = ''D'' = 2|''E''|/|''V''|.) ''QED''
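The random process can be sketched as follows; a minimal illustration assuming the graph is given as an adjacency dict (the 5-cycle used in the demo is made up for the example). The greedy adds each vertex whose neighbors are all still outside ''S'', which in particular includes every vertex considered before all of its neighbors.

```python
import random

def greedy_independent_set(adj, seed=0):
    """Consider the vertices in a uniformly random order, adding each vertex
    none of whose neighbors has been added yet.  Every vertex that precedes
    all of its neighbors is added, so E[|S|] >= sum over u of 1/(d(u)+1)."""
    order = list(adj)
    random.Random(seed).shuffle(order)
    S = set()
    for u in order:
        if not any(v in S for v in adj[u]):
            S.add(u)
    return S

# Illustrative graph: a 5-cycle, so every degree is 2 and D = 2.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
S = greedy_independent_set(adj)
assert all(v not in S for u in S for v in adj[u])  # S is independent
# The bound |V|/(D+1) = 5/3 holds in expectation; on this instance it holds
# for every order, since the greedy output is maximal and every maximal
# independent set of the 5-cycle has size 2.
assert len(S) >= len(adj) / (2 + 1)
```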
== References ==
{{reflist}}
== Further reading ==
The method of conditional probabilities is explained in several textbooks:
* {{Cite book |first1=Noga |last1=Alon |authorlink1=Noga Alon
| first2=Joel |last2=Spencer |authorlink2=Joel Spencer
| title=The Probabilistic Method
| series=Wiley-Interscience Series in Discrete Mathematics and Optimization
| publisher=John Wiley & Sons}}
* {{Cite book
| first1=Rajeev |last1=Motwani |authorlink1=Rajeev Motwani
| first2=Prabhakar |last2=Raghavan |authorlink2=Prabhakar Raghavan
| title=Randomized Algorithms
| publisher=Cambridge University Press
| year=1995}}
* {{Citation
| first=Vijay |last=Vazirani
| authorlink=Vijay Vazirani
| title=Approximation Algorithms
| publisher=Springer
| year=2001}}