Unfortunately, in most applications, the conditional probability of failure is not easy to compute efficiently. There are two standard and related techniques for dealing with this:
 
* '''Using a conditional expectation:''' Many probabilistic proofs work as follows: they implicitly define a random variable ''Q'', show that (i) the expectation of ''Q'' is at most (or at least) some threshold value, and (ii) in any outcome where ''Q'' is at most (at least) this threshold, the outcome is a success. Then (i) implies that there exists an outcome where ''Q'' is at most (at least) the threshold, and this and (ii) imply that there is an outcome that is a success. (In the example above, ''Q'' is the number of tails, which should be at least the threshold 1.5. In many applications, ''Q'' is the number of "bad" events (not necessarily disjoint) that occur in a given outcome, where each bad event corresponds to one way the experiment can fail, and the expected number of bad events that occur is less than 1.)
 
In this case, to keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of ''Q'' below (or above) the threshold. To do this, instead of computing the conditional probability of failure, the algorithm computes the conditional expectation of ''Q'' and proceeds accordingly: at each interior node, there is some child whose conditional expectation is at most (at least) the node's conditional expectation; the algorithm moves from the current node to such a child, thus keeping the conditional expectation below (above) the threshold.
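
For instance, in the coin-flipping example above, the conditional expectation of ''Q'' (the number of tails) after some flips have been fixed is the number of tails so far plus half the number of remaining flips, by linearity of expectation. The following Python code is a minimal illustrative sketch of this procedure, not part of the article's formal presentation; the function name and parameters are hypothetical. It descends the tree of outcomes, at each step computing the conditional expectation of ''Q'' for both children and moving to a child whose conditional expectation is at least the current node's.

<syntaxhighlight lang="python">
from fractions import Fraction

def derandomize_coin_flips(n, threshold):
    """Deterministically fix n fair coin flips so that the number of
    tails Q is at least `threshold`, provided E[Q] = n/2 >= threshold.

    After fixing some flips, the conditional expectation of Q is
        (tails so far) + (remaining flips) / 2,
    since each remaining fair flip contributes 1/2 in expectation.
    """
    assert Fraction(n, 2) >= threshold, "probabilistic proof needs E[Q] >= threshold"
    outcome = []
    tails = 0
    for i in range(n):
        remaining = n - i - 1
        # Conditional expectation of Q for each child of the current node.
        e_if_tails = Fraction(tails + 1) + Fraction(remaining, 2)
        e_if_heads = Fraction(tails) + Fraction(remaining, 2)
        # Move to a child whose conditional expectation is at least the
        # current node's.  (For fair coins the tails child never has the
        # smaller conditional expectation, so this example degenerately
        # chooses tails every time; in typical applications, such as
        # counting "bad" events, the two children genuinely differ.)
        if e_if_tails >= e_if_heads:
            outcome.append("T")
            tails += 1
        else:
            outcome.append("H")
    assert tails >= threshold
    return outcome

print(derandomize_coin_flips(3, Fraction(3, 2)))  # ['T', 'T', 'T']: at least 2 tails
</syntaxhighlight>

The sketch uses exact rational arithmetic (<code>Fraction</code>) so that comparisons between conditional expectations are never spoiled by floating-point rounding; in applications where the conditional expectations are irrational or hard to compute exactly, one instead compares efficiently computable bounds on them, which is the role played by pessimistic estimators.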