Unfortunately, in most applications, the conditional probability of failure is not easy to compute efficiently. There are two standard and related techniques for dealing with this:
* '''Using a conditional expectation:''' Many probabilistic proofs work as follows: they implicitly define a random variable ''Q'', show that (i) the expectation of ''Q'' is at most (or at least) some threshold value, and (ii) in any outcome where ''Q'' is at most (at least) this threshold, the outcome is a success. Then (i) implies that there exists an outcome where ''Q'' is at most the threshold, and this together with (ii) implies that there is an outcome that is a success.
In this case, to keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of ''Q'' below (or above) the threshold. To do this, instead of computing the conditional probability of failure, the algorithm computes the conditional expectation of ''Q'' and proceeds accordingly: at each interior node, there is some child whose conditional expectation is at most (at least) the node's conditional expectation; the algorithm moves from the current node to such a child, thus keeping the conditional expectation below (above) the threshold.
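The following is a minimal sketch of this technique. It assumes, purely for illustration, the MAX-CUT setting (not necessarily the example used elsewhere in this article): ''Q'' is the number of edges cut by a 2-coloring of the vertices, a uniformly random coloring cuts each edge with probability 1/2, so the threshold is |E|/2, and the algorithm fixes one vertex at a time while never letting the conditional expectation of ''Q'' drop.

<syntaxhighlight lang="python">
def conditional_expectation(edges, side):
    """Conditional expectation of the number of cut edges, given the
    partial assignment `side` (vertex -> 0 or 1); unassigned vertices
    are treated as independent fair coin flips."""
    total = 0.0
    for u, v in edges:
        if u in side and v in side:
            total += 1.0 if side[u] != side[v] else 0.0
        else:
            total += 0.5  # an edge with a still-random endpoint is cut with probability 1/2
    return total

def derandomized_max_cut(vertices, edges):
    """Method of conditional expectations: at each step, move to a child
    (a way of fixing the next vertex) whose conditional expectation is at
    least the current node's conditional expectation."""
    side = {}
    for v in vertices:
        side[v] = 0
        exp0 = conditional_expectation(edges, side)
        side[v] = 1
        exp1 = conditional_expectation(edges, side)
        side[v] = 0 if exp0 >= exp1 else 1
    return side

# Illustrative run on a 4-cycle; the final cut has at least |E|/2 = 2 edges.
vertices = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
assignment = derandomized_max_cut(vertices, edges)
cut_size = sum(1 for u, v in edges if assignment[u] != assignment[v])
print(assignment, cut_size)
</syntaxhighlight>

Because the current conditional expectation is the average of the two children's conditional expectations, the larger child is always at least as large, so the final (fully determined) value of ''Q'' is at least the initial expectation |E|/2.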