Odds algorithm


The odds-algorithm is a mathematical method to compute optimal strategies for a class of problems which belong to the ___domain of optimal stopping. Its solution determines the odds-strategy, and the importance of the odds-strategy lies in its optimality, as explained below.

The odds-algorithm applies to a class of problems called last-success problems. Formally, the objective is to maximize the probability of identifying, in a sequence of sequentially observed independent events, the last specific event. This identification must be done on-line, that is, at the time of observation; no recall of preceding observations is permitted. Usually, a specific event is defined by the decision maker as an event which is of true interest to him or her with a view to "stopping" in order to take a well-defined action. Such problems are encountered in several real-world situations.

Examples

Two different situations exemplify the interest in maximizing the probability to stop on a last specific event.

  1. Suppose a car is advertised for sale (best offer). n people respond and ask to see the car. Each insists on an immediate decision on whether his or her offer is accepted. Call an offer interesting, coded 1, if it is better than all preceding offers, and code it 0 otherwise. The offers form a random sequence of 0's and 1's. Only 1's are of interest to the seller, who with each 1 may fear that there will be no further 1's. It follows from the definition that the very last 1 is the highest offer. Maximizing the probability of selling on the last 1 therefore means maximizing the probability of selling at the best price.
  2. A physician using a special treatment may code a successful treatment 1 and an unsuccessful one 0. Treating a sequence of n patients with the same treatment, he or she wants to minimize unnecessary suffering and, at the same time, obtain all the successes in the sequence of patients. Stopping on the last 1 in such a random sequence of 0's and 1's realizes this objective. Since the physician has no prophetic power, the objective translates into finding a strategy that maximizes the probability of stopping on the last 1.

Definitions

Consider a sequence of n independent events. Associate with this sequence another sequence I_1, I_2, ..., I_n with values 1 or 0. Here I_k = 1 stands for the event that the kth observation is interesting (as defined by the decision maker), and I_k = 0 for non-interesting. Let p_k = P(I_k = 1) be the probability that the kth event is interesting. Further let q_k = 1 - p_k and r_k = p_k / q_k. Note that r_k represents the odds of the kth event turning out to be interesting, explaining the name of the odds-algorithm.
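
As a small illustration (a sketch only; the probability values below are arbitrary examples, not taken from the article), the quantities q_k and r_k follow directly from the p_k:

    # Illustrative values of p_k = P(I_k = 1); any p_k < 1 keeps the odds finite.
    p = [0.5, 0.4, 0.3, 0.2, 0.1]
    q = [1.0 - pk for pk in p]                 # q_k = 1 - p_k
    r = [pk / qk for pk, qk in zip(p, q)]      # r_k = p_k / q_k, the odds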

Algorithmic procedure of the odds-algorithm

The odds-algorithm sums up the odds in reverse order

  r_n + r_{n-1} + r_{n-2} + ...

until this sum reaches or exceeds the value 1 for the first time. If this happens at index s, it saves s and the corresponding sum

  R_s = r_n + r_{n-1} + ... + r_s.

If the sum of the odds does not reach 1, it sets s = 1. At the same time it computes

  Q_s = q_n q_{n-1} ... q_s.

The output is

  1. s, the stopping threshold,
  2. w = Q_s R_s, the win probability.
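
The procedure translates directly into code. The following Python sketch is one illustrative implementation (the function name and the representation of the input as a list of probabilities are choices made here, not prescribed by the article); it assumes p_k < 1 for every index that is actually summed, so that the odds stay finite:

    def odds_algorithm(p):
        # p[k-1] is p_k, the probability that the k-th of n independent events
        # is interesting. Returns (s, w): the stopping threshold s and the
        # win probability w = Q_s * R_s.
        n = len(p)
        R = 0.0          # running sum of odds r_n + r_{n-1} + ...
        Q = 1.0          # running product q_n * q_{n-1} * ...
        s = 1            # default threshold if the odds sum never reaches 1
        for k in range(n, 0, -1):            # k = n, n-1, ..., 1
            q_k = 1.0 - p[k - 1]
            r_k = p[k - 1] / q_k             # odds of the k-th event
            R += r_k
            Q *= q_k
            if R >= 1.0:                     # sum reaches or exceeds 1 for the first time
                s = k
                break
        return s, Q * R                      # stopping threshold and win probability

Note that the backward loop stops as soon as the odds sum reaches 1, which is why the number of operations is often well below n.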

Odds-strategy

The odds-strategy is the rule to observe the events one after the other and to stop on the first interesting event from index s onwards (if any), where s is the stopping threshold of output 1.
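
As a usage sketch (the helper name and the encoding of the observations as a 0/1 list are assumptions made here for illustration), the strategy is a simple one-pass rule once s is known:

    def apply_odds_strategy(observations, s):
        # observations[k-1] is 1 if the k-th event is interesting, else 0.
        # Stop on the first interesting event from index s onwards (if any).
        for k in range(s, len(observations) + 1):
            if observations[k - 1] == 1:
                return k          # the index at which the strategy stops
        return None               # no interesting event from s onwards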

The importance of the odds-strategy, and hence of the odds-algorithm, lies in the following odds-theorem.

Odds-theorem

The odds-theorem states that

  1. The odds-strategy is optimal, that is, it maximizes the probability of stopping on the last 1.
  2. The win probability of the odds-strategy equals w = Q_s R_s.
  3. If R_s ≥ 1, the win probability w is always at least 1/e = 0.367879..., and this lower bound is best possible.
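
As an illustrative check (reusing the hypothetical odds_algorithm sketch above; the choice p_k = 1/k corresponds to the classical secretary problem, in which the kth candidate is a relative record with probability 1/k), the algorithm reproduces the familiar 1/e behaviour:

    import math

    n = 100
    p = [1.0 / k for k in range(1, n + 1)]    # p_k = 1/k (secretary problem)
    s, w = odds_algorithm(p)
    # The backward sum of odds exceeds 1 well before k = 1, so p_1 = 1 causes
    # no division by zero here. Expected: s near n/e and w slightly above 1/e.
    print(s, w, 1.0 / math.e)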

Features of the odds-algorithm

The odds-algorithm computes the optimal strategy and the optimal win probability at the same time. Moreover, the number of operations of the odds-algorithm is (sub)linear in n, since the backward summation stops as soon as the odds sum reaches 1, which may happen well before the first index. Hence no quicker algorithm can possibly exist for all sequences, so that the odds-algorithm is, at the same time, optimal as an algorithm.

Source

The odds algorithm is due to F. T. Bruss (2000), who coined this name. It is also known as the Bruss algorithm (or Bruss strategy). Free implementations can be found on the web.

Applications

Applications range from medical questions in clinical trials to sales problems, secretary problems, portfolio selection, (one-way) search strategies, trajectory problems, the parking problem, and problems in on-line maintenance, among others.

There exists, in the same spirit, an odds-theorem for continuous-time arrival processes with independent increments, such as the Poisson process (Bruss (2000)). In some cases the odds are not necessarily known in advance (as in Example 2 above), so that the odds-algorithm cannot be applied directly. In this case each step can use sequential estimates of the odds. This is meaningful if the number of unknown parameters is not large compared with the number n of observations. The question of optimality is then more complicated, however, and requires additional study. Generalizations of the odds-algorithm allow for different rewards for failing to stop and for wrong stops, as well as for replacing the independence assumptions by weaker ones (Ferguson (2008)).
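
For the case of unknown odds mentioned above, one simple plug-in idea (an illustration only; the article does not specify a particular estimator, and this sketch makes no claim of optimality) is to treat the events as identical Bernoulli trials with unknown success probability, estimate that probability from the observations seen so far, and accept an observed 1 as soon as the estimated sum of the odds of the remaining events falls below 1:

    def sequential_odds_stop(observations):
        # Plug-in heuristic for i.i.d. 0/1 events with unknown success
        # probability (as in the clinical-trial example). Estimate p from the
        # data seen so far and stop on a 1 once the estimated sum of the odds
        # of the remaining events is below 1.
        n = len(observations)
        successes = 0
        for k in range(1, n + 1):
            successes += observations[k - 1]
            if observations[k - 1] != 1:
                continue
            if k == n:
                return k                      # a 1 at the very end is certainly the last 1
            p_hat = successes / k             # running estimate of the success probability
            if p_hat >= 1.0:
                continue                      # only 1's so far: estimated odds are infinite
            r_hat = p_hat / (1.0 - p_hat)     # estimated odds of a single future event
            if (n - k) * r_hat < 1.0:         # estimated sum of the remaining odds
                return k
        return None                           # never stopped before the end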

See also

References