Two different situations exemplify the interest in maximizing the probability of stopping on the last specific event.
# Suppose a car is advertised for sale to the highest bidder (best "offer"). Let <math>n</math> potential buyers respond and ask to see the car. Each insists upon an immediate decision from the seller to accept the bid or not. Call a bid ''interesting'', and code it 1, if it is better than all preceding bids, and code it 0 otherwise. The bids form a [[random sequence]] of 0s and 1s. Only the 1s interest the seller, who may fear that each successive 1 might be the last. It follows from the definition that the very last 1 is the highest bid. Maximizing the probability of selling on the last 1 therefore means maximizing the probability of selling ''best''. (A short simulation sketch of this example follows the list.)
# A physician, using a special treatment, may use the code 1 for a successful treatment and 0 otherwise. The physician treats a sequence of <math>n</math> patients in the same way, and wants to minimize suffering while treating every responsive patient in the sequence. Stopping on the last 1 in such a random sequence of 0s and 1s would achieve this objective. Since the physician is no prophet, the objective is to maximize the probability of stopping on the last 1. (See [[Compassionate use]].)
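The bidding example can be made concrete with a small simulation. The sketch below (function names and parameters are illustrative, not taken from the literature) assumes the usual modelling of Example 1: the <math>n</math> bids arrive in uniformly random order, so the <math>k</math>-th bid is a record, and hence coded 1, with probability <math>1/k</math>, independently of the other indicators. It then applies the odds strategy: sum the odds <math>r_k=p_k/(1-p_k)</math> backwards from the last index until they reach 1, and accept the first record from that index on.

<syntaxhighlight lang="python">
import random

def odds_strategy_threshold(p):
    """Return the 1-based index s at which the odds r_j = p_j/(1-p_j),
    summed backwards from the last index, first reach 1 (s = 1 if they never do)."""
    total, s = 0.0, 1
    for j in range(len(p) - 1, -1, -1):
        r_j = float("inf") if p[j] >= 1.0 else p[j] / (1.0 - p[j])
        total += r_j
        if total >= 1.0:
            s = j + 1
            break
    return s

def simulate_bidding(n, trials=100_000, seed=0):
    """Estimate the probability that the odds strategy sells to the best bidder
    when the n bids arrive in uniformly random order."""
    rng = random.Random(seed)
    p = [1.0 / k for k in range(1, n + 1)]   # k-th bid is a record with probability 1/k
    s = odds_strategy_threshold(p)
    wins = 0
    for _ in range(trials):
        bids = [rng.random() for _ in range(n)]
        best_so_far, accepted = float("-inf"), None
        for k, b in enumerate(bids, start=1):
            is_record = b > best_so_far
            best_so_far = max(best_so_far, b)
            if k >= s and is_record:
                accepted = b                  # stop on the first record from index s on
                break
        if accepted is not None and accepted == max(bids):
            wins += 1
    return s, wins / trials

if __name__ == "__main__":
    s, win_rate = simulate_bidding(n=20)
    print(f"stop threshold s = {s}, estimated probability of selling best = {win_rate:.3f}")
</syntaxhighlight>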
== Definitions ==
==Applications==
[[File:Secretary problem.svg|thumb]]
Applications range from medical questions in [[clinical trial]]s to sales problems, [[secretary problem]]s, [[portfolio (finance)|portfolio]] selection, (one-way) search strategies, trajectory problems and the [[parking problem]], as well as problems in online maintenance and others.
There exists, in the same spirit, an Odds Theorem for continuous-time arrival processes with [[independent increments]], such as the [[Poisson process]] ({{harvnb|Bruss|2000}}). In some cases the odds are not necessarily known in advance (as in Example 2 above), so that the odds algorithm cannot be applied directly. In this case each step can use [[sequential estimate]]s of the odds. This is meaningful if the number of unknown parameters is not large compared with the number <math>n</math> of observations. The question of optimality is then more complicated, however, and requires additional study. Generalizations of the odds algorithm allow for different rewards for failing to stop and for wrong stops, as well as for replacing the independence assumptions by weaker ones.
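As an illustration of such a plug-in use of sequentially estimated odds, the sketch below treats the i.i.d. setting of Example 2, in which all <math>n</math> trials share one unknown success probability. The Laplace-smoothed estimator and the particular stopping form used (stop on a success as soon as the estimated odds of the remaining trials sum to less than 1, which, with the true odds plugged in, reproduces the odds strategy) are illustrative choices for this sketch, not the specific sequential scheme analysed in the literature; all names are made up for the example.

<syntaxhighlight lang="python">
import random

def stop_on_this_success(outcomes_so_far, n):
    """Plug-in stopping rule: after observing a success at step k, stop if the
    estimated odds of the remaining n-k trials sum to less than 1."""
    k = len(outcomes_so_far)
    if outcomes_so_far[-1] != 1:
        return False
    p_hat = (sum(outcomes_so_far) + 1) / (k + 2)   # Laplace-smoothed estimate of p
    remaining_odds = (n - k) * p_hat / (1.0 - p_hat)
    return remaining_odds < 1.0

def run_sequence(p_true, n, rng):
    """Simulate one sequence of n independent trials and report whether the
    plug-in rule stopped exactly on the last success."""
    outcomes, stopped_at = [], None
    for k in range(1, n + 1):
        outcomes.append(1 if rng.random() < p_true else 0)
        if stopped_at is None and stop_on_this_success(outcomes, n):
            stopped_at = k
    last_success = max((i + 1 for i, x in enumerate(outcomes) if x == 1), default=None)
    return stopped_at is not None and stopped_at == last_success

if __name__ == "__main__":
    rng = random.Random(1)
    n, p_true, trials = 30, 0.2, 50_000
    wins = sum(run_sequence(p_true, n, rng) for _ in range(trials))
    print(f"estimated probability of stopping on the last success: {wins / trials:.3f}")
</syntaxhighlight>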
== Variations ==
When <math>r=2 </math>, {{harvnb|Ano|Kakinuma|Miyoshi|2010}} showed that the tight lower bound of the win probability is equal to <math> e^{-1}+ e^{-\frac{3}{2}}. </math>
For general positive integer <math>r</math>, {{harvnb|Matsui|Ano|2016}} obtained tight lower bounds of the win probability.
When <math> r=3,4,5 </math>, these tight lower bounds are equal to <math> e^{-1}+ e^{-\frac{3}{2}}+e^{-\frac{47}{24}} </math>, <math> e^{-1}+e^{-\frac{3}{2}}+e^{-\frac{47}{24}}+e^{-\frac{2761}{1152}} </math> and <math> e^{-1}+e^{-\frac{3}{2}}+e^{-\frac{47}{24}}+e^{-\frac{2761}{1152}}+e^{-\frac{4162637}{1474560}}, </math> respectively.
For further numerical cases, see {{harvnb|Matsui|Ano|2016}}.
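The closed-form bounds above are straightforward to evaluate numerically. The following short snippet is an illustrative aid only: the exponents are copied from the expressions quoted above, and the <math>r=1</math> term <math>e^{-1}</math> is the classical bound for a single stop.

<syntaxhighlight lang="python">
from math import exp

# Exponents of the successive terms in the closed-form bounds quoted above;
# the first term, exp(-1) = 1/e, is the classical bound for a single stop.
exponents = [1, 3/2, 47/24, 2761/1152, 4162637/1474560]

bound = 0.0
for r, c in enumerate(exponents, start=1):
    bound += exp(-c)
    print(f"r = {r}: tight lower bound ≈ {bound:.4f}")
</syntaxhighlight>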
== See also ==
== References ==
*{{cite journal |last1=Ano |first1=K.|first2=H. |last2=Kakinuma |first3=N. |last3=Miyoshi |title=Odds theorem with multiple selection chances |journal=Journal of Applied Probability |volume=47 |year=2010 |issue=4|pages=1093–1104 |doi=10.1239/jap/1294170522|s2cid=17598431|doi-access=free }}
* {{cite journal | last=Bruss | first=F. Thomas | authorlink=Franz Thomas Bruss | title=Sum the odds to one and stop | journal=The Annals of Probability | publisher=Institute of Mathematical Statistics | volume=28 | issue=3 | year=2000 | issn=0091-1798 | doi=10.1214/aop/1019160340 | pages=1384–1391 | doi-access=free }}
*{{wikicite |ref={{SfnRef|Ferguson|2008}} |reference=[[Thomas S. Ferguson|T. S. Ferguson]]: (2008, unpublished)}}{{sfn whitelist|CITEREFFerguson2008}}
*{{cite journal |first1=F. T. |last1=Bruss |first2=D. |last2=Paindaveine |title=Selecting a sequence of last successes in independent trials |journal=Journal of Applied Probability |volume=37 |pages=389–399 |year=2000 |issue=2 |doi=10.1239/jap/1014842544 |url=https://mpra.ub.uni-muenchen.de/21166/1/MPRA_paper_21166.pdf}}
*{{Cite journal|last1=Gilbert |first1=J |last2=Mosteller |first2=F |title= Recognizing the Maximum of a Sequence |journal=Journal of the American Statistical Association |volume=61 |pages=35–73 |year=1966 |issue=313 |doi=10.2307/2283044 |jstor=2283044 }}
*{{cite journal |last1=Matsui |first1=T |last2=Ano | first2=K |title=A note on a lower bound for the multiplicative odds theorem of optimal stopping | journal=Journal of Applied Probability |volume=51 |pages=885–889 |year=2014 |issue=3 |doi=10.1239/jap/1409932681 |doi-access=free }}
*{{cite journal |last1=Matsui |first1=T |last2=Ano | first2=K |title=Lower bounds for Bruss' odds problem with multiple stoppings |journal=[[Mathematics of Operations Research]] |volume=41 |pages=700–714 |year=2016 |issue=2 |doi=10.1287/moor.2015.0748 |arxiv=1204.5537 |s2cid=31778896 }}
*{{cite journal |last1=Matsui |first1=T |last2=Ano | first2=K |title=Compare the ratio of symmetric polynomials of odds to one and stop |journal=Journal of Applied Probability |volume=54 |pages=12–22 |year=2017 |doi=10.1017/jpr.2016.83 |s2cid=41639968 |url=http://t2r2.star.titech.ac.jp/cgi-bin/publicationinfo.cgi?q_publication_content_number=CTT100773751 }}
*{{cite journal |last1=Hsiao |first1=Shoo-Ren |last2=Yang |first2=Jiing-Ru |title=Selecting the Last Success in Markov-Dependent Trials |journal=[[Journal of Applied Probability]] |volume=39 |year=2002 |pages=271–281 |url=https://www.cambridge.org/core/journals/journal-of-applied-probability/article/selecting-the-last-success-in-markovdependent-trials/09D192C389B4BA5D4E2CA678E11B31CC}}