{{short description|Algorithm that can return a valid solution to a problem even if interrupted}}
In [[computer science]], an '''anytime algorithm''' is an [[algorithm]] that can return a valid solution to a [[Computational problem|problem]] even if it is interrupted before it ends. The algorithm is expected to find better and better solutions the longer it keeps running.
Most algorithms run to completion: they provide a single answer after performing some fixed amount of computation. In some cases, however, the user may wish to terminate the algorithm prior to completion. The amount of computation required may be substantial, for example, and computational resources might need to be reallocated. Most conventional algorithms, if interrupted before completion, provide no useful solution information. Anytime algorithms, by contrast, are able to return a partial answer, whose quality depends on the amount of computation they were able to perform. The answer generated by anytime algorithms is an approximation of the correct answer.
== Names ==
An anytime algorithm may be also called an "interruptible algorithm". They are different from contract algorithms, which must declare a time in advance; in an anytime algorithm, a process can just announce that it is terminating.<ref name="Hendler">{{cite book |editor-first=James A. |editor-last=Hendler |title=Artificial Intelligence Planning Systems: Proceedings of the First Conference (AIPS 92) |publisher=Elsevier |orig-date=1992 |date=2014 |isbn=978-0-08-049944-4 |url={{GBurl|tgujBQAAQBAJ|pg=PR5}}}}</ref>
== Goals ==
The goal of anytime algorithms is to give [[Hybrid intelligent system|intelligent systems]] the ability to produce results of better quality in return for turnaround time.
What makes anytime algorithms unique is their ability to return many possible outcomes for any given input.
Anytime algorithms are often used for large decision set problems.<ref name="Horsch"/> An ordinary algorithm for such problems would generally not provide useful information unless it is allowed to finish.<ref name="Bender">{{cite book |first=Edward A. |last=Bender |title=Mathematical Methods In Artificial Intelligence |publisher=Wiley |date=1996 |isbn=978-0-8186-7200-2 }}</ref> While this may sound similar to [[dynamic programming]], the difference is that an anytime algorithm is fine-tuned through random adjustments rather than sequential ones.
Anytime algorithms are designed so that they can be stopped at any point and return the best solution found so far.
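The general pattern can be illustrated with a short sketch. In the following Python example the objective function, the random neighbourhood move, and the time budget are placeholders chosen only for illustration; the essential point is that the best result found so far is always available, so stopping the loop early still yields a valid answer.
<syntaxhighlight lang="python">
import random
import time

def anytime_maximize(objective, initial, neighbor, time_budget):
    """Keep improving a candidate solution until the time budget runs out.

    The best solution found so far is always available, so stopping the
    loop early still yields a valid, if approximate, answer.
    """
    best = initial
    best_value = objective(best)
    deadline = time.monotonic() + time_budget
    while time.monotonic() < deadline:
        candidate = neighbor(best)       # random local adjustment
        value = objective(candidate)
        if value > best_value:           # keep only improvements
            best, best_value = candidate, value
    return best, best_value

# Toy usage: maximize -(x - 3)^2 starting from x = 0.
# A longer time budget generally gives an answer closer to x = 3.
best, value = anytime_maximize(
    objective=lambda x: -(x - 3.0) ** 2,
    initial=0.0,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    time_budget=0.1,                     # seconds
)
print(best, value)
</syntaxhighlight>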
== Decision trees ==
When the decision-maker has to act, there must be some ambiguity and some idea about how to resolve that ambiguity. This idea must be translatable to a state-to-action diagram.<ref name="Horsch"/>

== Performance profile ==
The performance profile estimates the quality of the results based on the input and the amount of time that is allotted to the algorithm.<ref>Grass</ref> The better the estimate, the sooner the result would be found.<ref>Grass</ref> Some systems have a larger database that gives the probability that the output is the expected output.<ref>Grass</ref> One algorithm can have several performance profiles.<ref>Teije</ref> Performance profiles are usually constructed with mathematical statistics over representative cases. For example, in the [[travelling salesman problem]], the performance profile was generated using a user-defined special program to generate the necessary statistics.<ref name="Hendler"/> In this example, the performance profile is the mapping of time to the expected results.<ref name="Hendler"/> This quality can be measured in several ways:
*certainty: where probability of correctness determines quality<ref name="Hendler"/>
*accuracy: where error bound determines quality<ref name="Hendler"/>
*specificity: where the amount of particulars determines quality<ref name="Hendler"/>
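A minimal sketch of how such a profile might be represented and queried follows; the time–quality pairs are invented solely for illustration and would in practice come from statistics gathered over representative runs.
<syntaxhighlight lang="python">
from bisect import bisect_right

# A performance profile built from representative runs: each pair maps an
# amount of computation time (seconds) to the solution quality expected by
# then (0.0 to 1.0). The numbers are invented purely for illustration.
PROFILE = [(0.1, 0.40), (0.5, 0.65), (1.0, 0.80), (2.0, 0.90), (5.0, 0.97)]

def expected_quality(allotted_time):
    """Quality the profile predicts for a given time allotment."""
    times = [t for t, _ in PROFILE]
    i = bisect_right(times, allotted_time)
    return 0.0 if i == 0 else PROFILE[i - 1][1]

def time_needed(target_quality):
    """Smallest profiled time whose expected quality reaches the target."""
    for t, q in PROFILE:
        if q >= target_quality:
            return t
    return None  # the profile never reaches the target

print(expected_quality(1.5))  # 0.80
print(time_needed(0.90))      # 2.0
</syntaxhighlight>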
==Algorithm prerequisites==
*Initial behavior: While some algorithms start with immediate guesses, others take a more calculated approach and have a start-up period before making any guesses.<ref>Teije</ref>
*Growth direction: How the quality of the program's output changes with increasing runtime.<ref>Teije</ref>
*Growth rate: Amount of increase with each step. Does it change constantly, such as in a [[bubble sort]], or does it change unpredictably?
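These properties can be made concrete by recording a quality trace while an anytime algorithm runs. The sketch below uses a hypothetical step function with diminishing returns; inspecting the resulting trace shows the initial behavior, the growth direction, and the growth rate.
<syntaxhighlight lang="python">
import time

def quality_trace(anytime_step, initial_quality, steps):
    """Record (elapsed time, quality) after each improvement step.

    Inspecting the trace reveals the properties listed above: how soon a
    usable first guess appears (initial behavior), whether quality rises
    monotonically (growth direction), and by how much per step (growth rate).
    """
    trace = []
    quality = initial_quality
    start = time.monotonic()
    for _ in range(steps):
        quality = anytime_step(quality)   # one more unit of computation
        trace.append((time.monotonic() - start, quality))
    return trace

# Hypothetical step function with diminishing returns: each step closes half
# of the remaining gap to a best possible quality of 1.0.
for elapsed, q in quality_trace(lambda q: q + 0.5 * (1.0 - q),
                                initial_quality=0.0, steps=5):
    print(f"{elapsed:.4f}s  quality={q:.3f}")
</syntaxhighlight>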
==References==
{{reflist}}
== Further reading ==
{{refbegin}}
*{{cite conference |last1=Boddy |first1=M. |last2=Dean |first2=T. |title=Solving time-dependent planning problems |book-title=Proceedings of the 11th international joint conference on Artificial intelligence |volume=2 |date=1989 |isbn= |pages=979–984 |url=https://dl.acm.org/doi/abs/10.5555/1623891.1623912 |id=Brown University CS-89-03}}
* {{cite journal |last1=Grass |first1=J. |last2=Zilberstein |first2=S. |title=Anytime Algorithm Development Tools |journal=ACM SIGART Bulletin |volume=7 |issue=2 Special Issue on Anytime Algorithms and Deliberation Scheduling |pages= 20–27|date=1996 |doi=10.1145/242587.242592 |s2cid=7670055 |url=https://dl.acm.org/doi/abs/10.1145/242587.242592|url-access=subscription }}
* {{cite conference |arxiv=1301.7384 |last1=Horsch |first1=M.C. |last2=Poole |first2=D. |title=An anytime algorithm for decision making under uncertainty |book-title=Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence |date=1998 |isbn=978-1-55860-555-8 |pages=246–255 |url=http://www.cs.ubc.ca/spider/poole/papers/randaccref.pdf}}
* {{cite tech report |first=E.J. |last=Horvitz |title=Reasoning about inference tradeoffs in a world of bounded resources |publisher=Medical Computer Science Group, Section on Medical Informatics, Stanford University |id=KSL-86-55 |date=March 1986 |url=}}
* {{cite journal |last1=Wallace |first1=R. |last2=Freuder |first2=E. |title=Anytime Algorithms for Constraint Satisfaction and SAT Problems |journal= ACM SIGART Bulletin |volume=7 |issue=2 |pages=7–10 |date=1995 |doi=10.1145/242587.242589 |s2cid=8250394 |doi-access=free }}
* {{cite thesis |first=S. |last=Zilberstein |title=Operational Rationality through Compilation of Anytime Algorithms |publisher=Computer Science Division, University of California at Berkeley |type=PhD |date=1993 |url=https://dl.acm.org/doi/abs/10.5555/193131 |id=UMX GAX94-08166}}
*{{cite journal |first=Shlomo |last=Zilberstein |title=Using Anytime Algorithms in Intelligent Systems |journal=AI Magazine |volume=17 |issue=3 |pages=73–83 |date=1996 |doi= |url=http://rbr.cs.umass.edu/shlomo/papers/Zaimag96.pdf}}
{{refend}}
[[Category:Artificial intelligence engineering]]
[[Category:Search algorithms]]