[[Image:Eda mono-variant gauss iterations.svg|thumb|350px|Estimation of distribution algorithm. For each iteration ''i'', a population ''P'' is drawn at random from the distribution ''PDu''. The distribution parameters ''PDe'' are then estimated from the selected points ''PS''. The illustrated example optimizes a continuous objective function ''f(X)'' with a unique optimum ''O''. The sampling (following a normal distribution ''N'') concentrates around the optimum as the algorithm progresses.]]
''Estimation of distribution algorithms'' (EDAs), sometimes called ''probabilistic model-building genetic algorithms'' (PMBGAs), are [[stochastic optimization]] methods that guide the search for the optimum by building and sampling explicit probabilistic models of promising candidate solutions. Optimization is viewed as a series of incremental updates of a probabilistic model, starting with the model encoding the uniform distribution over admissible solutions and ending with the model that generates only the global optima.<ref>{{cite book|last1=Larrañaga|first1=Pedro|last2=Lozano|first2=Jose A.|title=Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation|date=2002|publisher=Springer|___location=Boston, MA|isbn=978-1-4615-1539-5}}</ref><ref>{{cite book|last1=Lozano|first1=Jose A.|last2=Larrañaga|first2=Pedro|last3=Inza|first3=Iñaki|last4=Bengoetxea|first4=Endika|title=Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms|date=2006|publisher=Springer|___location=Berlin|isbn=978-3-540-32494-2}}</ref><ref>{{cite book|last1=Pelikan|first1=Martin|last2=Sastry|first2=Kumara|last3=Cantú-Paz|first3=Erick|title=Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications|date=2006|publisher=Springer|___location=Berlin|isbn=3540349537}}</ref>
EDAs belong to the class of [[evolutionary algorithms]]. The main difference between EDAs and most conventional evolutionary algorithms is that evolutionary algorithms generate new candidate solutions using an ''implicit'' distribution defined by one or more variation operators, whereas EDAs use an ''explicit'' probability distribution encoded by a [[Bayesian network]], a [[multivariate normal distribution]], or another model class. Like other evolutionary algorithms, EDAs can be used to solve optimization problems defined over a variety of representations, from vectors to [[LISP]]-style S-expressions, and the quality of candidate solutions is often evaluated using one or more objective functions.
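This build-and-sample loop can be illustrated for bit-string solutions with an independent per-bit (univariate marginal) model. The following is a minimal sketch under those assumptions, not any particular published EDA; names such as <code>fitness</code>, <code>pop_size</code> and <code>n_selected</code> are illustrative parameters.

<syntaxhighlight lang="python">
import random

def eda(fitness, n_bits, pop_size=100, n_selected=50, iterations=100):
    # Start from the model encoding the uniform distribution:
    # each bit is 1 with probability 0.5.
    model = [0.5] * n_bits
    for _ in range(iterations):
        # Sample a population from the explicit probabilistic model.
        population = [[1 if random.random() < p else 0 for p in model]
                      for _ in range(pop_size)]
        # Select the most promising candidate solutions.
        selected = sorted(population, key=fitness, reverse=True)[:n_selected]
        # Re-estimate the model from the selected solutions
        # (per-bit marginal frequencies; bits are assumed independent).
        model = [sum(ind[i] for ind in selected) / n_selected
                 for i in range(n_bits)]
    return model

# Toy usage: OneMax, i.e. maximize the number of ones.
print(eda(fitness=sum, n_bits=20))
</syntaxhighlight>

More sophisticated EDAs differ mainly in the re-estimation step, replacing the independent marginals with, for example, a Bayesian network over the variables.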
Better-known EDAs include the following (a minimal sketch of PBIL appears after the list):<ref>{{cite journal|last1=Pelikan|first1=Martin|last2=Goldberg|first2=David E.|last3=Lobo|first3=Fernando G.|title=A Survey of Optimization by Building and Using Probabilistic Models|journal=Computational Optimization and Applications|date=2002|volume=21|issue=1|pages=5–20|doi=10.1023/A:1013500812258}}</ref>
* [[Population-based incremental learning]] (PBIL)
* [[Probability Collectives]] (PC)<ref>{{cite journal|last1=Wolpert|first1=David H.|last2=Strauss|first2=Charlie E. M.|last3=Rajnarayan|first3=Dev|title=Advances in Distributed Optimization Using Probability Collectives|journal=Advances in Complex Systems|date=December 2006|volume=9|issue=4|pages=383–436|doi=10.1142/S0219525906000884}}</ref>
* [[Hill Climbing with Learning]] (HCwL)
* [[Compact Genetic Algorithm]] (cGA)
* [[Bivariate Marginal Distribution Algorithm]] (BMDA)
* [[Extended Compact Genetic Algorithm]] (ECGA)
* [[Bayesian Optimization Algorithm]] (BOA)<ref>{{cite book|last1=Pelikan|first1=Martin|title=Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of Evolutionary Algorithms|date=2005|publisher=Springer|___location=Berlin|isbn=978-3-540-23774-7|edition=1st}}</ref><ref>{{cite journal|last1=Wolpert|first1=David H.|last2=Rajnarayan|first2=Dev|title=Using Machine Learning to Improve Stochastic Optimization|journal=Proceedings of the 17th AAAI Conference on Late-Breaking Developments in the Field of Artificial Intelligence|date=1 January 2013|pages=146–148|url=http://dl.acm.org/citation.cfm?id=2908286.2908335|publisher=AAAI Press}}</ref>
* [[Estimation of Bayesian Networks Algorithm]] (EBNA)
* [[Stochastic hill climbing with learning by vectors of normal distributions]] (SHCLVND)
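The PBIL sketch referenced above follows, under the same bit-string assumptions as before. Its incremental update toward the best sampled individual distinguishes it from full re-estimation; the defaults (<code>pop_size=50</code>, <code>learn_rate=0.1</code>) are illustrative rather than canonical, and published PBIL variants additionally mutate the probability vector and may learn from several top individuals.

<syntaxhighlight lang="python">
import random

def pbil(fitness, n_bits, pop_size=50, learn_rate=0.1, iterations=200):
    # model[i] is the current probability that bit i equals 1.
    model = [0.5] * n_bits
    for _ in range(iterations):
        population = [[1 if random.random() < p else 0 for p in model]
                      for _ in range(pop_size)]
        best = max(population, key=fitness)
        # Shift each marginal probability toward the best individual,
        # rather than re-estimating the model from scratch.
        model = [(1 - learn_rate) * p + learn_rate * b
                 for p, b in zip(model, best)]
    return model

# Toy usage: OneMax again.
print(pbil(fitness=sum, n_bits=16))
</syntaxhighlight>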
==References==
{{Reflist}}
{{DEFAULTSORT:Estimation Of Distribution Algorithm}}