The learning of PGMs encoding multivariate distributions is a computationally expensive task; therefore, it is usual for EDAs to estimate multivariate statistics from bivariate statistics. Such relaxation allows PGMs to be built in polynomial time in <math>N</math>; however, it also limits the generality of such EDAs.
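A minimal sketch of collecting such bivariate statistics, here the pairwise [[mutual information]] between binary decision variables estimated from a selected population over all <math>O(N^2)</math> pairs; the function name and the NumPy-based implementation are illustrative and not tied to any particular EDA.

<syntaxhighlight lang="python">
import numpy as np

def pairwise_mutual_information(population):
    """Pairwise mutual information between binary variables.

    population: (lambda, N) array of selected binary solutions.
    Returns an (N, N) matrix; cost is O(N^2 * lambda).
    """
    _, n = population.shape
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            for a in (0, 1):
                for b in (0, 1):
                    # Empirical joint and marginal frequencies of (x_i, x_j).
                    p_ab = np.mean((population[:, i] == a) & (population[:, j] == b))
                    p_a = np.mean(population[:, i] == a)
                    p_b = np.mean(population[:, j] == b)
                    if p_ab > 0:
                        mi[i, j] += p_ab * np.log2(p_ab / (p_a * p_b))
            mi[j, i] = mi[i, j]
    return mi
</syntaxhighlight>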
=== Extended compact genetic algorithm (eCGA) ===
The ECGA<ref>{{cite book|last1=Harik|first1=Georges Raif|title=Learning Gene Linkage to Efficiently Solve Problems of Bounded Difficulty Using Genetic Algorithms|publisher=University of Michigan|url=http://dl.acm.org/citation.cfm?id=269517}}</ref> was one of the first EDAs to employ multivariate factorizations, in which high-order dependencies among decision variables can be modeled. Its approach factorizes the joint probability distribution into a product of multivariate marginal distributions. Assume <math>T_\text{eCGA}=\{\tau_1,\dots,\tau_\Psi\}</math> is a set of subsets of the decision variables, in which every <math>\tau\in T_\text{eCGA}</math> is a linkage set containing <math>|\tau|\leq K</math> variables. The factorized joint probability distribution is represented as follows:
: <math>
P(X_1,X_2,\dots,X_N) = \prod_{\tau\in T_\text{eCGA}} P(\tau).
</math>
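A minimal sketch of sampling new solutions from this factorization, assuming discrete decision variables and an already-learned partition <math>T_\text{eCGA}</math> passed in as plain index lists; all names are illustrative. Because the linkage sets partition the variables, each offspring recombines whole linkage sets rather than individual variables.

<syntaxhighlight lang="python">
import numpy as np

def sample_ecga_model(population, linkage_sets, n_samples, rng):
    """Sample offspring from the factorization prod_tau P(tau).

    population:   (lambda, N) array of selected solutions.
    linkage_sets: partition of {0, ..., N-1}, e.g. [[0, 2], [1, 3, 4]].
    rng:          a numpy Generator, e.g. np.random.default_rng(0).
    """
    _, n = population.shape
    offspring = np.empty((n_samples, n), dtype=population.dtype)
    for tau in linkage_sets:
        # Empirical marginal P(tau): frequency of each observed configuration.
        configs, counts = np.unique(population[:, tau], axis=0, return_counts=True)
        probs = counts / counts.sum()
        # Each linkage set is sampled independently of the others.
        idx = rng.choice(len(configs), size=n_samples, p=probs)
        offspring[:, tau] = configs[idx]
    return offspring
</syntaxhighlight>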
The ECGA popularized the term "linkage learning", denoting procedures that identify such linkage sets.
=== Bayesian optimization algorithm (BOA) ===
The BOA<ref>{{cite journal|last1=Pelikan|first1=Martin|last2=Goldberg|first2=David E.|last3=Cantu-Paz|first3=Erick|title=BOA: The Bayesian Optimization Algorithm|date=1 January 1999|pages=525–532|url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.8131|publisher=Morgan Kaufmann}}</ref><ref>{{cite book|last1=Pelikan|first1=Martin|title=Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms|date=2005|publisher=Springer|___location=Berlin [u.a.]|isbn=978-3-540-23774-7|edition=1st}}</ref><ref>{{cite journal|last1=Wolpert|first1=David H.|last2=Rajnarayan|first2=Dev|title=Using Machine Learning to Improve Stochastic Optimization|journal=Proceedings of the 17th AAAI Conference on Late-Breaking Developments in the Field of Artificial Intelligence|date=1 January 2013|pages=146–148|url=http://dl.acm.org/citation.cfm?id=2908286.2908335|publisher=AAAI Press}}</ref> uses [[Bayesian network]]s to model and sample promising solutions. Bayesian networks are directed acyclic graphs, with nodes representing variables and edges representing conditional dependencies between pairs of variables. The value of a variable <math>x_i</math> can be conditioned on at most <math>K</math> other variables, which form its parent set <math>\pi_i</math>. BOA builds a PGM encoding a factorized joint distribution, in which the parameters of the network, i.e. the conditional probabilities, are estimated from the selected population using the maximum likelihood estimator.
The resulting factorized joint probability distribution is
: <math>
P(X_1,X_2,\dots,X_N) = \prod_{i=1}^{N} p(x_i \mid \pi_i).
</math>
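A minimal sketch of sampling one solution from this factorization, assuming binary variables and a given acyclic structure (parent sets <math>\pi_i</math> plus a topological order of the network); the Laplace-style correction is an added safeguard against unobserved parent configurations and is not part of the plain maximum-likelihood estimator. All names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def sample_boa_network(population, parents, order, rng):
    """Sample one binary solution from prod_i p(x_i | pi_i).

    population: (lambda, N) array of selected binary solutions.
    parents:    parents[i] lists the variables in pi_i (at most K of them).
    order:      a topological order of the directed acyclic network.
    rng:        a numpy Generator, e.g. np.random.default_rng(0).
    """
    _, n = population.shape
    sample = np.zeros(n, dtype=int)
    for i in order:
        pi = parents[i]
        if pi:
            # Keep only rows whose parent values match the already-sampled ones.
            rows = population[np.all(population[:, pi] == sample[pi], axis=1)]
        else:
            rows = population
        # Maximum-likelihood estimate of p(x_i = 1 | pi_i), with a small
        # Laplace correction so empty matches do not divide by zero.
        p1 = (rows[:, i].sum() + 1) / (len(rows) + 2)
        sample[i] = rng.random() < p1
    return sample
</syntaxhighlight>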
=== Linkage-tree genetic algorithm (LTGA) ===
The LTGA<ref>{{cite journal|last1=Thierens|first1=Dirk|title=The Linkage Tree Genetic Algorithm|journal=Parallel Problem Solving from Nature, PPSN XI|date=11 September 2010|pages=264–273|doi=10.1007/978-3-642-15844-5_27|url=https://link.springer.com/chapter/10.1007%2F978-3-642-15844-5_27#page-1|publisher=Springer Berlin Heidelberg}}</ref> differs from most EDAs in that it does not explicitly model a probability distribution, only a linkage model called a linkage tree. A linkage <math>T</math> is a set of linkage sets with no probability distribution associated with it; therefore, there is no way to sample new solutions directly from <math>T</math>. The linkage model is a linkage tree produced and stored as a [[family of sets]] (FOS), as sketched below.
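A minimal sketch of producing a linkage tree and collecting its clusters into a FOS, using bottom-up agglomerative clustering from SciPy; the correlation-based distance is a simple stand-in for the information-theoretic metrics used by actual LTGA implementations, and all names are illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.cluster.hierarchy import linkage

def build_linkage_tree_fos(population):
    """Cluster the N variables bottom-up and return every cluster,
    from singletons up to (but excluding) the root, as a family of sets."""
    _, n = population.shape
    # Distance between variables: 1 - |correlation| (a stand-in metric).
    dist = 1.0 - np.abs(np.corrcoef(population.T))
    dist = np.nan_to_num(dist, nan=1.0)  # constant variables: maximal distance
    condensed = dist[np.triu_indices(n, k=1)]
    tree = linkage(condensed, method="average")

    # scipy numbers the original variables 0..n-1 and the cluster created
    # at merge step k as n+k, so appending keeps the indices aligned.
    clusters = [{i} for i in range(n)]
    fos = [{i} for i in range(n)]
    for a, b, _, _ in tree:
        merged = clusters[int(a)] | clusters[int(b)]
        clusters.append(merged)
        if len(merged) < n:  # omit the root, which contains all variables
            fos.append(merged)
    return fos
</syntaxhighlight>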