{{short description|Competitive algorithm for searching a problem space}}
{{Evolutionary algorithms}}
{{Use dmy dates|date=November 2020}}
[[Image:St 5-xband-antenna.jpg|thumb|The 2006 NASA [[Space Technology 5|ST5]] spacecraft antenna. This complicated shape was found by an evolutionary computer design program to create the best radiation pattern. It is known as an [[evolved antenna]].]]
In [[computer science]] and [[operations research]], a '''genetic algorithm''' ('''GA''') is a [[metaheuristic]] inspired by the process of [[natural selection]] that belongs to the larger class of [[evolutionary algorithm]]s (EA).<ref>{{Cite book |last1=Pétrowski |first1=Alain |last2=Ben-Hamida |first2=Sana |title=Evolutionary algorithms |publisher=John Wiley & Sons |page=30 |year=2017 |isbn=978-1-119-13638-5 |url=https://books.google.com/books?id=GlGpDgAAQBAJ&dq=genetic+algorithm+evolutionary+algorithms&pg=PP2}}</ref> Genetic algorithms are commonly used to generate high-quality solutions to [[Optimization (mathematics)|optimization]] and [[Search algorithm|search problem]]s via biologically inspired operators such as [[selection (genetic algorithm)|selection]], [[crossover (genetic algorithm)|crossover]], and [[Mutation (genetic algorithm)|mutation]].{{sfn|Mitchell|1996|p=2}} Some examples of GA applications include optimizing [[Decision tree learning|decision trees]] for better performance, solving [[Sudoku solving algorithms|sudoku puzzles]],<ref>{{Cite book|last1=Gerges|first1=Firas|last2=Zouein|first2=Germain|last3=Azar|first3=Danielle|title=Proceedings of the 2018 International Conference on Computing and Artificial Intelligence |chapter=Genetic Algorithms with Local Optima Handling to Solve Sudoku Puzzles |date=2018-03-12|chapter-url=https://doi.org/10.1145/3194452.3194463|series=ICCAI 2018|___location=New York, NY, USA|publisher=Association for Computing Machinery|pages=19–22|doi=10.1145/3194452.3194463|isbn=978-1-4503-6419-5|s2cid=44152535 }}</ref> [[hyperparameter optimization]], and [[causal inference]].<ref>{{cite journal |last1=Burkhart |first1=Michael C. |last2=Ruiz |first2=Gabriel |title=Neuroevolutionary representations for learning heterogeneous treatment effects |journal=Journal of Computational Science |date=2023 |volume=71 |page=102054 |doi=10.1016/j.jocs.2023.102054 |s2cid=258752823 |doi-access=free }}</ref>
 
== Methodology ==
 
=== Optimization problems ===
Genetic algorithms find application in [[biogenetics]], [[computer science]], [[engineering]], [[economics]], [[chemistry]], [[manufacturing]], [[mathematics]], [[physics]] and other fields.
In a genetic algorithm, a [[population]] of [[candidate solution]]s (called individuals, creatures, organisms, or [[phenotype]]s) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its [[chromosome]]s or [[genotype]]) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.{{sfn|Whitley|1994|p=66}}
 
The evolution usually starts from a population of randomly generated individuals, and is an [[Iteration|iterative process]], with the population in each iteration called a ''generation''. In each generation, the [[fitness (biology)|fitness]] of every individual in the population is evaluated; the fitness is usually the value of the [[objective function]] in the optimization problem being solved. The more fit individuals are [[Stochastics|stochastically]] selected from the current population, and each individual's genome is modified ([[Crossover (genetic algorithm)|recombined]] and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the [[algorithm]]. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:
# a [[genetic representation]] of the solution ___domain,
# a [[fitness function]] to evaluate the solution ___domain.
 
A standard representation of each candidate solution is as an [[bit array|array of bits]] (also called ''bit set'' or ''bit string'').{{sfn|Whitley|1994|p=66}} Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple [[Crossover (genetic algorithm)|crossover]] operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in [[genetic programming]] and graph-form representations are explored in [[evolutionary programming]]; a mix of both linear chromosomes and trees is explored in [[gene expression programming]].
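As an illustrative sketch of the fixed-length bit-string encoding described above (the function and variable names are ours, not a standard API), single-point crossover is trivial precisely because both parents have the same fixed length and their positions align:

```python
import random

def single_point_crossover(parent_a, parent_b, rng=random):
    """Recombine two equal-length bit strings at one random cut point."""
    assert len(parent_a) == len(parent_b)
    point = rng.randrange(1, len(parent_a))  # cut somewhere strictly inside
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

# Each child keeps a prefix from one parent and a suffix from the other.
a, b = "00000000", "11111111"
c1, c2 = single_point_crossover(a, b)
```

With a variable-length or tree-shaped representation, the cut points of the two parents would no longer align, which is why crossover becomes more complex in those cases.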
 
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
 
==== Initialization ====
The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the [[Feasible region|search space]]). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest.<ref>{{cite journal |last1=Luque-Rodriguez |first1=Maria |last2=Molina-Baena |first2=Jose |last3=Jimenez-Vilchez |first3=Alfonso |last4=Arauzo-Azofra |first4=Antonio |title=Initialization of Feature Selection Search for Classification (sec. 3) |journal=Journal of Artificial Intelligence Research |date=2022 |volume=75 |pages=953–983 |doi=10.1613/jair.1.14015 |url=https://www.jair.org/index.php/jair/article/view/14015|doi-access=free }}</ref>
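A minimal sketch of this initialization step, assuming bit-string chromosomes; the `seeds` argument is a hypothetical illustration of the "seeding" mentioned above:

```python
import random

def init_population(pop_size, chromosome_length, seeds=(), rng=random):
    """Build an initial population of bit-string chromosomes.

    Any user-supplied `seeds` (known promising solutions) are copied in
    first; the rest of the population is generated uniformly at random,
    covering the whole search space.
    """
    population = [list(seed) for seed in seeds][:pop_size]
    while len(population) < pop_size:
        population.append([rng.randint(0, 1) for _ in range(chromosome_length)])
    return population

# 100 random individuals of 16 bits each, one of them seeded by hand.
pop = init_population(100, 16, seeds=[[1] * 8 + [0] * 8])
```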
 
==== Selection ====
{{Main|Selection (genetic algorithm)}}
During each successive generation, a portion of the existing population is [[selection (genetic algorithm)|selected]] to reproduce for a new generation. Individual solutions are selected through a ''fitness-based'' process, where [[Fitness (biology)|fitter]] solutions (as measured by a [[fitness function]]) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
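One method that rates only a random sample of the population is tournament selection. A minimal sketch (names are illustrative, not a standard API):

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Pick k individuals uniformly at random and return the fittest one.

    Only the k sampled individuals have their fitness compared, which avoids
    ranking the whole population on every pick.
    """
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)

# Toy example: integer "chromosomes" whose fitness is the value itself.
pop = [4, 8, 15, 16, 23, 42]
parent = tournament_select(pop, fitness=lambda x: x, k=3)
```

Larger `k` increases selection pressure toward the fittest individuals; `k=1` degenerates to uniform random selection.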
 
The fitness function is defined over the genetic representation and measures the ''quality'' of the represented solution. The fitness function is always problem-dependent. For instance, in the [[knapsack problem]] one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The ''fitness'' of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
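The knapsack fitness just described can be written down directly; the values, weights, and capacity below are made-up illustrative data:

```python
def knapsack_fitness(bits, values, weights, capacity):
    """Fitness of a bit-string knapsack solution, as described above:
    the total value of the selected objects, or 0 if the representation
    is invalid because the objects exceed the knapsack's capacity."""
    total_weight = sum(w for bit, w in zip(bits, weights) if bit)
    if total_weight > capacity:
        return 0  # invalid: selected objects do not fit
    return sum(v for bit, v in zip(bits, values) if bit)

# Three objects with values (10, 6, 4) and weights (5, 4, 3); capacity 9.
assert knapsack_fitness([1, 0, 1], [10, 6, 4], [5, 4, 3], capacity=9) == 14
assert knapsack_fitness([1, 1, 1], [10, 6, 4], [5, 4, 3], capacity=9) == 0
```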
 
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a [[Computer simulation|simulation]] may be used to determine the fitness function value of a [[phenotype]] (e.g. [[computational fluid dynamics]] is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even [[Interactive evolutionary computation|interactive genetic algorithms]] are used.
Most functions are [[stochastic]] and designed so that a small proportion of less fit solutions are selected. This helps keep the diversity of the population large, preventing premature convergence on poor solutions. Popular and well-studied selection methods include [[fitness proportionate selection|roulette wheel selection]] and [[tournament selection]].

==== Genetic operators ====
{{Main|Crossover (genetic algorithm)|Mutation (genetic algorithm)}}

The next step is to generate a second generation population of solutions from those selected, through a combination of [[genetic operator]]s: [[crossover (genetic algorithm)|crossover]] (also called recombination), and [[mutation (genetic algorithm)|mutation]].
 
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
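A sketch of producing one "child" from two parents under the same illustrative bit-string encoding, crossover followed by bit-flip mutation (all names and rates are illustrative):

```python
import random

def mutate(bits, rate, rng=random):
    """Independently flip each bit with a small probability `rate`."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

def make_child(parent_a, parent_b, mutation_rate=0.01, rng=random):
    """Recombine two parents at a random cut point, then mutate the result.

    The child shares a prefix with one parent and a suffix with the other,
    so it typically inherits many characteristics of both."""
    point = rng.randrange(1, len(parent_a))
    return mutate(parent_a[:point] + parent_b[point:], mutation_rate, rng)
```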
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research<ref>Eiben, A. E. et al (1994). "Genetic algorithms with multi-parent recombination". PPSN III: Proceedings of the International Conference on Evolutionary Computation. The Third Conference on Parallel Problem Solving from Nature: 78&ndash;87. {{ISBN|3-540-58484-6}}.</ref><ref>Ting, Chuan-Kang (2005). "On the Mean Convergence Time of Multi-parent Genetic Algorithms Without Selection". Advances in Artificial Life: 403&ndash;412. {{ISBN|978-3-540-28848-0}}.</ref> suggests that more than two "parents" generate higher quality chromosomes.
 
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
 
Opinion is divided over the importance of crossover versus mutation. There are many references in [[David B. Fogel|Fogel]] (2006) that support the importance of mutation-based search.
 
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.{{citation needed|date=November 2019}}
 
It is worth tuning parameters such as the [[Mutation (genetic algorithm)|mutation]] probability, [[Crossover (genetic algorithm)|crossover]] probability and population size to find reasonable settings for the [[complexity class|problem class]] being worked on. A very small mutation rate may lead to [[genetic drift]] (which is non-[[Ergodicity|ergodic]] in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless [[#Elitism|elitist selection]] is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required. There are theoretical but not yet practical upper and lower bounds for these parameters that can help guide selection.
 
==== Heuristics ====
 
In addition to the main operators above, other [[heuristic]]s may be employed to make the calculation faster or more robust. The ''speciation'' heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.<ref>{{Cite book|title=Handbook of Evolutionary Computation|last1=Deb|first1=Kalyanmoy|last2=Spears|first2=William M.|s2cid=3547258|publisher=Institute of Physics Publishing|year=1997|chapter=C6.2: Speciation methods}}</ref><ref>{{Cite book|title=Handbook of Natural Computing|last=Shir|first=Ofer M.|date=2012|publisher=Springer Berlin Heidelberg|isbn=9783540929093|editor-last=Rozenberg|editor-first=Grzegorz|pages=1035–1069|language=en|chapter=Niching in Evolutionary Algorithms|doi=10.1007/978-3-540-92910-9_32|editor-last2=Bäck|editor-first2=Thomas|editor-last3=Kok|editor-first3=Joost N.}}</ref>
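A hedged sketch of a speciation-style rule: parents closer than some (assumed) minimum Hamming distance are barred from crossover, so recombination keeps producing genuinely new combinations. The threshold and function names are illustrative:

```python
def hamming(a, b):
    """Number of positions at which two equal-length chromosomes differ."""
    return sum(x != y for x, y in zip(a, b))

def may_mate(parent_a, parent_b, min_distance=2):
    """Speciation-style check: block crossover between near-identical
    parents, encouraging population diversity and helping to prevent
    premature convergence."""
    return hamming(parent_a, parent_b) >= min_distance

assert may_mate([0, 0, 1, 1], [1, 1, 0, 0])          # distance 4: allowed
assert not may_mate([0, 0, 1, 1], [0, 0, 1, 0])      # distance 1: penalized
```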
 
==== Termination ====
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
 
* A solution is found that satisfies minimum criteria
* Fixed number of generations reached
* Allocated budget (computation time/money) reached
* The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
* Manual inspection
* Combinations of the above.
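The whole generational process described in this section (initialization, fitness evaluation, selection, crossover, mutation, termination) can be sketched as one minimal program. The OneMax objective (maximize the number of 1s) and every parameter value below are illustrative choices, not part of any standard GA:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=50, generations=100,
                      mutation_rate=0.02, rng=None):
    """Minimal generational GA over fixed-length bit-string chromosomes."""
    rng = rng or random.Random()
    # Initialization: a random population covering the search space.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):  # termination: fixed number of generations
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: two parents via 3-way tournaments.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # Crossover: single random cut point.
            point = rng.randrange(1, length)
            child = p1[:point] + p2[point:]
            # Mutation: independent bit flips with small probability.
            child = [b ^ 1 if rng.random() < mutation_rate else b for b in child]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)  # keep the best seen so far
    return best

# Toy "OneMax" objective: fitness is simply the number of 1s.
best = genetic_algorithm(fitness=sum, rng=random.Random(1))
```

A satisfactory-fitness or plateau check could replace the fixed generation cap as the terminating condition, matching the alternatives listed above.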
 
===Pseudo-code algorithm===
# Choose initial [[population]]
# Evaluate the [[fitness (biology)|fitness]] of each individual in the population
# Repeat
## Select best-ranking individuals to reproduce
## Breed a new generation through [[crossover (genetic algorithm)|crossover]] and [[mutation (genetic algorithm)|mutation]] (genetic operations) and give birth to offspring
## Evaluate the individual fitnesses of the offspring
## Replace the worst-ranked part of the population with offspring
# Until <terminating condition>

===Observations===
There are several general observations about the generation of solutions via a genetic algorithm:

* Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called ''triggered hypermutation''), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called ''random immigrants''). Research has also shown the benefits of using biological [[exaptation]] (or [[preadaptation]]) in solving this problem. [[Evolution strategies]] and [[evolutionary programming]] can be implemented with a so-called "comma strategy", in which parents are not maintained and new parents are selected only from offspring; this can be more effective on dynamic problems.
* GAs cannot effectively solve problems in which the only fitness measure is right/wrong, as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA.
* Selection is clearly an important genetic operator, but opinion is divided over the importance of crossover versus mutation. Some argue that crossover is the most important, while mutation is only necessary to ensure that potential solutions are not lost. Others argue that crossover in a largely uniform population only serves to propagate innovations originally found by mutation, and in a non-uniform population crossover is nearly always equivalent to a very large mutation (which is likely to be catastrophic). There are many references in Fogel (2006) that support the importance of mutation-based search, but across all problems the [[No free lunch in search and optimization|No Free Lunch]] theorem holds, so these opinions are without merit unless the discussion is restricted to a particular problem.
* Often, GAs can rapidly locate ''good'' solutions, even for difficult search spaces. The same is of course also true for [[evolution strategies]] and [[evolutionary programming]].
* For specific optimization problems and problem instantiations, simpler optimization algorithms may find better solutions than genetic algorithms (given the same amount of computation time). Alternative and complementary algorithms include [[evolution strategies]], [[evolutionary programming]], [[simulated annealing]], [[Gaussian adaptation]], [[hill climbing]], and [[swarm intelligence]] (e.g. [[ant colony optimization]], [[particle swarm optimization]]).
* The implementation and evaluation of the fitness function is an important factor in the speed and efficiency of the algorithm.

== The building block hypothesis ==
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:

# A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low-order, low defining-length [[Schema (genetic algorithms)|schemata]] with above-average fitness.
# A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Goldberg describes the heuristic as follows:

:"Short, low order, and highly fit schemata are sampled, [[crossover (genetic algorithm)|recombined]] [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.

:"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."{{sfn|Goldberg|1989|p=41}}

Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many [[estimation of distribution algorithm]]s, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold.<ref>{{cite book|last1=Harik|first1=Georges R.|last2=Lobo|first2=Fernando G.|last3=Sastry|first3=Kumara|title=Scalable Optimization via Probabilistic Modeling |chapter=Linkage Learning via Probabilistic Modeling in the Extended Compact Genetic Algorithm (ECGA) |volume=33|date=1 January 2006|pages=39–61|doi=10.1007/978-3-540-34954-9_3|language=en|series=Studies in Computational Intelligence|isbn=978-3-540-34953-2}}</ref><ref>{{cite book|last1=Pelikan|first1=Martin|last2=Goldberg|first2=David E.|last3=Cantú-Paz|first3=Erick|title=BOA: The Bayesian Optimization Algorithm|journal=Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1|date=1 January 1999|pages=525–532|url=http://dl.acm.org/citation.cfm?id=2933973|isbn=9781558606111|series=Gecco'99}}</ref> Although good results have been reported for some [[List of complexity classes|classes of problem]]s, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.<ref>{{cite book|last1=Coffin|first1=David|last2=Smith|first2=Robert E.|title=Linkage in Evolutionary Computation |chapter=Linkage Learning in Estimation of Distribution Algorithms |volume=157|date=1 January 2008|pages=141–156|doi=10.1007/978-3-540-85068-7_7|language=en|series=Studies in Computational Intelligence|isbn=978-3-540-85067-0}}</ref><ref>{{cite journal|last1=Echegoyen|first1=Carlos|last2=Mendiburu|first2=Alexander|last3=Santana|first3=Roberto|last4=Lozano|first4=Jose A.|title=On the Taxonomy of Optimization Problems Under Estimation of Distribution Algorithms|journal=Evolutionary Computation|date=8 November 2012|volume=21|issue=3|pages=471–495|doi=10.1162/EVCO_a_00095|pmid=23136917|s2cid=26585053|issn=1063-6560}}</ref><ref>{{cite book|last1=Sadowski|first1=Krzysztof L.|last2=Bosman|first2=Peter A.N.|last3=Thierens|first3=Dirk|title=Proceedings of the 15th annual conference on Genetic and evolutionary computation |chapter=On the usefulness of linkage processing for solving MAX-SAT |date=1 January 2013|pages=853–860|doi=10.1145/2463372.2463474|isbn=9781450319638|series=Gecco '13|hdl=1874/290291|s2cid=9986768}}</ref>

==Variants==
The simplest algorithm represents each chromosome as a [[bit]] string. Typically, numeric parameters can be represented by [[integer]]s, though it is possible to use [[floating point]] representations. The floating point representation is natural to [[evolution strategies]] and [[evolutionary programming]]. The notion of real-valued genetic algorithms has been offered, but it is really a misnomer because it does not really represent the building block theory proposed by Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see above). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, as nodes in a [[linked list]], [[associative array|hashes]], [[object (computer science)|objects]], or any other imaginable [[data structure]]. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit strings representations of integers are used, [[Gray coding]] is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so called ''Hamming walls'', in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
 
== Limitations ==
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Theoretically, the smaller the alphabet, the better the performance, but paradoxically, good results have been obtained from using real-valued chromosomes.
{{more citations needed section|date=March 2024}}
The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:
 
* Repeated [[fitness function]] evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive [[fitness function]] evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an [[fitness approximation|approximated fitness]] that is computationally efficient. It is apparent that amalgamation of [[fitness approximation|approximate models]] may be one of the most promising approaches to convincingly use GA to solve complex real life problems.{{citation needed|date=March 2024}}
A very successful (slight) variant of the general process of constructing a new population is to allow some of the better organisms from the current generation to carry over to the next, unaltered. This strategy is known as ''elitist selection''.
* Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane {{Citation needed|date=December 2020}}. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.{{citation needed|date=March 2024}}
* The "better" solution is only better in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.{{citation needed|date=March 2024}}
* In many problems, GAs have a tendency to converge towards [[local optimum|local optima]] or even arbitrary points rather than the [[global optimum]] of the problem. This means that the algorithm does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the [[fitness landscape]]: certain problems may provide an easy ascent towards a global optimum, while others may make it easier to find local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions,<ref>{{cite journal|last1=Taherdangkoo|first1=Mohammad|last2=Paziresh |first2=Mahsa |last3=Yazdi |first3=Mehran |last4= Bagheri |first4=Mohammad Hadi |title=An efficient algorithm for function optimization: modified stem cells algorithm|journal=Central European Journal of Engineering|date=19 November 2012|volume=3|issue=1|pages=36–50|doi=10.2478/s13531-012-0047-8|doi-access=free}}</ref> although the [[No free lunch in search and optimization|No Free Lunch theorem]]<ref>Wolpert, D.H., Macready, W.G., 1995. No Free Lunch Theorems for Optimisation. Santa Fe Institute, SFI-TR-05-010, Santa Fe.</ref> proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (within a niche radius) has a penalty added, which reduces the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique is to simply replace part of the population with randomly generated individuals when most of the population is too similar to each other.
Diversity is important in genetic algorithms (and [[genetic programming]]) because crossing over a homogeneous population does not yield new solutions. In [[Evolution strategy|evolution strategies]] and [[evolutionary programming]], diversity is not essential because of a greater reliance on mutation.{{citation needed|date=March 2024}}
* Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called ''triggered hypermutation''), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called ''random immigrants''). Again, [[Evolution strategy|evolution strategies]] and [[evolutionary programming]] can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.{{citation needed|date=March 2024}}
* GAs cannot effectively solve problems in which the only fitness measure is a binary pass/fail outcome (like [[decision problem]]s), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.{{citation needed|date=March 2024}}
* For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include [[Evolution strategy|evolution strategies]], [[evolutionary programming]], [[simulated annealing]], [[Gaussian adaptation]], [[hill climbing]], and [[swarm intelligence]] (e.g.: [[ant colony optimization]], [[particle swarm optimization]]) and methods based on [[integer linear programming]]. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.{{citation needed|date=March 2024}}
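The diversity-maintenance techniques described above, a "niche penalty" based on a similarity radius together with "random immigrants", can be sketched in a few lines. This is a minimal illustrative sketch on a toy one-max problem; all function names and parameter values are chosen for illustration, not taken from any cited implementation:

```python
import random

def onemax(bits):
    # Toy fitness: count of 1-bits (stands in for any expensive fitness).
    return sum(bits)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def niche_penalty(pop, fitness, radius=2, penalty=0.5):
    """Subtract a penalty proportional to how crowded each individual's
    niche is, so one similar group cannot take over the population."""
    adjusted = []
    for i, ind in enumerate(pop):
        crowd = sum(1 for other in pop if hamming(ind, other) <= radius) - 1
        adjusted.append(fitness[i] - penalty * crowd)
    return adjusted

def evolve(n_bits=16, pop_size=30, generations=60, immigrant_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        raw = [onemax(ind) for ind in pop]
        adj = niche_penalty(pop, raw)

        def pick():
            # Binary tournament on the *adjusted* fitness keeps diversity.
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if adj[i] >= adj[j] else pop[j]

        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            k = rng.randrange(n_bits)                # flip one random bit
            child = child[:k] + [1 - child[k]] + child[k + 1:]
            nxt.append(child)
        # "Random immigrants": replace a fraction with fresh random genomes.
        for i in range(int(immigrant_rate * pop_size)):
            nxt[i] = [rng.randint(0, 1) for _ in range(n_bits)]
        pop = nxt
    return max(onemax(ind) for ind in pop)
```

The penalty only reshapes selection pressure; the raw fitness is still what is reported, so diversity is bought without redefining the objective.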
 
== Variants ==
 
=== Chromosome representation ===
{{main | genetic representation }}
The simplest algorithm represents each chromosome as a [[Bit array|bit string]]. Typically, numeric parameters can be represented by [[integer]]s, though it is possible to use [[floating point]] representations. The floating point representation is natural to [[Evolution strategy|evolution strategies]] and [[evolutionary programming]]. The notion of real-valued genetic algorithms has been offered, but it is really a misnomer because it does not really represent the building-block theory that was proposed by [[John Henry Holland]] in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, as nodes in a [[linked list]], [[associative array|hashes]], [[object (computer science)|objects]], or any other imaginable [[data structure]]. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
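The contrast between bit-level operators and operators that respect data-element boundaries can be sketched as follows; the operator names and the fixed seed are illustrative assumptions, not part of any standard library:

```python
import random

rng = random.Random(42)

def crossover_bits(a: str, b: str) -> str:
    """One-point crossover at the bit level (classical bit-string GA)."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate_bits(s: str, rate: float = 0.05) -> str:
    """Flip each bit independently with probability `rate`."""
    return "".join(("1" if c == "0" else "0") if rng.random() < rate else c
                   for c in s)

def mutate_ints(genome, low, high, rate=0.1):
    """Mutation that respects data-element boundaries: resample a whole
    integer gene rather than flipping individual bits inside it."""
    return [rng.randint(low, high) if rng.random() < rate else g
            for g in genome]
```

With the integer variant, a mutation can never produce an out-of-range value, which is one practical reason to design operators per data type.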
 
When bit-string representations of integers are used, [[Gray coding]] is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called ''Hamming walls'', in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
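Binary-reflected Gray coding can be sketched in a few lines; the conversion functions below are the standard ones, and the property they provide is exactly what avoids Hamming walls: consecutive integers differ in a single bit.

```python
def to_gray(n: int) -> int:
    # Binary-reflected Gray code: adjacent integers differ in one bit.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # Invert by folding the bits back down with repeated XOR.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Under plain binary coding, moving from 7 (`0111`) to 8 (`1000`) requires four simultaneous bit flips; under Gray coding it requires one.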
A problem that seems to be overlooked by genetic algorithms thus far is that natural evolution maximizes [[mean fitness]] rather than the fitness of the individual (the criterion function used in most applications).
An algorithm that maximizes mean fitness (without any need for the definition of mean fitness as a criterion function) is [[Gaussian adaptation]] (see Kjellström 1970<ref>{{cite journal|last=Kjellström|first=G.|title=Optimization of electrical Networks with respect to Tolerance Costs|journal=Ericsson Technics|issue=3|pages=157–175|year=1970}}</ref>), provided that the ontogeny of an individual may be seen as a modified recapitulation of evolutionary random steps in the past and that the sum of many random steps tends to become Gaussian distributed (according to the [[central limit theorem]]).
This means that the rules of genetic variation may have a different meaning in the natural case. For instance &ndash; provided that steps are stored in consecutive order &ndash; crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the [[Chromosomal inversion|inversion operator]] has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.<ref>See for instance [http://www.thisurlisfalse.com/evolution-in-a-nutshell/ Evolution-in-a-nutshell] {{Webarchive|url=https://web.archive.org/web/20160415193505/http://www.thisurlisfalse.com/evolution-in-a-nutshell/ |date=15 April 2016 }} or example in [[travelling salesman problem]], in particular the use of an [[edge recombination operator]].</ref>
 
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes as forming a ''virtual alphabet'' (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.<ref name=Goldberg1991>{{cite book|last=Goldberg|first=David E.|title=Parallel Problem Solving from Nature|chapter=The theory of virtual alphabets|journal=Parallel Problem Solving from Nature, Lecture Notes in Computer Science|year=1991|volume=496|pages=13–22|doi=10.1007/BFb0029726|series=Lecture Notes in Computer Science|isbn=978-3-540-54148-6}}</ref><ref name=Janikow1991>{{cite journal|last1=Janikow|first1=C. Z.|first2=Z. |last2=Michalewicz |title=An Experimental Comparison of Binary and Floating Point Representations in Genetic Algorithms|journal=Proceedings of the Fourth International Conference on Genetic Algorithms|year=1991|pages=31–36|url=http://www.cs.umsl.edu/~janikow/publications/1991/GAbin/text.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.cs.umsl.edu/~janikow/publications/1991/GAbin/text.pdf |archive-date=2022-10-09 |url-status=live|access-date=2 July 2013}}</ref>
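As an illustration of operating directly on real-valued chromosomes, a common operator pair (arithmetic crossover plus Gaussian mutation) can be sketched as below. These particular operators are generic textbook examples, not the specific schemes studied in the cited papers, and the seed is fixed only for reproducibility:

```python
import random

rng = random.Random(1)

def arithmetic_crossover(a, b):
    """Blend two real-valued chromosomes gene by gene with one weight."""
    w = rng.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def gaussian_mutation(genome, sigma=0.1, rate=0.2):
    """Perturb each gene with Gaussian noise with probability `rate`."""
    return [g + rng.gauss(0, sigma) if rng.random() < rate else g
            for g in genome]
```

Because the blend weight lies in [0, 1), every child gene stays between its two parent genes, so crossover alone can only shrink the spanned region; mutation is what reintroduces new values.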
Gaussian adaptation is able to approximate the natural process by an adaptation of the moment matrix of the Gaussian. Because very many quantitative characters are Gaussian distributed in a large population, Gaussian adaptation may serve as a genetic algorithm replacing the rules of genetic variation by a Gaussian random number generator working on the phenotypic level; see Kjellström 1996.<ref>{{cite journal|last=Kjellström|first=G.|title=Evolution as a statistical optimization algorithm|journal=Evolutionary Theory|issue=11|pages=105–117|date=January 1996}}</ref>
 
An expansion of the Genetic Algorithm accessible problem ___domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome.<ref name=Patrascu2014>{{cite journal|last1=Patrascu|first1=M.|last2=Stancu|first2=A.F.|last3=Pop|first3=F.|title=HELGA: a heterogeneous encoding lifelike genetic algorithm for population evolution modeling and simulation|journal=Soft Computing|year=2014|volume=18|issue=12|pages=2565–2576|doi=10.1007/s00500-014-1401-y|s2cid=29821873}}</ref> This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
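The section-wise recombination idea can be sketched as follows. The chromosome sections and their contents are invented for illustration (a real-valued inner-loop controller section and a symbolic outer-loop rule section), and are not taken from the cited HELGA paper:

```python
import random

rng = random.Random(7)

# A chromosome concatenating heterogeneously encoded sections: real-valued
# gains for an inner control loop, symbolic rules for an outer loop.
parent1 = {"pid": [1.2, 0.4, 0.05], "fuzzy": ["low->brake", "high->coast"]}
parent2 = {"pid": [0.9, 0.6, 0.02], "fuzzy": ["low->coast", "high->brake"]}

def sectionwise_crossover(a, b):
    """Recombine whole sections at a time, so each section is only ever
    exchanged with a section of the same encoding."""
    child = {}
    for key in a:
        child[key] = list(rng.choice((a, b))[key])
    return child
```

Because crossover never cuts inside a section, the real-valued and symbolic encodings each keep their own variation operators and validity constraints.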
[[Population-based incremental learning]] is a variation where the population as a whole is evolved rather than its individual members.
 
Another important expansion of the Genetic Algorithm (GA) accessible solution space was driven by the need to make representations amenable to variable levels of knowledge about the solution states. Variable-length representations were inspired by the observation that, in nature, evolution tends to progress from simpler organisms to more complex ones—suggesting an underlying rationale for embracing flexible structures.<ref>Goldberg, D.E., Korb, B., & Deb, K. (1989). Messy Genetic Algorithms: Motivation, Analysis, and First Results. Complex Systems, 3(5), 493–530. ISSN 0891-2513.</ref> A second, more pragmatic motivation was that most real-world engineering and knowledge-based problems do not naturally conform to rigid knowledge structures.<ref>Davidor, Y. (1991). Genetic Algorithms and Robotics: A Heuristic Strategy for Optimization. World Scientific Series in Robotics and Intelligent Systems: Volume 1.</ref>
 
These early innovations in variable-length representations laid essential groundwork for the development of [[Genetic programming]], which further extended the classical GA paradigm. Such representations required enhancements to the simplistic genetic operators used for fixed-length chromosomes, enabling the emergence of more sophisticated and adaptive GA models.
 
=== Elitism ===
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as ''elitist selection'' and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.<ref>{{cite conference |last1=Baluja |first1=Shumeet |first2=Rich |last2=Caruana |title=Removing the genetics from the standard genetic algorithm |conference=[[International Conference on Machine Learning|ICML]] |year=1995 |url=http://www.ri.cmu.edu/pub_files/pub2/baluja_shumeet_1995_1/baluja_shumeet_1995_1.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.ri.cmu.edu/pub_files/pub2/baluja_shumeet_1995_1/baluja_shumeet_1995_1.pdf |archive-date=2022-10-09 |url-status=live}}</ref>

=== Parallel implementations ===
[[parallel algorithm|Parallel]] implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction.
Other variants, like genetic algorithms for [[online optimization]] problems, introduce time-dependence or noise in the fitness function.

=== Adaptive GAs ===
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.<ref>{{Cite journal |last1=Stannat |first1=W. |title=On the convergence of genetic algorithms – a variational approach |journal=Probab. Theory Relat. Fields |volume=129 |pages=113–132 |year=2004 |doi=10.1007/s00440-003-0330-y|s2cid=121086772 |doi-access=free }}</ref><ref>{{Cite journal |last1=Sharapov |first1=R.R. |last2=Lapshin |first2=A.V. |title=Convergence of genetic algorithms |journal=Pattern Recognit. Image Anal. |volume=16 |pages=392–397 |year=2006 |issue=3 |doi=10.1134/S1054661806030084|s2cid=22890010 }}</ref>

Instead of using fixed values of ''pc'' and ''pm'', AGAs utilize the population information in each generation and adaptively adjust the ''pc'' and ''pm'' in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm),<ref>{{Cite journal |last1=Srinivas |first1=M. |last2=Patnaik |first2=L. |title=Adaptive probabilities of crossover and mutation in genetic algorithms |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=24 |issue=4 |pages=656–667 |year=1994 |doi=10.1109/21.286385 |url=http://eprints.iisc.ac.in/6971/2/adaptive.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://eprints.iisc.ac.in/6971/2/adaptive.pdf |archive-date=2022-10-09 |url-status=live }}</ref> the adjustment of ''pc'' and ''pm'' depends on the fitness values of the solutions. There are more examples of AGA variants: the successive zooming method is an early example of improving convergence.<ref>{{Cite journal |last1=Kwon |first1=Y.D. |last2=Kwon |first2=S.B. |last3=Jin |first3=S.B. |last4=Kim |first4=J.Y. |title=Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems |journal=Computers & Structures |volume=81 |issue=17 |pages=1715–1725 |year=2003 |doi=10.1016/S0045-7949(03)00183-4}}</ref> In ''CAGA'' (clustering-based adaptive genetic algorithm),<ref>{{cite journal |last1=Zhang |first1=J. |last2=Chung |first2=H. |last3=Lo, W. L. |title=Clustering-Based Adaptive Crossover and Mutation Probabilities for Genetic Algorithms |journal=IEEE Transactions on Evolutionary Computation |volume=11 |issue=3 |pages=326&ndash;335 |year=2007 |doi=10.1109/TEVC.2006.880727 |s2cid=2625150 }}</ref> through the use of clustering analysis to judge the optimization states of the population, the adjustment of ''pc'' and ''pm'' depends on these optimization states. Recent approaches use more abstract variables for deciding ''pc'' and ''pm''.
Examples are dominance & co-dominance principles<ref>{{Cite journal |last1=Pavai |first1=G. |last2=Geetha |first2=T.V. |title= New crossover operators using dominance and co-dominance principles for faster convergence of genetic algorithms|journal=Soft Comput |volume=23 |pages=3661–3686 |year=2019 |issue=11 |doi=10.1007/s00500-018-3016-1|s2cid=254028984 }}</ref> and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropicity.<ref>{{Cite journal |last1=Li |first1=J.C.F. |last2=Zimmerle |first2=D. |last3=Young |first3=P. |title=Flexible networked rural electrification using levelized interpolative genetic algorithm |journal=Energy & AI |volume=10 |pages=100186 |year=2022 |doi=10.1016/j.egyai.2022.100186|s2cid=250972466 |doi-access=free |bibcode=2022EneAI..1000186L }}</ref>

== History ==
Computer simulations of evolution started as early as 1954 with the work of [[Nils Aall Barricelli]], who was using the computer at the [[Institute for Advanced Study]] in [[Princeton, New Jersey]].<ref>{{cite journal|last=Barricelli|first=Nils Aall|year=1954|authorlink=Nils Aall Barricelli|title=Esempi numerici di processi di evoluzione|journal=Methodos|pages=45–68}}</ref> His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist [[Alex Fraser (scientist)|Alex Fraser]] published a series of papers on simulation of [[artificial selection]] of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)<ref>{{cite book|last1=Fraser|first1=Alex|authorlink=Alex Fraser (scientist)|last2=Burnell|first2=Donald|year=1970|title=Computer Models in Genetics|publisher=McGraw-Hill|___location=New York}}</ref> and Crosby (1973).<ref>{{cite book|last=Crosby|first=Jack L.|year=1973|title=Computer Simulation in Genetics|publisher=John Wiley & Sons|___location=London}}</ref> Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, [[Hans Bremermann]] published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).<ref>{{cite book|editor-last=Fogel|editor-first=David B.|year=1998|title=Evolutionary Computation: The Fossil Record|publisher=IEEE Press|___location=New York}}</ref>

Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game,<ref>{{cite journal|last=Barricelli|first=Nils Aall|year=1963|title=Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life|journal=Acta Biotheoretica|issue=16|pages=99–126}}</ref> [[artificial evolution]] became a widely recognized optimization method as a result of the work of [[Ingo Rechenberg]] and [[Hans-Paul Schwefel]] in the 1960s and early 1970s &ndash; Rechenberg's group was able to solve complex engineering problems through [[evolution strategies]] (1971 PhD thesis and resulting 1973 book). Another approach was the evolutionary programming technique of [[Lawrence J. Fogel]], which was proposed for generating artificial intelligence. [[Evolutionary programming]] originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of [[John Henry Holland|John Holland]] in the early 1970s, and particularly his 1975 book. His work originated with studies of [[cellular automata]], conducted by [[John Henry Holland|Holland]] and his students at the [[University of Michigan]]. Research in GAs remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in [[Pittsburgh, Pennsylvania]]. As academic interest grew, the dramatic increase in desktop computational power allowed for practical application of the new technique. In 1989, [[The New York Times]] writer [[John Markoff]] wrote about [[Evolver (software)|Evolver]], the first commercially available desktop genetic algorithm.
Custom computer applications began to emerge in a wide variety of fields, and these algorithms are now used by a majority of [[Fortune 500]] companies to solve difficult scheduling, data fitting, trend spotting and budgeting problems, and virtually any other type of [[combinatorial optimization]] problem. Most applications do not use traditional genetic algorithms but a broader set of evolutionary algorithms that incorporate facets of evolution strategies, evolutionary programming, and genetic algorithms.
==Related techniques==
* [[Ant colony optimization]] (ACO) uses many ants (or agents) to traverse the solution space and find locally productive areas. While usually inferior to genetic algorithms and other forms of local search, it is able to produce results in problems where no global or up-to-date perspective can be obtained, and thus the other methods cannot be applied.
 
It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as [[Hill climbing|simple hill climbing]]) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA {{Citation needed|date=July 2016}} while overcoming the lack of robustness of hill climbing.
* [[Bacteriologic Algorithms]] (BA) inspired by [[evolutionary ecology]] and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, you can’t find one individual that fits the whole environment. So, you need to reason at the population level. BAs have shown better results than GAs on problems such as complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.<ref>{{cite journal|url=http://www.irisa.fr/triskell/publis/2005/Baudry05d.pdf|first=Benoit|last=Baudry|coauthors=Franck Fleurey, Jean-Marc Jézéquel, and Yves Le Traon|title=Automatic Test Case Optimization: A Bacteriologic Algorithm|date=March/April 2005|pages=76-82|journal=IEEE Software|publisher=IEEE Computer Society}}</ref>
 
This means that the rules of genetic variation may have a different meaning in the natural case. For instance &ndash; provided that steps are stored in consecutive order &ndash; crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the [[Chromosomal inversion|inversion operator]] has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.<ref>See for instance [http://www.thisurlisfalse.com/evolution-in-a-nutshell/ Evolution-in-a-nutshell] {{Webarchive|url=https://web.archive.org/web/20160415193505/http://www.thisurlisfalse.com/evolution-in-a-nutshell/ |date=15 April 2016 }} or example in [[travelling salesman problem]], in particular the use of an [[edge recombination operator]].</ref>
* [[Cross-entropy method]] The Cross-entropy (CE) method generates candidates solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
 
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
* [[Evolution strategies]] (ES, see Rechenberg, 1971) evolve individuals by means of mutation and intermediate and discrete recombination. ES algorithms are designed particularly to solve problems in the real-value ___domain. They use self-adaptation to adjust control parameters of the search.
 
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA,<ref>{{cite journal |url=http://www.complex-systems.com/issues/03-5.html |first1=D. E. |last1=Goldberg |first2=B. |last2=Korb |first3=K. |last3=Deb |title=Messy Genetic Algorithms : Motivation Analysis, and First Results |journal=Complex Systems |volume=5 |issue=3 |pages=493–530 |year=1989 }}</ref> GEMGA<ref>[https://www.osti.gov/servlets/purl/524858 Gene expression: The missing link in evolutionary computation]</ref> and LLGA.<ref>{{cite thesis |last=Harik |first=G. |date=1997 |title=Learning linkage to efficiently solve problems of bounded difficulty using genetic algorithms |type=PhD |publisher=Dept. Computer Science, University of Michigan, Ann Arbour |url=http://portal.acm.org/citation.cfm?id=269517 }}</ref>
* [[Evolutionary programming]] (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
 
== Problem domains ==
* [[Extremal optimization]] (EO) Unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes [[local search (optimization)|local]] modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of ''emergent'' improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.
Problems which appear to be particularly appropriate for solution by genetic algorithms include [[Genetic algorithm scheduling|timetabling and scheduling problems]], and many scheduling software packages are based on GAs{{Citation needed|date=December 2011}}. GAs have also been applied to [[engineering]].<ref>Tomoiagă B, Chindriş M, Sumper A, Sudria-Andreu A, Villafafila-Robles R. [http://www.mdpi.com/1996-1073/6/3/1439/pdf Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. ] Energies. 2013; 6(3):1439-1455.</ref> Genetic algorithms are often applied as an approach to solve [[global optimization]] problems.
 
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex [[fitness landscape]] as mixing, i.e., [[Mutation (genetic algorithm)|mutation]] in combination with [[Crossover (genetic algorithm)|crossover]], is designed to move the population away from [[local optima]] that a traditional [[hill climbing]] algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide [[ergodicity]] of the overall genetic algorithm process (seen as a [[Markov chain]]).
* [[Gaussian adaptation]] (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information<ref>{{cite journal|last=Kjellström|first=G.|title= On the Efficiency of Gaussian Adaptation|journal=Journal of Optimization Theory and Applications|Volume=71|issue=3|pages=589-597|month=Dec.|year=1991}}</ref>. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain “ambition” to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder ([[average information]]) of the Gaussian simultaneously keeping the [[mean fitness]] constant.
 
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector,<ref>{{cite web|last=Gross|first=Bill|title=A solar energy system that tracks the sun|url=https://www.ted.com/talks/bill_gross_a_solar_energy_system_that_tracks_the_sun|work=TED|date=2 February 2009 |access-date=20 November 2013}}</ref> antennae designed to pick up radio signals in space,<ref>{{citation |first1=G. S. |last1=Hornby |first2=D. S. |last2=Linden |first3=J. D. |last3=Lohn |url=http://ti.arc.nasa.gov/m/pub-archive/1244h/1244%20(Hornby).pdf |title=Automated Antenna Design with Evolutionary Algorithms}}</ref> walking methods for computer figures,<ref>{{Cite web | url=http://goatstream.com/research/papers/SA2013/index.html | title=Flexible Muscle-Based Locomotion for Bipedal Creatures}}</ref> optimal design of aerodynamic bodies in complex flowfields<ref>{{Cite journal|last1=Evans|first1=B.|last2=Walton|first2=S.P.|date=December 2017|title=Aerodynamic optimisation of a hypersonic reentry vehicle based on solution of the Boltzmann–BGK equation and evolutionary optimisation|journal=Applied Mathematical Modelling|volume=52|pages=215–240|doi=10.1016/j.apm.2017.07.024|issn=0307-904X|url=https://cronfa.swan.ac.uk/Record/cronfa34688|doi-access=free}}</ref>
 
In his ''Algorithm Design Manual'', [[Steven Skiena|Skiena]] advises against genetic algorithms for any task:
 
{{blockquote|[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.
[...]
I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to [[simulated annealing]] for your heuristic search voodoo needs.|Steven Skiena<ref>{{cite book |last=Skiena |first=Steven |author-link=Steven Skiena |title = The Algorithm Design Manual |publisher=[[Springer Science+Business Media]] |edition=2nd |year = 2010 |isbn=978-1-849-96720-4}}</ref>{{rp|267}}}}
 
== History ==
In 1950, [[Alan Turing]] proposed a "learning machine" which would parallel the principles of evolution.<ref name="mind.oxfordjournals.org">{{cite journal|last1=Turing|first1=Alan M.|title=Computing machinery and intelligence|journal=Mind|volume=LIX|issue=236|pages=433–460|doi=10.1093/mind/LIX.236.433|date=October 1950}}</ref> Computer simulation of evolution started as early as 1954 with the work of [[Nils Aall Barricelli]], who was using the computer at the [[Institute for Advanced Study]] in [[Princeton, New Jersey]].<ref name="Barricelli 1954 45–68">{{cite journal|last=Barricelli|first=Nils Aall|year=1954|author-link=Nils Aall Barricelli|title=Esempi numerici di processi di evoluzione|journal=Methodos|pages=45–68}}</ref><ref name="Barricelli 1957 143–182">{{cite journal|last=Barricelli|first=Nils Aall|year=1957|author-link=Nils Aall Barricelli|title=Symbiogenetic evolution processes realized by artificial methods|journal=Methodos|pages=143–182}}</ref> His 1954 publication was not widely noticed. Starting in 1957,<ref name="Fraser 1957 484–491">{{cite journal|last=Fraser|first=Alex|author-link=Alex Fraser (scientist)|year=1957|title=Simulation of genetic systems by automatic digital computers. I. Introduction|journal=Aust. J. Biol. Sci.|volume=10|issue=4|pages=484–491|doi=10.1071/BI9570484|doi-access=free}}</ref> the Australian quantitative geneticist [[Alex Fraser (scientist)|Alex Fraser]] published a series of papers on simulation of [[artificial selection]] of organisms with multiple loci controlling a measurable trait.
From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)<ref name="Fraser 1970">{{cite book|last1=Fraser|first1=Alex|author-link=Alex Fraser (scientist)|first2=Donald |last2=Burnell|year=1970|title=Computer Models in Genetics|publisher=McGraw-Hill|___location=New York|isbn=978-0-07-021904-5}}</ref> and Crosby (1973).<ref name="Crosby 1973">{{cite book|last=Crosby|first=Jack L.|year=1973|title=Computer Simulation in Genetics|publisher=John Wiley & Sons|___location=London|isbn=978-0-471-18880-3}}</ref> Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, [[Hans-Joachim Bremermann]] published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.<ref>[http://berkeley.edu/news/media/releases/96legacy/releases.96/14319.html 02.27.96 - UC Berkeley's Hans Bremermann, professor emeritus and pioneer in mathematical biology, has died at 69]</ref> Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by [[David B. Fogel|Fogel]] (1998).<ref>{{cite book|editor-last=Fogel|editor-first=David B. |year=1998|title=Evolutionary Computation: The Fossil Record|publisher=IEEE Press|___location=New York|isbn=978-0-7803-3481-6}}</ref>
 
Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game,<ref>{{cite journal|last=Barricelli|first=Nils Aall|year=1963|title=Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life|journal=Acta Biotheoretica|volume=16|issue=3–4|pages=99–126|doi=10.1007/BF01556602|s2cid=86717105}}</ref> [[artificial evolution]] only became a widely recognized optimization method as a result of the work of [[Ingo Rechenberg]] and [[Hans-Paul Schwefel]] in the 1960s and early 1970s &ndash; Rechenberg's group was able to solve complex engineering problems through [[Evolution strategy|evolution strategies]].<ref>{{cite book|last=Rechenberg|first=Ingo|year=1973|title=Evolutionsstrategie|place=Stuttgart|publisher=Holzmann-Froboog|isbn=978-3-7728-0373-4}}</ref><ref>{{cite book|last=Schwefel|first=Hans-Paul|year=1974|title=Numerische Optimierung von Computer-Modellen (PhD thesis)}}</ref><ref>{{cite book|last=Schwefel|first=Hans-Paul|year=1977|title=Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie : mit einer vergleichenden Einführung in die Hill-Climbing- und Zufallsstrategie|place=Basel; Stuttgart | publisher=Birkhäuser| isbn=978-3-7643-0876-6}}</ref><ref>{{cite book|last=Schwefel|first=Hans-Paul|year=1981|title=Numerical optimization of computer models (Translation of 1977 Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie|place=Chichester; New York|publisher=Wiley|isbn=978-0-471-09988-8}}</ref> Another approach was the evolutionary programming technique of [[Lawrence J. Fogel]], which was proposed for generating artificial intelligence. [[Evolutionary programming]] originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics.
Genetic algorithms in particular became popular through the work of [[John Henry Holland|John Holland]] in the early 1970s, and particularly his book ''Adaptation in Natural and Artificial Systems'' (1975). His work originated with studies of [[cellular automata]], conducted by Holland and his students at the [[University of Michigan]]. Holland introduced a formalized framework for predicting the quality of the next generation, known as [[Holland's Schema Theorem]]. Research in GAs remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in [[Pittsburgh, Pennsylvania]].
 
===Commercial products===
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.<ref>{{Cite book|url=https://books.google.com/books?id=-MszVdu_PAMC&q=general+electric+genetic+algorithm+mainframe|title=An Approach to Designing an Unmanned Helicopter Autopilot Using Genetic Algorithms and Simulated Annealing|last=Aldawoodi|first=Namir|year=2008|isbn=978-0549773498|pages=99|via=Google Books}}</ref>{{Circular reference|date=January 2021}}
In 1989, Axcelis, Inc. released [[Evolver (software)|Evolver]], the world's first commercial GA product for desktop computers. [[The New York Times]] technology writer [[John Markoff]] wrote<ref>{{cite news|last=Markoff|first=John|title=What's the Best Answer? It's Survival of the Fittest|newspaper=New York Times|url=https://www.nytimes.com/1990/08/29/business/business-technology-what-s-the-best-answer-it-s-survival-of-the-fittest.html|access-date=2016-07-13|date=29 August 1990}}</ref> about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995.<ref>Ruggiero, Murray A.. (1 August 2009) [http://www.futuresmag.com/2009/08/01/fifteen-years-and-counting?t=technology&page=2 Fifteen years and counting] {{Webarchive|url=https://web.archive.org/web/20160130054823/http://www.futuresmag.com/2009/08/01/fifteen-years-and-counting?t=technology&page=2 |date=30 January 2016 }}. Futuresmag.com. Retrieved on 2013-08-07.</ref> Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.<ref>[http://www.palisade.com/evolver/ Evolver: Sophisticated Optimization for Spreadsheets]. Palisade. Retrieved on 2013-08-07.</ref> Since the 1990s, [[MATLAB]] has included three [[derivative-free optimization]] heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).<ref>{{cite journal | doi=10.1109/ACCESS.2019.2923092 | title=Benchmarks for Evaluating Optimization Algorithms and Benchmarking MATLAB Derivative-Free Optimizers for Practitioners' Rapid Access | year=2019 | last1=Li | first1=Lin | last2=Saldivar | first2=Alfredo Alan Flores | last3=Bai | first3=Yun | last4=Chen | first4=Yi | last5=Liu | first5=Qunfeng | last6=Li | first6=Yun | journal=IEEE Access | volume=7 | pages=79657–79670 | s2cid=195774435 | doi-access=free | bibcode=2019IEEEA...779657L }}</ref>
 
== Related techniques ==
{{See also|List of genetic algorithm applications}}
 
===Parent fields===
Genetic algorithms are a sub-field of:
*[[Evolutionary algorithms]]
*[[Evolutionary computing]]
*[[Metaheuristic]]s
*[[Stochastic optimization]]
*[[Optimization (mathematics)|Optimization]]
 
===Related fields===

====Evolutionary algorithms====
{{More citations needed section|date=May 2011}}
{{main|Evolutionary algorithm}}
Evolutionary algorithms are a sub-field of [[Evolutionary Computation|evolutionary computing]].

* [[Evolution strategy|Evolution strategies]] (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value ___domain.<ref>{{cite book|last=Cohoon|first=J|display-authors=etal|title=Evolutionary algorithms for the physical design of VLSI circuits|url= https://www.ifte.de/mitarbeiter/lienig/cohoon.pdf |archive-url=https://ghostarchive.org/archive/20221009/https://www.ifte.de/mitarbeiter/lienig/cohoon.pdf |archive-date=2022-10-09 |url-status=live|journal=Advances in Evolutionary Computing: Theory and Applications|publisher= Springer, pp. 683-712, 2003|isbn=978-3-540-43330-9|year=2002}}</ref> They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy ([[CMA-ES]]).
* [[Evolutionary programming]] (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
* [[Estimation of Distribution Algorithm]] (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled<ref>{{cite book|last1=Pelikan|first1=Martin|last2=Goldberg|first2=David E.|last3=Cantú-Paz|first3=Erick|title=BOA: The Bayesian Optimization Algorithm|journal=Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1|date=1 January 1999|pages=525–532|url=http://dl.acm.org/citation.cfm?id=2933973|isbn=9781558606111|series=Gecco'99}}</ref><ref>{{cite book|last1=Pelikan|first1=Martin|title=Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms|date=2005|publisher=Springer|___location=Berlin [u.a.]|isbn=978-3-540-23774-7|edition=1st}}</ref> or generated from guided-crossover.<ref>{{cite book|last1=Thierens|first1=Dirk|title=Parallel Problem Solving from Nature, PPSN XI |chapter=The Linkage Tree Genetic Algorithm|date=11 September 2010|pages=264–273|doi=10.1007/978-3-642-15844-5_27|language=en|isbn=978-3-642-15843-8}}</ref>
* [[Genetic programming]] (GP) is a related technique popularized by [[John Koza]] in which computer programs, rather than function parameters, are optimized. Genetic programming often uses [[Tree (data structure)|tree-based]] internal [[data structure]]s to represent the computer programs for adaptation instead of the [[List (computing)|list]] structures typical of genetic algorithms. There are many variants of Genetic Programming, including [[Cartesian genetic programming]], [[Gene expression programming]],<ref>{{cite journal|last=Ferreira|first=C|title=Gene Expression Programming: A New Adaptive Algorithm for Solving Problems|url= http://www.gene-expression-programming.com/webpapers/GEP.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.gene-expression-programming.com/webpapers/GEP.pdf |archive-date=2022-10-09 |url-status=live|journal=Complex Systems |year=2001|volume=13 |issue=2 |pages=87–129|arxiv=cs/0102027|bibcode=2001cs........2027F}}</ref> [[grammatical evolution]], [[Linear genetic programming]], [[Multi expression programming]] etc.
* [[Grouping genetic algorithm]] (GGA) is an evolution of the GA where the focus is shifted from individual items, like in classical GAs, to groups or subsets of items.<ref name="Falkenauer">{{cite book|last=Falkenauer|first=Emanuel|author-link=Emanuel Falkenauer|year=1997|title=Genetic Algorithms and Grouping Problems|publisher=John Wiley & Sons Ltd|___location=Chichester, England|isbn=978-0-471-97150-4}}</ref> The idea behind this GA evolution proposed by [[Emanuel Falkenauer]] is that solving some complex problems, a.k.a. ''clustering'' or ''partitioning'' problems where a set of items must be split into disjoint groups of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include [[bin packing problem|bin packing]], line balancing, [[cluster analysis|clustering]] with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth, is arguably the best technique to date.
* [[Interactive evolutionary algorithm]]s are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.

====Swarm intelligence====
{{main|Swarm intelligence}}
Swarm intelligence is a sub-field of [[Evolutionary Computation|evolutionary computing]].

* [[Ant colony optimization]] ('''ACO''') uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas.
* Although considered an [[Estimation of distribution algorithm]],<ref>{{cite journal|last1=Zlochin|first1=Mark|last2=Birattari|first2=Mauro|last3=Meuleau|first3=Nicolas|last4=Dorigo|first4=Marco|title=Model-Based Search for Combinatorial Optimization: A Critical Survey|journal=Annals of Operations Research|date=1 October 2004|volume=131|issue=1–4|pages=373–395|doi=10.1023/B:ANOR.0000039526.52305.af|language=en|issn=0254-5330|citeseerx=10.1.1.3.427|s2cid=63137}}</ref> [[Particle swarm optimization]] (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems PSO is often more computationally efficient than GAs, especially in unconstrained problems with continuous variables.<ref>Rania Hassan, Babak Cohanim, Olivier de Weck, Gerhard Venter (2005) [https://www.mit.edu/~deweck/PDF_archive/3%20Refereed%20Conference/3_50_AIAA-2005-1897.pdf A comparison of particle swarm optimization and the genetic algorithm]</ref>

====Other evolutionary computing algorithms====
Evolutionary computation is a sub-field of the [[metaheuristic]] methods.

* [[Memetic algorithm]] (MA), often called ''hybrid genetic algorithm'' among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from [[meme]]s, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
* [[Bacteriologic algorithm]]s (BA) are inspired by [[evolutionary ecology]] and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment. So, one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.<ref>{{cite journal|url=http://www.irisa.fr/triskell/publis/2005/Baudry05d.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.irisa.fr/triskell/publis/2005/Baudry05d.pdf |archive-date=2022-10-09 |url-status=live|first=Benoit|last=Baudry |author2=Franck Fleurey |author3-link=Jean-Marc Jézéquel|author3=Jean-Marc Jézéquel |author4=Yves Le Traon|title=Automatic Test Case Optimization: A Bacteriologic Algorithm|date=March–April 2005|pages=76–82|journal=IEEE Software|issue=2|doi=10.1109/MS.2005.30|volume=22|s2cid=3559602|access-date=9 August 2009}}</ref>
* [[Cultural algorithm]] (CA) consists of a population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
* [[Differential evolution]] (DE) is inspired by the migration of superorganisms.<ref>{{cite journal|last=Civicioglu|first=P.|title=Transforming Geocentric Cartesian Coordinates to Geodetic Coordinates by Using Differential Search Algorithm|journal=Computers &Geosciences|year=2012|volume=46|pages=229–247|doi=10.1016/j.cageo.2011.12.011|bibcode=2012CG.....46..229C}}</ref>
* [[Gaussian adaptation]] (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information.<ref>{{cite journal|last=Kjellström|first=G.|title= On the Efficiency of Gaussian Adaptation|journal=Journal of Optimization Theory and Applications|volume=71|issue=3|pages=589–597|date=December 1991|doi= 10.1007/BF00941405|s2cid=116847975}}</ref> Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder ([[average information]]) of the Gaussian simultaneously keeping the [[mean fitness]] constant.

====Other metaheuristic methods====
Metaheuristic methods broadly fall within [[Stochastic optimization|stochastic]] optimisation methods.

* [[Simulated annealing]] (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
* [[Tabu search]] (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
* [[Harmony search]] (HS) is an algorithm mimicking the behaviour of musicians in the process of improvisation.
* [[Extremal optimization]] (EO). Unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes [[local search (optimization)|local]] modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of ''emergent'' improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.

====Other stochastic optimisation methods====
* The [[Cross-entropy method|cross-entropy (CE) method]] generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
* Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular [[reinforcement learning]], [[Active learning (machine learning)|active or query learning]], [[Artificial neural network|neural networks]], and [[metaheuristics]].

==Building block hypothesis==
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why they are often successful at generating solutions of high fitness. The building block hypothesis (BBH) consists of:
# A description of an abstract adaptive mechanism that performs adaptation by recombining "building blocks", i.e., low-order, low defining-length schemata with above-average fitness.
# A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this abstract adaptive mechanism.

Goldberg (1989:41) describes the abstract adaptive mechanism as follows:
:Short, low order, and highly fit [[schema]]ta are sampled, [[crossover (genetic algorithm)|recombined]] [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.

:Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood [building blocks], so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks.

Goldberg (1989) claims that the building block hypothesis is supported by [[Holland's schema theorem]].

The building block hypothesis has been sharply criticized on the grounds that it lacks theoretical justification, and experimental results have been published that draw its veracity into question. On the theoretical side, for example, Wright et al. state that
:"The various claims about GAs that are traditionally made under the name of the ''building block hypothesis'' have, to date, no basis in theory and, in some cases, are simply incoherent"<ref>{{cite conference|last=Wright|first=A.H.|display-authors=etal|year=2003|title=Implicit Parallelism|book-title=Proceedings of the Genetic and Evolutionary Computation Conference}}</ref>

On the experimental side, uniform crossover was seen to outperform one-point and two-point crossover on many of the fitness functions studied by Syswerda.<ref>{{cite conference|last=Syswerda|first=G.|year=1989|title=Uniform crossover in genetic algorithms|editor=J. D. Schaffer|book-title=Proceedings of the Third International Conference on Genetic Algorithms|publisher=Morgan Kaufmann}}</ref> Summarizing these results, Fogel remarks that
:"Generally, uniform crossover yielded better performance than two-point crossover, which in turn yielded better performance than one-point crossover"<ref>{{cite book|last=Fogel|first=David B.|year=2000|title=Evolutionary Computation: Towards a New Philosophy of Machine Intelligence|publisher=IEEE Press|___location=New York|page=140}}</ref>

Syswerda's results contradict the building block hypothesis because uniform crossover is extremely disruptive of short schemata, whereas one- and two-point crossover are more likely to conserve short schemata and combine their defining bits in children produced during recombination.

The debate over the building block hypothesis demonstrates that the question of how GAs "work" (i.e., perform adaptation) is currently far from settled.
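The schema-disruption argument above can be made concrete with a small, illustrative simulation (the operator implementations and the example schema are chosen purely for demonstration, not drawn from any of the cited studies):

```python
import random

random.seed(0)

def one_point_crossover(a, b):
    """Classic one-point crossover: the child inherits contiguous blocks,
    so a short, low-order schema tends to survive intact."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def uniform_crossover(a, b):
    """Uniform crossover: each bit is inherited independently,
    which is far more disruptive of short schemata."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def matches(child, schema):
    """A schema fixes some positions; '*' means 'don't care'."""
    return all(s == '*' or s == c for s, c in zip(schema, child))

# Parent A carries the short schema 11******** (order 2, defining length 1).
parent_a = list('1100000000')
parent_b = list('0011111111')
schema = '11' + '*' * 8

trials = 10_000
kept_rates = {}
for name, op in [('one-point', one_point_crossover), ('uniform', uniform_crossover)]:
    kept = sum(matches(op(parent_a, parent_b), schema) for _ in range(trials))
    kept_rates[name] = kept / trials
    print(f"{name}: schema survived in {kept_rates[name]:.0%} of children")
```

With these parents, one-point crossover transmits the schema in roughly 8 out of 9 trials (it is lost only when the cut falls between the two defining bits), while uniform crossover transmits it only about a quarter of the time, since each defining bit must independently come from parent A.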
==Applications==<!-- This section is linked from [[Genetic algorithm]] -->
* [[Artificial Creativity]]
* Automated design, including research on [[composite material]] design and [[Multi-objective optimization|multi-objective]] design of automotive components for [[crashworthiness]], weight savings, and other characteristics.
* Automated design of [[mechatronics|mechatronic]] systems using [[bond graphs]] and [[genetic programming]] (NSF).
* Automated design of industrial equipment using catalogs of exemplar lever patterns.
* Calculation of [[Bound state]]s and [[Local-density approximation]]s.
* Chemical kinetics ([http://www.personal.leeds.ac.uk/~fuensm/project.html gas] and [http://repositories.cdlib.org/postprints/1154 solid] phases)
* Configuration applications, particularly physics applications of optimal molecule configurations for particular systems like C60 ([[Fullerene|buckyballs]]).
* Container loading optimization.
* [[Code-breaking]], using the GA to search large solution spaces of [[cipher]]s for the one correct decryption.
* Design of [[water distribution systems]].
* [[Distributed computer network]] [[topologies]].
* Electronic circuit design, known as [[Evolvable hardware]].
* File allocation for a [[distributed system]].
* [[Parallelization]] of GAs/GPs, including use of hierarchical decomposition of problem domains and design spaces.
* Nesting of irregular shapes using feature matching and GAs.
* [[Game Theory]] Equilibrium Resolution.
* Learning [[Robot]] behavior using Genetic Algorithms.
* Learning fuzzy rule base using genetic algorithms.
* Linguistic analysis, including [[Grammar Induction]] and other aspects of [[Natural Language Processing]] (NLP) such as word sense disambiguation.
* Mobile communications infrastructure [[Optimization (mathematics)|optimization]].
* Molecular Structure Optimization (Chemistry).
* Multiple population [[topologies]] and interchange [[methodologies]].
* Optimisation of data compression systems, for example using wavelets.
* [[Protein folding]] and protein/[[ligand docking]].
* [[Plant floor layout]].
* Representing rational agents in economic models such as the [[cobweb model]].
* Scheduling applications, including [[job-shop scheduling]]. The objective is to schedule jobs in a sequence-dependent or non-sequence-dependent setup environment in order to maximize the volume of production while minimizing penalties such as tardiness.
* Selection of optimal mathematical model to describe biological systems.
* [[Software engineering]]
* Solving the machine-component grouping problem required for [[cellular manufacturing]] systems.
* Tactical [[asset allocation]] and international equity strategies.
* [[Timetable|Timetabling]] problems, such as designing a non-conflicting class timetable for a large university.
* Training [[artificial neural networks]] when pre-classified training examples are not readily obtainable ([[neuroevolution]]).
* [[Travelling salesman problem]].
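Most of the applications listed above share the same underlying loop: evaluate a population, select parents, recombine, mutate, and repeat. A minimal, self-contained sketch on the toy OneMax problem (maximise the number of 1-bits); all parameter values here are illustrative choices, not canonical settings:

```python
import random

random.seed(1)

# Illustrative parameters: genome length, population size, generations,
# and a per-bit mutation probability of 1/length.
GENOME_LEN, POP_SIZE, GENERATIONS = 40, 60, 80
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    """OneMax: fitness is simply the number of 1-bits."""
    return sum(genome)

def tournament(pop, k=3):
    """Tournament selection: return the fittest of k random individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover producing a single child."""
    p = random.randint(1, GENOME_LEN - 1)
    return a[:p] + b[p:]

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def run():
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Each child: select two parents, recombine, then mutate.
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = run()
print(fitness(best), '/', GENOME_LEN)
```

Note that this sketch has no elitism, so the best individual can be lost between generations; practical implementations often copy the top few individuals forward unchanged.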
 
* The [[Cross-entropy method|cross-entropy (CE) method]] generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
* Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular [[reinforcement learning]], [[Active learning (machine learning)|active or query learning]], [[Artificial neural network|neural networks]], and [[metaheuristics]].
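The cross-entropy update described above can be sketched in a few lines; the Gaussian sampling distribution, the toy quadratic objective, and the parameter values are illustrative assumptions:

```python
import random
import statistics

def cross_entropy_minimize(f, mu=0.0, sigma=5.0, n_samples=100, n_elite=10, n_iters=50):
    """Minimize f by repeatedly sampling candidates from a Gaussian, keeping
    the elite (lowest-cost) samples, and refitting the Gaussian to them."""
    for _ in range(n_iters):
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(samples, key=f)[:n_elite]
        mu = statistics.mean(elite)             # next batch is centred on the elite
        sigma = statistics.stdev(elite) + 1e-9  # tiny floor avoids a degenerate sampler
    return mu

random.seed(0)
best_x = cross_entropy_minimize(lambda x: (x - 3.0) ** 2)  # minimum at x = 3
```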
==See also==
* [[Genetic programming]]
* [[List of genetic algorithm applications]]
* [[particle filter|Genetic algorithms in signal processing (a.k.a. particle filters)]]
* [[Propagation of schema]]
* [[Universal Darwinism]]
* [[Metaheuristics]]
* [[Learning classifier system]]
* [[Rule-based machine learning]]

== References ==
{{Reflist|30em}}
== Bibliography ==
{{Refbegin}}
* {{Cite book | title=Genetic Programming &ndash; An Introduction | last1=Banzhaf | first1=Wolfgang | last2=Nordin | first2=Peter | last3=Keller | first3=Robert | last4=Francone | first4=Frank | year=1998 | isbn=978-1558605107 | publisher=Morgan Kaufmann | ___location=San Francisco, CA | url-access=registration | url=https://archive.org/details/geneticprogrammi00wolf }}
* {{Cite journal|last1=Bies|first1=Robert R. |last2=Muldoon |first2=Matthew F. |last3=Pollock |first3=Bruce G. |last4=Manuck |first4=Steven |last5=Smith |first5=Gwenn |last6=Sale |first6=Mark E. |year=2006|title=A Genetic Algorithm-Based, Hybrid Machine Learning Approach to Model Selection|journal=Journal of Pharmacokinetics and Pharmacodynamics|volume=33 |issue=2 |pages=196–221 |pmid=16565924 |doi=10.1007/s10928-006-9004-6 |s2cid=39571129 }}
* {{Cite journal|last1=Cha|first1=Sung-Hyuk|last2=Tappert |first2=Charles C. |year=2009|title=A Genetic Algorithm for Constructing Compact Binary Decision Trees|journal=Journal of Pattern Recognition Research |volume=4|issue=1|pages=1–13|doi=10.13176/11.44|citeseerx=10.1.1.154.8314 }}
* {{Cite book | last1=Eiben | first1=Agoston | last2=Smith | first2=James | year=2003 | title=Introduction to Evolutionary Computing | publisher=Springer | isbn=978-3540401841 }}
* {{Cite journal|last=Fraser|first=Alex S.|year=1957|title=Simulation of Genetic Systems by Automatic Digital Computers. I. Introduction|journal=Australian Journal of Biological Sciences|volume=10|issue=4|pages=484–491|doi=10.1071/BI9570484 |doi-access=free}}
* {{Cite book| last=Goldberg | first=David | year=1989 | title=Genetic Algorithms in Search, Optimization and Machine Learning | isbn=978-0201157673 | publisher=Addison-Wesley Professional | ___location=Reading, MA }}
* {{Cite book| last=Goldberg | first=David | year=2002 | title=The Design of Innovation: Lessons from and for Competent Genetic Algorithms | publisher=Kluwer Academic Publishers | ___location=Norwell, MA | isbn=978-1402070983 }}
* {{Cite book| last=Fogel | first=David | title=Evolutionary Computation: Toward a New Philosophy of Machine Intelligence | publisher=IEEE Press | ___location=Piscataway, NJ | edition=3rd | isbn=978-0471669517 | year=2006 }}
* {{Cite book | last1=Hingston | first1=Philip | last2=Barone | first2=Luigi | last3=Michalewicz | first3=Zbigniew | title=Design by Evolution: Advances in Evolutionary Design | year=2008 | publisher=Springer | isbn=978-3540741091 }}
* {{Cite book | last=Holland | first=John | title=Adaptation in Natural and Artificial Systems | publisher=MIT Press | ___location=Cambridge, MA | year=1992 | isbn=978-0262581110 | url-access=registration | url=https://archive.org/details/adaptationinnatu00holl }}
* {{Cite book | last=Koza | first=John | title=Genetic Programming: On the Programming of Computers by Means of Natural Selection | year = 1992 | publisher=MIT Press | ___location=Cambridge, MA | isbn=978-0262111706 }}
* {{Cite book | last=Michalewicz | first=Zbigniew | year=1996 | title=Genetic Algorithms + Data Structures = Evolution Programs | publisher=Springer-Verlag | isbn=978-3540606765 }}
* {{Cite book | last=Mitchell | first=Melanie | title=An Introduction to Genetic Algorithms | year=1996 | publisher=MIT Press | ___location=Cambridge, MA | isbn = 9780585030944 }}
* {{Cite book |last1=Poli |first1=R. |last2=Langdon |first2=W. B. |last3=McPhee |first3=N. F. |year=2008 |title=A Field Guide to Genetic Programming | publisher=Lulu.com, freely available from the internet | isbn = 978-1-4092-0073-4 }}{{self-published inline|date=February 2020}}
* Rechenberg, Ingo (1994): Evolutionsstrategie '94, Stuttgart: Fromman-Holzboog.
* {{cite journal |last1=Schmitt |first1=Lothar M. |last2=Nehaniv |first2=Chrystopher L. |last3=Fujii |first3=Robert H. |date=1998 |url=https://www.sciencedirect.com/science/article/pii/S0304397598000048/pdf?md5=28a658a4dc5aef635bbf3c8560129925&pid=1-s2.0-S0304397598000048-main.pdf&_valck=1 |title=Linear analysis of genetic algorithms |journal=Theoretical Computer Science |volume=208 |pages=111&ndash;148 }}
* {{cite journal |last1=Schmitt |first1=Lothar M. |date=2001 |title=Theory of Genetic Algorithms |journal=Theoretical Computer Science |volume=259 |issue=1–2 |pages=1&ndash;61 |doi=10.1016/S0304-3975(00)00406-0 |doi-access=free }}
* {{cite journal |last1=Schmitt |first1=Lothar M. |date=2004 |title=Theory of Genetic Algorithms II: models for genetic operators over the string-tensor representation of populations and convergence to global optima for arbitrary fitness function under scaling |journal=Theoretical Computer Science |volume=310 |issue=1–3 |pages=181&ndash;231 |doi=10.1016/S0304-3975(03)00393-1 |doi-access=free }}
* Schwefel, Hans-Paul (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
* {{Cite book | last=Vose | first=Michael D. | year=1999 | title=The Simple Genetic Algorithm: Foundations and Theory | publisher=MIT Press | ___location=Cambridge, MA | isbn=978-0262220583 | url-access=registration | url=https://archive.org/details/TheSimpleG_00_Vose }}
* {{Cite journal | last=Whitley | first=Darrell | title=A genetic algorithm tutorial | journal=Statistics and Computing | doi=10.1007/BF00175354 | volume=4 | issue=2 | pages=65–85 | year=1994 | url=http://cobweb.cs.uga.edu/~potter/CompIntell/ga_tutorial.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://cobweb.cs.uga.edu/~potter/CompIntell/ga_tutorial.pdf |archive-date=2022-10-09 |url-status=live | citeseerx=10.1.1.184.3999 | s2cid=3447126 }}<!--| access-date=5 January 2013-->
{{Refend}}
 
== External links ==
*[http://fog.neopages.org/tutorials.php Hello World in Genetic Algorithms] A practical tutorial on genetic algorithms that builds a program step by step.
 
=== Resources ===
* [https://web.archive.org/web/20160303215222/http://www.geneticprogramming.com/ga/index.htm Genetic algorithms resources] Provides a list of resources in the genetic algorithms field
* [https://www.staracle.com/general/evolutionaryAlgorithms.php An Overview of the History and Flavors of Evolutionary Algorithms]
 
=== Tutorials ===
*[http://www.hao.ucar.edu/Public/models/pikaia/pikaia.html A Fortran code (PIKAIA) with a tutorial by Paul Charbonneau and Barry Knapp, National Center for Atmospheric Research.] An excellent tutorial and a versatile public ___domain code. PIKAIA is also available in [http://www.ecy.wa.gov/programs/eap/models.html a version for Microsoft Excel], as well as [http://whitedwarf.org/index.html?parallel/&0 a parallel processing version].
* [https://www2.econ.iastate.edu/tesfatsi/holland.gaintro.htm Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand] An introduction to GAs by John Holland, with an application to the Prisoner's Dilemma
* [http://www.i4ai.org/EA-demo/ An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works]: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints.
* [https://web.archive.org/web/20130615042000/http://samizdat.mines.edu/ga_tutorial/ga_tutorial.ps A Genetic Algorithm Tutorial by Darrell Whitley Computer Science Department Colorado State University] An excellent tutorial with much theory
* [http://cs.gmu.edu/~sean/book/metaheuristics/ "Essentials of Metaheuristics"], 2009 (225 p). Free open text by Sean Luke.
* [http://www.it-weise.de/projects/book.pdf Global Optimization Algorithms &ndash; Theory and Application] {{Webarchive|url=https://web.archive.org/web/20080911075107/http://www.it-weise.de/projects/book.pdf |date=11 September 2008 }}
* [https://mpatacchiola.github.io/blog/2017/03/14/dissecting-reinforcement-learning-5.html Genetic Algorithms in Python] Tutorial with the intuition behind GAs and Python implementation.
* [http://www-personal.umich.edu/~axe/research/Evolving.pdf A genetic algorithm evolves strategies for the prisoner's dilemma.] Written by Robert Axelrod.
 
{{Evolutionary computation}}
{{Authority control}}
 
{{DEFAULTSORT:Genetic Algorithm}}
[[Category:Artificial intelligence]]
[[Category:Evolutionary algorithms]]
[[Category:Intelligence]]
[[Category:Genetic algorithms|*]]
[[Category:Optimization algorithms]]
[[Category:Search algorithms]]
[[Category:Cybernetics]]
 
[[ar:خوارزميات وراثية]]
[[ca:Algorisme genètic]]
[[cs:Genetický algoritmus]]
[[da:Genetisk algoritme]]
[[de:Genetischer Algorithmus]]
[[el:Γενετικός Αλγόριθμος]]
[[es:Algoritmo genético]]
[[fa:الگوریتم ژنتیک]]
[[fr:Algorithme génétique]]
[[gl:Algoritmo xenético]]
[[id:Algoritma genetik]]
[[it:Algoritmo genetico]]
[[he:אלגוריתמים גנטיים]]
[[lt:Genetiniai algoritmai]]
[[nl:Genetisch algoritme]]
[[ja:遺伝的アルゴリズム]]
[[pl:Algorytm genetyczny]]
[[pt:Algoritmo genético]]
[[ru:Генетический алгоритм]]
[[fi:Geneettinen algoritmi]]
[[sv:Genetisk algoritm]]
[[tr:Genetik Algoritmalar]]
[[th:ขั้นตอนวิธีเชิงพันธุกรรม]]
[[vi:Giải thuật di truyền]]
[[zh:遗传算法]]