In [[machine learning]], '''population-based incremental learning''' ('''PBIL''') is an [[optimization]] [[algorithm]], and an [[estimation of distribution algorithm]].<ref name="Karray">
{{Citation
| last1 = Karray | first1 = Fakhreddine O.
| last2 = de Silva | first2 = Clarence
| title = Soft computing and intelligent systems design
| publisher = Addison Wesley
| isbn = 0-321-11617-8}}</ref> The algorithm was proposed by Shumeet Baluja in 1994. It is simpler than a standard genetic algorithm, and in many cases leads to better results.<ref name="Baluja1">{{Citation
| last1 = Baluja | first1 = Shumeet
| title = Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning
| year = 1994
| periodical = Technical Report
| publisher = Carnegie Mellon University
| place = Pittsburgh, PA
 | issue = CMU-CS-94-163
| citeseerx = 10.1.1.61.8554
}}</ref><ref name="Baluja2">{{Citation
| last1 = Baluja | first1 = Shumeet
| last2 = Caruana | first2 = Rich
| title = Removing the Genetics from the Standard Genetic Algorithm
| year = 1995
| publisher = Morgan Kaufmann Publishers
| pages = 38–46
| citeseerx = 10.1.1.44.5424
}}</ref><ref name="Baluja3">{{Citation
| last1 = Baluja | first1 = Shumeet
| title = An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
| year = 1995
| publisher =
| pages =
| citeseerx = 10.1.1.43.1108
}}</ref>
== Algorithm ==
In PBIL, genes are represented as real values in the range [0,1], indicating the probability that any particular [[allele]] appears in that [[gene]].
The PBIL algorithm is as follows:
# A population is generated from the probability vector.
# The fitness of each member is evaluated and ranked.
# The probability vector (population genotype) is updated toward the fittest individual.
# The probability vector is mutated.
# Repeat steps 1–4 until a stopping criterion is met.
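The update in step 3 pulls each entry of the probability vector toward the corresponding bit of the fittest individual. A minimal sketch of that rule in isolation (a hypothetical helper, not part of the original code; learnRate = 0.1 as in the paper):

```java
public class PbilUpdateSketch {
    // PBIL learning rule for one probability-vector entry:
    // move p toward the best individual's bit by a factor of learnRate.
    static double update(double p, boolean bestBit, double learnRate) {
        return p * (1.0 - learnRate) + (bestBit ? 1.0 : 0.0) * learnRate;
    }

    public static void main(String[] args) {
        // Starting from the uninformative prior p = 0.5, a best individual
        // whose bit is 1 pulls the probability up toward 1 each generation.
        double p = 0.5;
        for (int gen = 0; gen < 3; gen++) {
            p = update(p, true, 0.1);
            System.out.println(p);
        }
    }
}
```

Repeated over many generations, entries of the vector drift toward 0 or 1, concentrating the sampling distribution on good solutions.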
== Source code ==
This is part of the source code of a PBIL optimizer implemented in [[Java (programming language)|Java]]. In the paper, learnRate = 0.1, negLearnRate = 0.075, mutProb = 0.02, and mutShift = 0.05 are used. N = 100 and ITER_COUNT = 1000 are enough for a small problem.
<syntaxhighlight lang="java">
public void optimize() {
    final int totalBits = getTotalBits();
    final double[] probVec = new double[totalBits];
    Arrays.fill(probVec, 0.5);
    bestCost = Double.POSITIVE_INFINITY;

    for (int i = 0; i < ITER_COUNT; i++) {
        // Creates N genes by sampling each bit from the probability vector
        final boolean[][] genes = new boolean[N][totalBits];
        for (boolean[] gene : genes) {
            for (int k = 0; k < gene.length; k++) {
                if (rand.nextDouble() < probVec[k])
                    gene[k] = true;
            }
        }

        // Calculate costs
        final double[] costs = new double[N];
        for (int j = 0; j < N; j++) {
            costs[j] = costFunc.cost(toRealVec(genes[j], domains));
        }

        // Find min and max cost genes
        boolean[] minGene = null, maxGene = null;
        double minCost = Double.POSITIVE_INFINITY, maxCost = Double.NEGATIVE_INFINITY;
        for (int j = 0; j < N; j++) {
            double cost = costs[j];
            if (minCost > cost) {
                minCost = cost;
                minGene = genes[j];
            }
            if (maxCost < cost) {
                maxCost = cost;
                maxGene = genes[j];
            }
        }

        // Compare with the best cost gene
        if (bestCost > minCost) {
            bestCost = minCost;
            bestGene = minGene;
        }

        // Update the probability vector with max and min cost genes
        for (int j = 0; j < totalBits; j++) {
            if (minGene[j] == maxGene[j]) {
                probVec[j] = probVec[j] * (1d - learnRate) +
                        (minGene[j] ? 1d : 0d) * learnRate;
            } else {
                // Best and worst genes disagree: learn away from the worst
                // gene as well, using the extra negative learning rate
                final double learnRate2 = learnRate + negLearnRate;
                probVec[j] = probVec[j] * (1d - learnRate2) +
                        (minGene[j] ? 1d : 0d) * learnRate2;
            }
        }

        // Mutation: occasionally shift an entry toward a random bit
        for (int j = 0; j < totalBits; j++) {
            if (rand.nextDouble() < mutProb) {
                probVec[j] = probVec[j] * (1d - mutShift) +
                        (rand.nextBoolean() ? 1d : 0d) * mutShift;
            }
        }
    }
}
</syntaxhighlight>
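The method above depends on fields and helpers defined elsewhere in its class (costFunc, domains, toRealVec, rand, and the parameters listed earlier). As a rough, self-contained sketch, not the original implementation, the same loop can be run on a hypothetical one-max style cost function (counting zero bits, so the minimum cost of 0 is reached at the all-ones string):

```java
import java.util.Arrays;
import java.util.Random;

public class PbilOneMax {
    // Parameters from the paper; N and ITER_COUNT sized for a small problem.
    static final int N = 100, ITER_COUNT = 1000, TOTAL_BITS = 16;
    static final double LEARN_RATE = 0.1, NEG_LEARN_RATE = 0.075;
    static final double MUT_PROB = 0.02, MUT_SHIFT = 0.05;

    // Hypothetical cost function: number of zero bits (minimized at all ones).
    static double cost(boolean[] gene) {
        int zeros = 0;
        for (boolean b : gene) if (!b) zeros++;
        return zeros;
    }

    static boolean[] optimize(Random rand) {
        double[] probVec = new double[TOTAL_BITS];
        Arrays.fill(probVec, 0.5);
        double bestCost = Double.POSITIVE_INFINITY;
        boolean[] bestGene = null;

        for (int i = 0; i < ITER_COUNT; i++) {
            // Sample a population from the probability vector
            boolean[][] genes = new boolean[N][TOTAL_BITS];
            for (boolean[] gene : genes)
                for (int k = 0; k < TOTAL_BITS; k++)
                    gene[k] = rand.nextDouble() < probVec[k];

            // Find the best (min cost) and worst (max cost) individuals
            boolean[] minGene = null, maxGene = null;
            double minCost = Double.POSITIVE_INFINITY, maxCost = Double.NEGATIVE_INFINITY;
            for (boolean[] gene : genes) {
                double c = cost(gene);
                if (c < minCost) { minCost = c; minGene = gene; }
                if (c > maxCost) { maxCost = c; maxGene = gene; }
            }
            if (minCost < bestCost) { bestCost = minCost; bestGene = minGene; }

            // Learn toward the best gene (harder where the worst disagrees),
            // then apply the occasional random mutation shift
            for (int j = 0; j < TOTAL_BITS; j++) {
                double lr = (minGene[j] == maxGene[j])
                        ? LEARN_RATE : LEARN_RATE + NEG_LEARN_RATE;
                probVec[j] = probVec[j] * (1 - lr) + (minGene[j] ? 1 : 0) * lr;
                if (rand.nextDouble() < MUT_PROB)
                    probVec[j] = probVec[j] * (1 - MUT_SHIFT)
                            + (rand.nextBoolean() ? 1 : 0) * MUT_SHIFT;
            }
        }
        return bestGene;
    }

    public static void main(String[] args) {
        boolean[] best = optimize(new Random(42));
        System.out.println(cost(best)); // converges to the optimum, cost 0
    }
}
```

With these settings the probability vector concentrates on the all-ones string within a few hundred generations, illustrating how PBIL evolves a distribution rather than a population.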
==See also==
* [[Estimation of distribution algorithm]] (EDA)
* [[Learning classifier system|Learning Classifier System]] (LCS)
== References ==
<references/>
[[Category:Articles with example Java code]]