Let's use Wikipedia to share knowledge, not to highlight our own work. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/74.194.214.87|74.194.214.87]] ([[User talk:74.194.214.87|talk]]) 22:54, 23 November 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
== Questioning the no assumption statement ==
The statement I am talking about is in the second paragraph. I am finishing a Ph.D. in EC and AI with a specific focus on the biases of optimization algorithms, and I do not believe that evolutionary algorithms make no assumptions, or have no biases, about the search space. Indeed, by the No Free Lunch theorem, no algorithm can outperform even the extremely rudimentary random-sampling algorithm unless its biases are actually useful for the problem; random sampling is, in fact, the only algorithm with no biases.
The core bias in evolutionary computation is the (typical) binary search space. Without this bias, the search space would simply be a set of points with no relationships between them. The binary encoding introduces a bias: an assumed correlation between fitness and Hamming distance, whether or not it actually holds. This is why problems for which it does hold, such as Gen 1-MAX, are easy for EAs, with O(n log n) runtime (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.5599). Another way of saying this is that talking about a fitness landscape at all is itself a bias; if you truly had no bias, there would be no landscape to speak of, just an S -> R map.
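To make the point concrete, here is a minimal sketch (all names and parameters are my own illustration, not from any particular paper) of a (1+1) EA on ONE-MAX. Its success depends entirely on the bias described above: standard bit-flip mutation makes small Hamming steps, and on ONE-MAX fitness is perfectly correlated with Hamming distance to the optimum, which is what yields the O(n log n) behaviour.

```python
import random

def onemax(bits):
    """Fitness = number of 1-bits; maximized by the all-ones string."""
    return sum(bits)

def one_plus_one_ea(n, max_evals=100000, seed=0):
    """Minimal (1+1) EA: flip each bit independently with probability 1/n,
    accept the offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = onemax(parent)
    evals = 1
    while fitness < n and evals < max_evals:
        # Standard bit-flip mutation: small expected Hamming distance (1 bit).
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        child_fitness = onemax(child)
        evals += 1
        if child_fitness >= fitness:
            parent, fitness = child, child_fitness
    return parent, evals

best, evals = one_plus_one_ea(20, seed=1)
```

On a deceptive or random S -> R map, where fitness tells you nothing about Hamming distance to the optimum, the same algorithm degenerates to something no better than random sampling, which is exactly the No Free Lunch point.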