Human-based evolutionary computation

===Human-based selection strategy===
 
Human-based selection strategy is the simplest human-based evolutionary computation procedure. It is used heavily today by websites that outsource the collection and selection of content to humans (user-contributed content). Viewed as evolutionary computation, the mechanism supports two operations: initialization (when a user adds a new item) and selection (when a user expresses a preference among items). The website software aggregates the preferences to compute the fitness of items, so that it can promote the fittest items and discard the worst ones. Several methods of human-based selection were analytically compared in studies by Kosorukoff<ref name="kosorukoff2000">{{cite book |last1=Kosorukoff |first1=A. |title=2001 IEEE International Conference on Systems, Man and Cybernetics. E-Systems and e-Man for Cybernetics in Cyberspace (Cat.No.01CH37236) |chapter=Human based genetic algorithm |date=2001 |volume=5 |pages=3464–3469 |doi=10.1109/ICSMC.2001.972056 |isbn=0-7803-7087-2 |s2cid=13839604 }}</ref> and Gentry.<ref name="gentry2005">{{cite book |last1=Gentry |first1=Craig |last2=Ramzan |first2=Zulfikar |last3=Stubblebine |first3=Stuart |title=Proceedings of the 6th ACM conference on Electronic commerce - EC '05 |chapter=Secure distributed human computation |date=2005 |pages=155–164 |doi=10.1145/1064009.1064026 |isbn=1595930493 |s2cid=56469 }}</ref>
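The two operations and the fitness aggregation described above can be sketched as follows. This is an illustrative toy model, not code from any cited system; the class and method names (<code>SelectionSite</code>, <code>initialize</code>, <code>select</code>, <code>evolve</code>) and the net-vote fitness measure are assumptions chosen for clarity.

```python
class SelectionSite:
    """Toy sketch of human-based selection: humans contribute items
    and express pairwise preferences; software aggregates fitness."""

    def __init__(self):
        self.items = {}  # item -> aggregated fitness (net preference count)

    def initialize(self, item):
        """Initialization: a user contributes a new item."""
        self.items.setdefault(item, 0)

    def select(self, preferred, rejected):
        """Selection: a user expresses a preference between two items."""
        self.items[preferred] += 1
        self.items[rejected] -= 1

    def evolve(self, keep=2):
        """Promote the fittest items, discard the rest, and return
        the survivors ranked by aggregated fitness."""
        ranked = sorted(self.items, key=self.items.get, reverse=True)
        self.items = {item: self.items[item] for item in ranked[:keep]}
        return ranked[:keep]
```

In this sketch fitness is simply the net count of times an item was preferred over another; real sites may use weighted or time-decayed aggregates instead.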
 
Because the concept seems so simple, most websites implementing the idea fall into a common pitfall: an [[informational cascade]] in soliciting human preferences. For example, [[digg]]-style implementations, pervasive on the web, heavily bias subsequent human evaluations by displaying how many votes each item has already received. This makes the aggregated evaluation depend on a very small initial sample of rarely independent evaluations, and it encourages many people to [[game the system]], which may add to Digg's popularity but detracts from the quality of the featured results. It is too easy to submit an evaluation in a Digg-style system based only on the content title, without reading the actual content being evaluated.
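The cascade effect described above can be illustrated with a toy simulation. All parameters here are assumptions for illustration (the 80% herding probability, the single scalar "quality", the voter count); the point is only that when vote counts are shown, the outcome is driven by the first few votes rather than by item quality.

```python
import random

def simulate(show_counts, quality=0.3, voters=2000, seed=0):
    """Toy informational-cascade model (illustrative assumptions).

    Each voter either judges independently (upvoting with probability
    `quality`) or, when running vote counts are displayed, copies the
    visible majority with 80% probability.
    """
    rng = random.Random(seed)
    up = down = 0
    for _ in range(voters):
        if show_counts and (up + down) > 0 and rng.random() < 0.8:
            vote_up = up >= down               # herd on the visible majority
        else:
            vote_up = rng.random() < quality   # independent judgement
        if vote_up:
            up += 1
        else:
            down += 1
    return up / voters                          # final upvote fraction
```

With counts hidden, the upvote fraction tracks the item's quality; with counts shown, the early majority tends to lock in, and a mediocre item can end up near-unanimously approved (or a good one buried) depending on its small initial sample of votes.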