==Coherent extrapolated volition==
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."
Rather than being designed directly by human programmers, a Friendly AI is to be designed by a "seed AI" programmed to first study [[human nature]] and then produce the AI which humanity would want, given sufficient time and insight to arrive at a satisfactory answer.<ref name=cevpaper>{{cite web |url=https://intelligence.org/files/CEV.pdf |title=Coherent Extrapolated Volition |publisher=Singularity Institute for Artificial Intelligence |year=2004 |access-date=2015-09-12 |author=Eliezer Yudkowsky |archive-date=2015-09-30 |archive-url=https://web.archive.org/web/20150930035316/http://intelligence.org/files/CEV.pdf |url-status=live }}</ref> The appeal to an [[evolutionary psychology|objective through contingent human nature]] (perhaps expressed, for mathematical purposes, in the form of a [[utility function]] or other [[decision theory|decision-theoretic]] formalism) as the ultimate criterion of "Friendliness" is an answer to the [[metaethics|meta-ethical]] problem of defining an [[moral universalism|objective morality]]. Extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.