{{Use mdy dates|date=October 2023}}
{{Artificial intelligence|Philosophy}}
'''Friendly artificial intelligence''' ('''friendly AI''' or '''FAI''') is hypothetical [[artificial general intelligence]] (AGI) that would have a positive (benign) effect on humanity, or at least [[AI alignment|align]] with human interests.
== Etymology and usage ==
== Coherent extrapolated volition ==
{{Main|Coherent extrapolated volition}}
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."<ref name=cevpaper />