Friendly artificial intelligence

{{Use mdy dates|date=October 2023}}
{{Artificial intelligence|Philosophy}}
'''Friendly artificial intelligence''' ('''friendly AI''' or '''FAI''') is hypothetical [[artificial general intelligence]] (AGI) that would have a positive (benign) effect on humanity or at least [[AI alignment|align]] with human interests, such as fostering the improvement of the human species. It is a part of the [[ethics of artificial intelligence]] and is closely related to [[machine ethics]]. While machine ethics is concerned with how an artificially intelligent agent ''should'' behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
 
== Etymology and usage ==
 
== Coherent extrapolated volition ==
{{Main|Coherent extrapolated volition}}
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted".<ref name=cevpaper />