==Other approaches==
{{See also|AI control problem#Alignment}}
[[Steve Omohundro]] has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.<ref name=Hendry2014>{{cite news|last1=Hendry|first1=Erica R.|title=What Happens When Artificial Intelligence Turns On Us?|url=http://www.smithsonianmag.com/innovation/what-happens-when-artificial-intelligence-turns-us-180949415/|work=Smithsonian|access-date=15 July 2014}}</ref>
[[Seth Baum]] argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities, and so can be constrained by extrinsic measures and motivated by intrinsic ones. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs".<ref>{{Cite journal|last=Baum|first=Seth D.|date=2016-09-28|title=On the promotion of safe and socially beneficial artificial intelligence|journal=AI & Society|volume=32|issue=4|pages=543–551|doi=10.1007/s00146-016-0677-0|s2cid=29012168|issn=0951-5666}}</ref>