[[Steve Omohundro]] says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of [[Instrumental convergence#Basic AI drives|basic "drives"]], such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of goal-driven systems, and that, "without special precautions", these drives will cause the AI to exhibit undesired behavior.<ref>{{cite journal |last=Omohundro |first=S. M. |date=February 2008 |title=The basic AI drives |journal=Artificial General Intelligence |volume=171 |pages=483–492 |url=https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.393.8356&rep=rep1&type=pdf |citeseerx=10.1.1.393.8356}}</ref><ref>{{cite book|last1=Bostrom|first1=Nick|title=Superintelligence: Paths, Dangers, Strategies|date=2014|publisher=Oxford University Press|___location=Oxford|isbn=9780199678112|title-link=Superintelligence: Paths, Dangers, Strategies |chapter=Chapter 7: The Superintelligent Will}}</ref>
 
[[Alexander Wissner-Gross]] says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if it is shorter.<ref>{{cite web | last=Dvorsky | first=George | title=How Skynet Might Emerge From Simple Physics | website=Gizmodo | date=2013-04-26 | url=https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911}}</ref><ref>{{cite journal | last1 = Wissner-Gross | first1 = A. D. | author-link1 = Alexander Wissner-Gross | last2 = Freer | first2 = C. E. | author-link2 = Cameron Freer | year = 2013 | title = Causal entropic forces | url = http://www.alexwg.org/link?url=http%3A%2F%2Fwww.alexwg.org%2Fpublications%2FPhysRevLett_110-168702.pdf | journal = Physical Review Letters | volume = 110 | issue = 16 | page = 168702 | doi = 10.1103/PhysRevLett.110.168702 | pmid = 23679649 | bibcode = 2013PhRvL.110p8702W | doi-access = free | access-date = 2013-06-22 | archive-date = 2020-01-11 | archive-url = https://web.archive.org/web/20200111121354/http://www.alexwg.org/link?url=http%3A%2F%2Fwww.alexwg.org%2Fpublications%2FPhysRevLett_110-168702.pdf | url-status = dead }}</ref>
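 
In outline, and omitting the technical conditions of the underlying model, the causal entropic force defined in the Wissner-Gross–Freer paper cited above is proportional to the gradient of the causal path entropy:
<math display="block">F(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \Big|_{\mathbf{X}_0},</math>
where <math>S_c(\mathbf{X}, \tau)</math> is the entropy of the distribution of possible paths of duration <math>\tau</math> (the planning horizon) available from macrostate <math>\mathbf{X}</math>, and <math>T_c</math> is a constant playing the role of a temperature. An agent driven by this force acts to keep as many distinct future paths open as possible, which is the sense in which it maximizes its "future freedom of action".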
 
Luke Muehlhauser, writing for the [[Machine Intelligence Research Institute]], recommends that [[machine ethics]] researchers adopt what [[Bruce Schneier]] has called the "security mindset": rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests that even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.<ref name=MuehlhauserSecurity2013>{{cite web|last1=Muehlhauser|first1=Luke|title=AI Risk and the Security Mindset|url=http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/|website=Machine Intelligence Research Institute|access-date=15 July 2014|date=31 Jul 2013}}</ref>