In 2008 Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate [[existential risk from advanced artificial intelligence]]. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."<ref>{{cite book |author=[[Eliezer Yudkowsky]] |year=2008 |chapter-url=http://intelligence.org/files/AIPosNegFactor.pdf |chapter=Artificial Intelligence as a Positive and Negative Factor in Global Risk |title=Global Catastrophic Risks |pages=308–345 |editor1=Nick Bostrom |editor2=Milan M. Ćirković |access-date=2013-10-19 |archive-date=2013-10-19 |archive-url=https://web.archive.org/web/20131019182403/http://intelligence.org/files/AIPosNegFactor.pdf |url-status=live }}</ref>
[[Steve Omohundro]] says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of [[Instrumental convergence#Basic AI drives|basic "drives"]], such as resource acquisition, [[self-preservation]], and continuous self-improvement, because of the intrinsic nature of any goal-driven system, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.<ref>{{cite journal |last=Omohundro |first=S. M. |date=February 2008 |title=The basic AI drives |journal=Artificial General Intelligence |volume=171 |pages=483–492 |citeseerx=10.1.1.393.8356}}</ref><ref>{{cite book|last1=Bostrom|first1=Nick|title=Superintelligence: Paths, Dangers, Strategies|date=2014|publisher=Oxford University Press|___location=Oxford|isbn=9780199678112|title-link=Superintelligence: Paths, Dangers, Strategies |chapter=Chapter 7: The Superintelligent Will}}</ref>
[[Alexander Wissner-Gross]] says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.<ref>{{cite web | last=Dvorsky | first=George | title=How Skynet Might Emerge From Simple Physics | website=Gizmodo | date=2013-04-26 | url=https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911 | access-date=2021-12-23 | archive-date=2021-10-08 | archive-url=https://web.archive.org/web/20211008105300/https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911 | url-status=live }}</ref><ref>{{cite journal | last1 = Wissner-Gross | first1 = A. D. | author-link1 = Alexander Wissner-Gross | last2 = Freer | first2 = C. E. | author-link2 = Cameron Freer | year = 2013 | title = Causal entropic forces | journal = Physical Review Letters | volume = 110 | issue = 16 | page = 168702 | doi = 10.1103/PhysRevLett.110.168702 | pmid = 23679649 | bibcode = 2013PhRvL.110p8702W | doi-access = free | hdl = 1721.1/79750 | hdl-access = free }}</ref>
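The intuition behind maximizing "future freedom of action" can be sketched in a toy model (an illustration only, not the method of the cited paper): an agent approximates causal path entropy by counting how many distinct states remain reachable within its planning horizon, and moves so as to keep the most futures open. The state space, actions, and horizon here are all hypothetical simplifications.

```python
# Toy sketch of entropy-driven behavior: an agent on a bounded 1-D line
# prefers the action that leaves the largest number of reachable future
# states within its planning horizon.

def reachable_states(start, horizon, lo=0, hi=10):
    """Set of positions reachable from `start` within `horizon` steps of -1/0/+1."""
    states = {start}
    for _ in range(horizon):
        nxt = set()
        for s in states:
            for d in (-1, 0, 1):
                if lo <= s + d <= hi:
                    nxt.add(s + d)
        states = nxt
    return states

def best_action(pos, horizon, lo=0, hi=10):
    """Pick the move that maximizes the count of reachable future states."""
    scores = {}
    for d in (-1, 0, 1):
        if lo <= pos + d <= hi:
            scores[d] = len(reachable_states(pos + d, horizon, lo, hi))
    return max(scores, key=scores.get)

# Near a wall, the agent moves toward the open interior, where more
# futures remain available.
print(best_action(0, horizon=3))   # → 1 (away from the wall at 0)
print(best_action(10, horizon=3))  # → -1 (away from the wall at 10)
```

Whether such "keep options open" behavior reads as friendly or unfriendly depends, on Wissner-Gross's account, on the length of the horizon over which the futures are counted.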
==Public policy==
[[James Barrat]], author of ''[[Our Final Invention]]'', suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the [[International Atomic Energy Agency]], but in partnership with corporations." He urges AI researchers to convene a meeting similar to the [[Asilomar Conference on Recombinant DNA]], which discussed [[risks of biotechnology]].<ref name=Hendry2014 />
[[John McGinnis]] encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily imminent, he suggests a model similar to the [[National Institutes of Health]], where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the [[Machine Intelligence Research Institute]], which generally aims to avoid government involvement in friendly AI.<ref name=McGinnis2010>{{cite journal|last1=McGinnis|first1=John O.|title=Accelerating AI|journal=Northwestern University Law Review|date=Summer 2010|volume=104|issue=3|pages=1253–1270|url=http://www.law.northwestern.edu/LAWREVIEW/Colloquy/2010/12/|access-date=16 July 2014|archive-date=1 December 2014|archive-url=https://web.archive.org/web/20141201201600/http://www.law.northwestern.edu/LAWREVIEW/Colloquy/2010/12/|url-status=live}}</ref>