Friendly artificial intelligence

In 2008 Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate [[existential risk from advanced artificial intelligence]]. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."<ref>{{cite book |author=[[Eliezer Yudkowsky]] |year=2008 |chapter-url=http://intelligence.org/files/AIPosNegFactor.pdf |chapter=Artificial Intelligence as a Positive and Negative Factor in Global Risk |title=Global Catastrophic Risks |pages=308–345 |editor1=Nick Bostrom |editor2=Milan M. Ćirković}}</ref>
 
[[Steve Omohundro]] says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of [[Instrumental convergence#Basic AI drives|basic "drives"]], such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of any goal-driven system, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.<ref>{{cite journal |last=Omohundro |first=S. M. |date=February 2008 |title=The basic AI drives |journal=Artificial General Intelligence |volume=171 |pages=483–492 |url=https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.393.8356&rep=rep1&type=pdf |citeseerx=10.1.1.393.8356}}</ref><ref>{{cite book|last1=Bostrom|first1=Nick|title=Superintelligence: Paths, Dangers, Strategies|date=2014|publisher=Oxford University Press|___location=Oxford|isbn=9780199678112|title-link=Superintelligence: Paths, Dangers, Strategies |chapter=Chapter 7: The Superintelligent Will}}</ref>
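
A minimal sketch of how such a drive can emerge (a toy model written for this illustration, not taken from Omohundro's paper; the action set, probabilities, and hazard rate are all invented): a brute-force planner whose objective scores nothing but goal completion still puts resource acquisition ahead of direct goal pursuit, because resources raise each pursuit step's chance of success.

<syntaxhighlight lang="python">
# Toy model: a planner that maximizes only the probability of
# completing its goal; the objective has no term for resources
# or survival.
import itertools

ACTIONS = ["pursue_goal", "acquire_resources", "preserve_self"]

def success_probability(plan, resources=1.0, alive=1.0):
    """Crude model: each goal-pursuit step succeeds with a chance that
    grows with accumulated resources and shrinks as survival odds drop."""
    p_success = 0.0
    for action in plan:
        if action == "acquire_resources":
            resources += 1.0
        elif action == "preserve_self":
            alive = min(1.0, alive + 0.2)
        elif action == "pursue_goal":
            p_step = alive * resources / (resources + 2.0)
            p_success += (1.0 - p_success) * p_step
        alive *= 0.9  # background hazard after every step
    return p_success

# Exhaustively score all three-step plans.
best = max(itertools.product(ACTIONS, repeat=3), key=success_probability)
print(best)  # ('acquire_resources', 'pursue_goal', 'pursue_goal')
</syntaxhighlight>

The objective never mentions resources, yet with these numbers the optimal plan begins by acquiring them; an instrumental subgoal appears without being programmed in, which is the substance of Omohundro's claim.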
 
[[Alexander Wissner-Gross]] says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.<ref>{{cite web | last=Dvorsky | first=George | title=How Skynet Might Emerge From Simple Physics | website=Gizmodo | date=2013-04-26 | url=https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911}}</ref><ref>{{cite journal | last1 = Wissner-Gross | first1 = A. D. | author-link1 = Alexander Wissner-Gross | last2 = Freer | first2 = C. E. | author-link2 = Cameron Freer | year = 2013 | title = Causal entropic forces | url = http://www.alexwg.org/link?url=http%3A%2F%2Fwww.alexwg.org%2Fpublications%2FPhysRevLett_110-168702.pdf| journal = Physical Review Letters | volume = 110 | issue = 16| page = 168702 | doi = 10.1103/PhysRevLett.110.168702 | pmid = 23679649 | bibcode=2013PhRvL.110p8702W| doi-access = free }}</ref>
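
In the underlying paper, the causal entropic force is the gradient of the entropy of possible future paths over a horizon <math>\tau</math>, <math>F = T_c \nabla_X S_c(X, \tau)</math>, with <math>\tau</math> a free parameter. A toy proxy for this idea (written for this illustration, not the authors' algorithm: it counts reachable grid cells instead of computing path entropies, and the maze and function names are invented) shows how the horizon alone can flip an agent's choice:

<syntaxhighlight lang="python">
# Toy proxy for "maximizing future freedom of action": score each move
# by how many distinct cells remain reachable within a horizon tau.
from collections import deque

GRID = ["############",
        "#..#####...#",
        "#..#####...#",
        "#..........#",
        "#..#####...#",
        "############"]

def reachable(start, tau):
    """Count distinct free cells reachable from `start` in <= tau moves."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if d == tau:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if GRID[nxt[0]][nxt[1]] == "." and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

def entropic_move(pos, tau):
    """Pick the neighboring cell that preserves the most future options."""
    r, c = pos
    moves = [(r + dr, c + dc)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if GRID[r + dr][c + dc] == "."]
    return max(moves, key=lambda m: reachable(m, tau - 1))

# From the corridor cell (3, 3), a short horizon favors the small room
# on the left; a longer one favors the corridor to the larger room.
print(entropic_move((3, 3), tau=3))   # (3, 2)
print(entropic_move((3, 3), tau=10))  # (3, 4)
</syntaxhighlight>

Nothing changes between the two calls except the horizon; on Wissner-Gross's account it is this parameter, not the option-preserving drive itself, that separates friendly from unfriendly behavior.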
 
Some critics believe that both human-level AI and superintelligence are unlikely, and that therefore friendly AI is unlikely. Writing in ''[[The Guardian]]'', Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|access-date=17 September 2014|work=[[The Guardian]]}}</ref> Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and [[Nick Bostrom]]'s proposal to create friendly AIs appears bleak. This is because Muehlhauser and Bostrom seem to hold that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had.<ref name=think13 /> In an article in ''[[AI & Society]]'', Boyles and Joaquin maintain that such AIs would not be that friendly, considering the following: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine; the difficulty of cashing out the set of moral values, that is, those more ideal than the ones human beings possess at present; and the apparent disconnect between the counterfactual antecedents and the ideal value consequent.<ref name=boyles2019 />
 
Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.<ref>{{cite journal | last=Kornai | first=András | title=Bounding the impact of AGI | journal=Journal of Experimental & Theoretical Artificial Intelligence | publisher=Informa UK Limited | volume=26 | issue=3 | date=2014-05-15 | issn=0952-813X | doi=10.1080/0952813x.2014.895109 | pages=417–438 | s2cid=7067517 |quote=...the essence of AGIs is their reasoning facilities, and it is the very logic of their being that will compel them to behave in a moral fashion... The real nightmare scenario (is one where) humans find it advantageous to strongly couple themselves to AGIs, with no guarantees against self-deception.}}</ref> Other critics question whether it is possible for an artificial intelligence to be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal ''[[The New Atlantis (journal)|The New Atlantis]]'', say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes."<ref>{{cite web|url=http://www.thenewatlantis.com/publications/the-problem-with-friendly-artificial-intelligence|author=Adam Keiper and Ari N. Schulman|title=The Problem with 'Friendly' Artificial Intelligence|publisher=The New Atlantis|access-date=2012-01-16}}</ref>
 
==See also==