In 2008, Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate [[existential risk from advanced artificial intelligence]]. He explained: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."<ref>{{cite book |author=[[Eliezer Yudkowsky]] |year=2008 |chapter-url=http://intelligence.org/files/AIPosNegFactor.pdf |chapter=Artificial Intelligence as a Positive and Negative Factor in Global Risk |title=Global Catastrophic Risks |pages=308–345 |editor1=Nick Bostrom |editor2=Milan M. Ćirković}}</ref>
[[Steve Omohundro]] says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of [[Instrumental convergence#Basic AI drives|basic "drives"]], such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of any goal-driven system, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.<ref>{{cite journal |last=Omohundro |first=S. M. |date=February 2008 |title=The basic AI drives |journal=Artificial General Intelligence |volume=171 |pages=483–492}}</ref>
[[Alexander Wissner-Gross]] says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if it is shorter than that threshold.<ref>{{cite web | last=Dvorsky | first=George | title=How Skynet Might Emerge From Simple Physics | website=Gizmodo | date=2013-04-26 | url=https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911}}</ref><ref>{{cite journal | last1 = Wissner-Gross | first1 = A. D. | author-link1 = Alexander Wissner-Gross | last2 = Freer | first2 = C. E. | author-link2 = Cameron Freer | year = 2013 | title = Causal entropic forces}}</ref>
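A rough sketch of the formalism behind this claim (the notation is a reconstruction of that used by Wissner-Gross and Freer, so the exact symbols should be read as illustrative): the "causal entropic force" pushes a system toward states that keep the greatest diversity of future paths open within a time horizon <math>\tau</math>, the quantity playing the role of the planning horizon above.

<math display="block">
\mathbf{F}(\mathbf{X}_0, \tau) \;=\; T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \Big|_{\mathbf{X}_0},
\qquad
S_c(\mathbf{X}, \tau) \;=\; -k_B \int \Pr\!\big(x(t) \mid x(0)\big)\, \ln \Pr\!\big(x(t) \mid x(0)\big)\, \mathcal{D}x(t),
</math>

where the path integral runs over trajectories <math>x(t)</math> of duration <math>\tau</math> starting from <math>x(0)</math>, <math>S_c</math> is the causal path entropy, and <math>T_c</math> is a constant analogous to a temperature.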
Luke Muehlhauser, writing for the [[Machine Intelligence Research Institute]], recommends that [[machine ethics]] researchers adopt what [[Bruce Schneier]] has called the "security mindset": rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests that even an AI that only makes accurate predictions and communicates via a text interface could cause unintended harm.<ref name=MuehlhauserSecurity2013>{{cite web|last1=Muehlhauser|first1=Luke|title=AI Risk and the Security Mindset|url=http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/|website=Machine Intelligence Research Institute|access-date=15 July 2014|date=31 July 2013}}</ref>