Laws of robotics

{{Short description|Robotics}}
{{Robotic laws}}
The '''laws of robotics''' are any set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of [[robot]]s designed to have a degree of [[autonomy]]. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in [[science fiction]] and [[movie|films]], and are a topic of active [[research and development]] in the fields of [[robotics]] and [[artificial intelligence]].
 
The best-known set of laws are [[Three Laws of Robotics|those written]] by [[Isaac Asimov]] in the 1940s, or laws based upon them, but other sets of laws have been proposed by researchers in the decades since then.
# A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.<ref>{{cite book|last=Asimov|first=Isaac|title=I, Robot|date=1950}}</ref>
 
In "[[The Evitable Conflict]]" the machines generalize the First Law to mean:
 
# "No machine may harm humanity; or, through inaction, allow humanity to come to harm."
 
This was refined at the end of ''[[Foundation and Earth]]'': a zeroth law was introduced, with the original three suitably rewritten as subordinate to it:
:{{ordered list|start=0. |A robot may not injure humanity, or, by inaction, allow humanity to come to harm.}}
 
Adaptations and extensions exist based upon this framework. {{As of|2024}} they remain a "[[fictional device]]".<ref name="revolution">{{cite news|last=Stewart|first=Jon|title=Ready for the robot revolution?|url=https://www.bbc.co.uk/news/technology-15146053|access-date=2011-10-03|newspaper=[[BBC News]]|date=2011-10-03}}</ref>
 
=== Additional laws ===
Authors other than Asimov have often created extra laws.
 
[[Lyuben Dilov]]'s 1974 novel ''Icarus's Way'' (a.k.a. ''The Trip of Icarus'') introduced a Fourth Law of Robotics: "A robot must establish its identity as a robot in all cases."
Dilov explains the need for the fourth safeguard this way: "The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible. And to the resulting misunderstandings...".<ref>{{cite book
| last = Dilov
| first = Lyuben (aka Lyubin, Luben or Liuben)
| author-link = Lyuben Dilov
| title = Пътят на Икар
| year = 2002
| publisher = Захари Стоянов
| isbn = 978-954-739-338-7}}</ref> More formally, in a 2024 article in [[IEEE Spectrum]], [[Dariusz Jemielniak]] proposed a Fourth Law of Robotics: "A robot or AI must not deceive a human by impersonating a human being."<ref>{{Cite web |title=We Need a Fourth Law of Robotics for AI - IEEE Spectrum |url=https://spectrum.ieee.org/isaac-asimov-robotics |access-date=2025-02-03 |website=spectrum.ieee.org |language=en}}</ref><ref>{{Cite web |date=2025-01-24 |title=A Fourth Law of Robotics {{!}} Berkman Klein Center |url=https://cyber.harvard.edu/story/2025-01/fourth-law-robotics |access-date=2025-02-03 |website=cyber.harvard.edu |language=en}}</ref><ref>{{Cite web |date=2025-01-15 |title=Ki kell egészíteni Asimov robotikai törvényeit az AI miatt |url=https://www.blikk.hu/ferfiaknak/tech/robotika-torvenyei/nxbvh73 |access-date=2025-02-03 |website=Blikk |language=hu}}</ref><ref>{{Cite web |last=Tecnológica |first=Site Inovação |date=2025-01-21 |title=Leis da Robótica de Asimov precisam de atualização para IA |url=https://www.inovacaotecnologica.com.br/noticias/noticia.php?artigo=leis-robotica-asimov-precisam-atualizacao-ia&id=010180250121 |access-date=2025-02-03 |website=Site Inovação Tecnológica |language=pt}}</ref><ref>{{Cite news |last=Jaśkowiak |first=Piotr |date=2025-02-01 |title=Asimovowi zabrakło wyobraźni. Potrzebujemy Czwartego Prawa Robotyki |url=https://ssl.audycje.tokfm.pl/podcast/170407,Asimovowi-zabraklo-wyobrazni-Potrzebujemy-Czwartego-Prawa-Robotyki-a-na-antenie-tworzymy-Piate |work=Radio TokFM}}</ref>
 
A fifth law was introduced by [[Nikola Kesarovski]] in his short story "The Fifth Law of Robotics". This fifth law says: "A robot must know it is a robot."
The plot revolves around a murder where the forensic investigation discovers that the victim was killed by a hug from a humaniform robot that did not establish for itself that it was a robot.<ref>{{cite book
| last = Кесаровски
| first = Никола
| author-link = Nikola Kesarovski
| title = Петият закон
| year = 1983
| publisher = Отечество
}}</ref> The story was reviewed by [[Valentin D. Ivanov]] in the SFF review webzine ''The Portal''.<ref>{{Cite web |url=http://sffportal.net/2011/06/lawful-little-country-the-bulgarian-laws-of-robotics/#more-2376 |title=Lawful Little Country: The Bulgarian Laws of Robotics {{!}} The Portal |access-date=2023-02-08 |archive-date=2011-10-06 |archive-url=https://web.archive.org/web/20111006052447/http://sffportal.net/2011/06/lawful-little-country-the-bulgarian-laws-of-robotics/#more-2376 |url-status=dead }}</ref>
 
For the 1989 tribute anthology ''[[Foundation's Friends]]'', [[Harry Harrison (writer)|Harry Harrison]] wrote a story entitled "The Fourth Law of Robotics". This Fourth Law states: "A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law."
 
In 2013 [[Hutan Ashrafian]] proposed an additional law that considered the role of artificial intelligence-on-artificial intelligence interaction, or the relationship between robots themselves – the so-called AIonAI law.<ref>{{cite journal |last= Ashrafian |first= Hutan| year= 2014|title= AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics |journal= Science and Engineering Ethics |volume= 21 |issue= 1 |pages= 29–40 | doi= 10.1007/s11948-013-9513-9 |pmid= 24414678 |s2cid= 2821971}}</ref> This sixth law states: "All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood."
 
== EPSRC / AHRC principles of robotics ==
In 2011, the [[Engineering and Physical Sciences Research Council]] (EPSRC) and the [[Arts and Humanities Research Council]] (AHRC) of the [[United Kingdom]] jointly published a set of five ethical "principles for designers, builders and users of robots" in the [[wikt:real world|real world]], along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:<ref name="revolution" /><ref>{{cite web|title=Principles of robotics: Regulating Robots in the Real World|url=http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/|publisher=[[Engineering and Physical Sciences Research Council]]|access-date=2011-10-03}}</ref><ref>{{cite web|last=Winfield|first=Alan|title=Five roboethical principles – for humans|url=https://www.newscientist.com/article/mg21028111.100-five-roboethical-principles--for-humans.htm|publisher=[[New Scientist]]|access-date=2011-10-03}}</ref>
 
# Robots should not be designed solely or primarily to kill or harm humans.
# We should consider the ethics of transparency: are there limits to what should be openly available?
# When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
The EPSRC principles are broadly recognised as a useful starting point. In 2016 Tony Prescott organised a workshop to revise these principles, e.g. to differentiate ethical from legal principles.<ref>{{cite journal|date=2017|title=Legal vs. ethical obligations – a comment on the EPSRC's principles for robotics|url=https://philpapers.org/rec/MLLLVE|journal=Connection Science|doi=10.1080/09540091.2016.1276516|author=Müller, Vincent C.|volume=29 |issue=2 |pages=137–141 |bibcode=2017ConSc..29..137M |s2cid=19080722 }}</ref>
 
==Judicial development==
 
== Satya Nadella's laws ==
In June 2016, [[Satya Nadella]], the CEO of [[Microsoft Corporation]] at the time, gave an interview to ''[[Slate (magazine)|Slate]]'' in which he reflected on what kinds of principles and goals should be considered by industry and society when discussing artificial intelligences:<ref>{{Cite news|url=http://www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html|title=The Partnership of the Future|last=Nadella|first=Satya|date=2016-06-28|newspaper=[[Slate (magazine)|Slate]]|issn=1091-2339|access-date=2016-06-30}}</ref><ref>{{Cite web|url=https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws|title=Satya Nadella's rules for AI are more boring (and relevant) than Asimov's Three Laws|last=Vincent|first=James|date=2016-06-29|website=[[The Verge]]|publisher=[[Vox Media]]|access-date=2016-06-30}}</ref>
# "A.I. must be designed to assist humanity", meaning human autonomy needs to be respected.
# "A.I. must be transparent" meaning that humans should know and be able to understand how they work.
# "A.I. must maximize efficiencies without destroying the dignity of people."
# "A.I. must be designed for intelligent privacy", meaning that it earns trust through guarding people's information.
# "A.I. must have algorithmic accountability so that humans can undo unintended harm."
# "A.I. must guard against bias" so that it does not discriminate against people.
 
== Tilden's laws ==
[[Mark W. Tilden]] is a robotics physicist who was a pioneer in developing simple robotics.<ref name=wired1>{{cite magazine| url=https://www.wired.com/wired/archive/2.09/tilden.html?pg=1&topic= | magazine=Wired | first=Fred | last=Hapgood | title=Chaotic Robotics | issue=9 | volume=2 | date=September 1994}}</ref>

Tilden later disparaged his earlier work as "wimpy" for having been based on the human-centric Asimov laws. He created three new guiding principles for "wild" robots:<ref name=wired1/><ref>{{cite web |first=Ashley |last=Dunn |url=http://partners.nytimes.com/library/cyber/surf/0605surf.html |title=Machine Intelligence, Part II: From Bumper Cars to Electronic Minds |work=[[The New York Times]] |date=1996-06-05 |access-date=2009-07-26}}</ref><ref>{{cite web |url=http://makezine.com/06/beam/ |title=A Beginner's Guide to BEAM |website=makezine.com |quote=(Most of the article is subscription-only content.)}}</ref>
 
# ''A robot must protect its existence at all costs.''
# ''A robot must obtain and maintain access to its own power source.''
# ''A robot must continually search for better power sources.''
 
Within these three rules, Tilden is basically stating his goal as: "...proctoring a silicon species into sentience, but with full control over the specs. Not plant. Not animal. Something else."<ref>{{cite magazine| url=https://www.wired.com/wired/archive/2.09/tilden.html?pg=2&topic= | magazine=Wired | first=Fred | last=Hapgood | title=Chaotic Robotics (continued) | issue=9 | volume=2 | date=September 1994}}</ref>
 
==See also==
 
==References==
{{reflist|30em}}
 
{{Robotics}}
 
[[Category:Robotics]]
[[Category:Robotics engineering]]