{{Short description|Sets of rules proposed to govern the behaviour of robots}}
{{Robotic laws}}
The best-known laws of robotics are [[Three Laws of Robotics|those written]] by [[Isaac Asimov]] in the 1940s, or laws based upon them, but researchers have proposed other sets of laws in the decades since.
== Isaac Asimov's "Three Laws of Robotics" ==
Asimov's Three Laws, introduced in his 1942 short story "[[Runaround (story)|Runaround]]", are:
# A robot may not injure a human being or, through inaction, allow a human being to come to harm.
# A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
# A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.<ref>{{cite book|last=Asimov|first=Isaac|title=I, Robot|date=1950}}</ref>
In "[[The Evitable Conflict]]" the machines generalize the First Law to mean:
# "No machine may harm humanity; or, through inaction, allow humanity to come to harm."
This was refined at the end of ''[[Foundation and Earth]]'' with the introduction of a Zeroth Law, to which the original three laws are subordinate: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Adaptations and extensions exist based upon this framework; to date they remain a fictional device.
=== Additional laws ===
Authors other than Asimov have often created extra laws.
[[Lyuben Dilov]]'s 1974 novel ''Icarus's Way'' (also known as ''The Trip of Icarus'') introduced a Fourth Law of Robotics: "A robot must establish its identity as a robot in all cases."
Dilov justified the fourth safeguard this way: "The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible. And to the resulting misunderstandings...".<ref>{{cite book
| last = Dilov
| first = Lyuben (aka Lyubin, Luben or Liuben)
| author-link = Lyuben Dilov
| title = Пътят на Икар
| year = 2002
| publisher = Захари Стоянов
| isbn = 978-954-739-338-7}}</ref> In 2024, [[Dariusz Jemielniak]] proposed a similar Fourth Law of Robotics in an article in [[IEEE Spectrum]]: "A robot or AI must not deceive a human by impersonating a human being."<ref>{{Cite web |title=We Need a Fourth Law of Robotics for AI - IEEE Spectrum |url=https://spectrum.ieee.org/isaac-asimov-robotics |access-date=2025-02-03 |website=spectrum.ieee.org |language=en}}</ref><ref>{{Cite web |date=2025-01-24 |title=A Fourth Law of Robotics {{!}} Berkman Klein Center |url=https://cyber.harvard.edu/story/2025-01/fourth-law-robotics |access-date=2025-02-03 |website=cyber.harvard.edu |language=en}}</ref><ref>{{Cite web |date=2025-01-15 |title=Ki kell egészíteni Asimov robotikai törvényeit az AI miatt |url=https://www.blikk.hu/ferfiaknak/tech/robotika-torvenyei/nxbvh73 |access-date=2025-02-03 |website=Blikk |language=hu}}</ref><ref>{{Cite web |last=Tecnológica |first=Site Inovação |date=2025-01-21 |title=Leis da Robótica de Asimov precisam de atualização para IA |url=https://www.inovacaotecnologica.com.br/noticias/noticia.php?artigo=leis-robotica-asimov-precisam-atualizacao-ia&id=010180250121 |access-date=2025-02-03 |website=Site Inovação Tecnológica |language=pt}}</ref><ref>{{Cite news |last=Jaśkowiak |first=Piotr |date=2025-02-01 |title=Asimovowi zabrakło wyobraźni. Potrzebujemy Czwartego Prawa Robotyki |url=https://ssl.audycje.tokfm.pl/podcast/170407,Asimovowi-zabraklo-wyobrazni-Potrzebujemy-Czwartego-Prawa-Robotyki-a-na-antenie-tworzymy-Piate |work=Radio TokFM}}</ref>
A Fifth Law was introduced by [[Nikola Kesarovski]] in his 1983 short story "The Fifth Law of Robotics": "A robot must know it is a robot."
The plot revolves around a murder where the forensic investigation discovers that the victim was killed by a hug from a humaniform robot that did not establish for itself that it was a robot.<ref>{{cite book
| last = Кесаровски
| first = Никола
| author-link = Nikola Kesarovski
| title = Петият закон
| year = 1983
| publisher = Отечество
}}</ref> The story was reviewed by [[Valentin D. Ivanov]] in SFF review webzine ''The Portal''.<ref>{{Cite web |url=http://sffportal.net/2011/06/lawful-little-country-the-bulgarian-laws-of-robotics/#more-2376 |title=Lawful Little Country: The Bulgarian Laws of Robotics {{!}} The Portal<!-- Bot generated title --> |access-date=2023-02-08 |archive-date=2011-10-06 |archive-url=https://web.archive.org/web/20111006052447/http://sffportal.net/2011/06/lawful-little-country-the-bulgarian-laws-of-robotics/#more-2376 |url-status=dead }}</ref>
For the 1989 tribute anthology ''[[Foundation's Friends]]'', [[Harry Harrison (writer)|Harry Harrison]] wrote a story entitled "The Fourth Law of Robotics". This Fourth Law states: "A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law."
In 2013 [[Hutan Ashrafian]] proposed an additional law governing the relationship between artificial intelligences, or between robots themselves – the so-called AIonAI law.<ref>{{cite journal |last= Ashrafian |first= Hutan| year= 2014|title= AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics |journal= Science and Engineering Ethics |volume= 21 |issue= 1 |pages= 29–40 | doi= 10.1007/s11948-013-9513-9 |pmid= 24414678 |s2cid= 2821971}}</ref> This sixth law states: "All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood."
== EPSRC / AHRC principles of robotics ==
In 2011, the [[Engineering and Physical Sciences Research Council]] (EPSRC) and the [[Arts and Humanities Research Council]] (AHRC) of the [[United Kingdom]] jointly published a set of five ethical "principles for designers, builders and users of robots" in the [[wikt:real world|real world]], along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:<ref name="revolution" />
# Robots should not be designed solely or primarily to kill or harm humans.
# Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
# Robots should be designed in ways that assure their safety and security.
# Robots are artefacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
# It should always be possible to find out who is legally responsible for a robot.
The messages intended to be conveyed were:
# We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
# Bad practice hurts us all.
# Addressing obvious public concerns will help us all make progress.
# It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
# To understand the context and consequences of our research, we should work with experts from other disciplines, including: social sciences, law, philosophy and the arts.
# We should consider the ethics of transparency: are there limits to what should be openly available?
# When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
The EPSRC principles are broadly recognised as a useful starting point. In 2016 Tony Prescott organised a workshop to revise these principles, for example to differentiate ethical from legal principles.
==Judicial development==
== Satya Nadella's laws ==
In June 2016, [[Satya Nadella]], chief executive officer of [[Microsoft]], set out rules for artificial intelligence in the magazine ''[[Slate (magazine)|Slate]]'':
# "A.I. must be designed to assist humanity", meaning human autonomy needs to be respected.
# "A.I. must be transparent" meaning that humans should know and be able to understand how they work.
# "A.I. must maximize efficiencies without destroying the dignity of people
# "A.I. must be designed for intelligent privacy" meaning that it earns trust through guarding their information.
# "A.I. must have algorithmic accountability so that humans can undo unintended harm
# "A.I. must guard against bias" so that they must not discriminate against people.
==Tilden's laws of robotics==
[[Mark W. Tilden]] is a robotics physicist who was a pioneer in developing simple robotics. Tilden's three guiding rules for robots are:
# A robot must protect its existence at all costs.
# A robot must obtain and maintain access to its own power source.
# A robot must continually search for better power sources.
==See also==
==References==
{{reflist|30em}}
{{Robotics}}
[[Category:Robotics]]
[[Category:Robotics engineering]]