{{Short description|Software user interface}}
{{Machine learning}}
'''Human-in-the-loop''' ('''HITL''') is a model that requires human interaction. The term is used in several contexts, including machine learning, simulation, and autonomous weapon systems.
==Machine learning==
In machine learning, HITL is used in the sense of humans aiding the computer in making correct decisions while building a model.<ref name=medium /> HITL improves on random sampling by selecting the most informative data needed to refine the model.<ref name=schizophrenia>{{cite journal|title=Improving the Applicability of AI for Psychiatric Applications through Human-in-the-loop Methodologies|author1=Chelsea Chandler|author2=Peter W Foltz|author3=Brita Elvevåg|date=26 May 2022|journal=Schizophrenia Bulletin|volume=48|issue=5|pages=949–956|doi=10.1093/schbul/sbac038|pmid=35639561 |pmc=9434423 }}</ref>
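The selection step described above is commonly implemented as uncertainty sampling: the current model scores an unlabeled pool, and the examples it is least sure about are routed to a human annotator. The following is a minimal sketch, not taken from any cited source; the function names and the toy scoring model are illustrative assumptions.

```python
def uncertainty(prob):
    # For a binary classifier, probabilities near 0.5 indicate
    # the model is least certain; scale so 0.5 -> 1.0 and 0/1 -> 0.0.
    return 1.0 - abs(prob - 0.5) * 2.0

def select_for_labeling(unlabeled, predict_proba, batch_size=3):
    """Pick the examples the current model is least sure about,
    so the human in the loop labels the most informative data first."""
    ranked = sorted(unlabeled,
                    key=lambda x: uncertainty(predict_proba(x)),
                    reverse=True)
    return ranked[:batch_size]

def predict(x):
    # Toy stand-in for a trained model: probability rises with
    # distance past a threshold at x = 5, clamped to [0, 1].
    return min(max((x - 5) / 10 + 0.5, 0.0), 1.0)

pool = [0, 2, 4, 5, 6, 8, 10]
print(select_for_labeling(pool, predict))  # items nearest the decision boundary
```

In a real workflow the batch returned here would be labeled by a person, the model retrained, and the loop repeated; this is what distinguishes HITL selection from labeling a random sample.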
==Simulation==
In simulation, HITL models may conform to [[human factors]] requirements as in the case of a [[mockup]].
HITL is often referred to as interactive simulation, a special kind of physical simulation that includes human operators, such as in a [[flight simulator|flight]] or a [[driving simulator]].
===Benefits===
Human-in-the-loop allows the user to change the outcome of an event or process.
HITL also allows for the acquisition of knowledge regarding how a new process may affect a particular event. HITL lets participants interact with realistic models and attempt to perform as they would in an actual scenario, bringing to the surface issues that would not otherwise become apparent until after a new process has been deployed.
As with most processes, there is always the possibility of [[human error]], which can be reproduced only through HITL simulation.
=== Within virtual simulation taxonomy ===
*[[Driving simulators]]
*Marine simulators
* Supply chain management simulators<ref>{{Cite journal |last=Pinto}}</ref>
* [[Digital puppetry]]
==Weapons==
{{main|Lethal autonomous weapon}}
Three classifications of the degree of human control of autonomous weapon systems were laid out by [[Bonnie Docherty]] in a 2012 [[Human Rights Watch]] report.<ref name=army />
*'''human-in-the-loop''': a human must instigate the action of the weapon (in other words, not fully autonomous)
*'''human-on-the-loop''': a human may abort an action
*'''human-out-of-the-loop''': no human action is involved
==See also==
{{Wiktionary|in the loop}}
*[[Humanistic intelligence]]
* [[Reinforcement learning from human feedback]]
* [[MIM-104_Patriot#US-led_invasion_of_Iraq_(2003)|MIM-104 Patriot]] – an example of a human-on-the-loop lethal autonomous weapon system posing a threat to friendly forces
==References==
{{Reflist}}
[[Category:Ethics of science and technology]]
[[Category:Military technology]]
[[Category:Military terminology]]
[[Category:Military robots]]
[[Category:Military simulation]]
[[Category:Emerging technologies]]