Human-in-the-loop

 
==Simulation==
In simulation, HITL models may conform to [[human factors]] requirements, as in the case of a [[mockup]]. In this type of simulation, a human is always part of the simulation and consequently influences the outcome in a way that is difficult, if not impossible, to reproduce exactly. HITL also readily allows for the identification of problems and requirements that may not be easily identified by other means of simulation.
 
HITL is often referred to as interactive simulation: a special kind of physical simulation that includes human operators, such as in a [[flight simulator|flight]] or a [[driving simulator]].
 
===Benefits===
Human-in-the-loop allows the user to change the outcome of an event or process. The immersion effectively contributes to a positive transfer of acquired skills into the real world, as demonstrated by trainee pilots using flight simulators.
 
HITL also allows for the acquisition of knowledge regarding how a new process may affect a particular event. HITL lets participants interact with realistic models and attempt to perform as they would in an actual scenario, bringing to the surface issues that would not otherwise be apparent until after a new process has been deployed. A real-world example of HITL simulation as an evaluation tool is its use by the [[Federal Aviation Administration]] (FAA), which allows air traffic controllers to test new automation procedures by directing the activities of simulated air traffic while the effect of the newly implemented procedures is monitored.<ref>{{Cite web |last=Sollenberger |first=R. |date=2005 |title=Human-in-the-Loop Simulation Evaluating the Collocation of the User Request Evaluation Tool |publisher=U.S. Department of Transportation Federal Aviation Administration |url=http://hf.tc.faa.gov/technotes/dot-faa-ct-tn04-28.pdf |access-date=2010-07-19 |archive-url=https://web.archive.org/web/20100609162019/http://hf.tc.faa.gov/technotes/dot-faa-ct-tn04-28.pdf |archive-date=2010-06-09}}</ref>
 
As with most processes, there is always the possibility of [[human error]], which can be reproduced only through HITL simulation. Although much can be done to automate systems, humans typically still need to take the information provided by a system and determine the next course of action based on their judgment and experience. Intelligent systems can go only so far in automating a process; only humans in the simulation can accurately judge the final design. Tabletop simulation may be useful in the very early stages of project development for collecting data to set broad parameters, but the important decisions require human-in-the-loop simulation.<ref>{{Cite web |date=2007 |title='Human-in-the-loop' simulation: The right tool for port design |url=http://www.marinesafety.com/research/documents/HumanintheloopSimulationasPublishedinPortTechnologyInternationalIssue32.pdf |url-status=dead |archive-url=https://web.archive.org/web/20110714034605/http://www.marinesafety.com/research/documents/HumanintheloopSimulationasPublishedinPortTechnologyInternationalIssue32.pdf |archive-date=2011-07-14 |website=marinesafety.com |publisher=Port Technology International}}</ref>
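The control structure described above can be sketched in Python. This is an illustrative toy, not part of any standard HITL toolkit: the random drift dynamics, the setpoint, and the scripted <code>human_decision</code> stand-in for the operator are all assumptions made so the sketch is self-contained and runnable; in a real interactive simulation, <code>human_decision</code> would block on operator input from a console, joystick, or cockpit mockup.

```python
import random

def simulate_step(state, action):
    """Advance the simulated process one step.

    Toy dynamics: the state drifts randomly each step, and the
    operator's action nudges it back toward the setpoint.
    """
    drift = random.uniform(-1.0, 1.0)
    return state + drift + action

def human_decision(state, setpoint):
    """Stand-in for the human operator (an assumption of this sketch).

    A real HITL simulation would wait here for operator input;
    a simple bounded corrective rule keeps the example runnable.
    """
    error = setpoint - state
    return max(-0.5, min(0.5, error))  # bounded corrective action

def run_hitl_simulation(steps=20, setpoint=0.0):
    """Run the loop with the human decision inside every iteration."""
    state = 5.0
    history = []
    for _ in range(steps):
        # The human is consulted on every step, so the outcome
        # depends on their judgment and is hard to reproduce exactly.
        action = human_decision(state, setpoint)
        state = simulate_step(state, action)
        history.append(state)
    return history
```

Because the operator's decision is evaluated inside every iteration, replacing the scripted rule with live input changes the trajectory on every run, which is exactly the irreproducibility property noted above.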
 
=== Within virtual simulation taxonomy ===
*Marine simulators
*[[Video games]]
* Supply chain management simulators<ref>{{Cite journal |last=Pinto |first=Roberto |last2=Mettler |first2=Tobias |last3=Taisch |first3=Marco |date=2013 |title=Managing supplier delivery reliability risk under limited information: Foundations for a human-in-the-loop DSS |url=https://www.sciencedirect.com/science/article/pii/S0167923612002886 |journal=Decision Support Systems |volume=54 |issue=2 |pages=1076–1084 |doi=10.1016/j.dss.2012.10.033 |issn=0167-9236}}</ref>
* [[Digital puppetry]]
 
 
==Weapons==
{{main|Lethal autonomous weapon}}
Three classifications of the degree of human control of autonomous weapon systems were laid out by [[Bonnie Docherty]] in a 2012 [[Human Rights Watch]] report.<ref name=army />
*'''human-in-the-loop''': a human must instigate the action of the weapon (in other words, not fully autonomous)
==See also==
{{Wiktionary|in the loop}}
*[[Humanistic intelligence]], which is intelligence that arises by having the human in the feedback loop of the computational process<ref>{{Cite journal |last=Minsky |first=Marvin |last2=Kurzweil |first2=Ray |last3=Mann |first3=Steve |date=2013 |title=The society of intelligent veillance |url=https://ieeexplore.ieee.org/document/6613095 |journal=Technology and Society |publisher=IEEE}}</ref>
* [[Reinforcement learning from human feedback]]
* [[MIM-104_Patriot#US-led_invasion_of_Iraq_(2003)|MIM-104 Patriot]], an example of a human-on-the-loop lethal autonomous weapon system that posed a threat to friendly forces
 
==References==