{{Short description|Human research factorization and quantification system}}
'''Human performance modeling''' ('''HPM''') is a method of quantifying human behavior, cognition, and processes. It is a tool used by human factors researchers and practitioners both to analyze human function and to develop systems designed for effective user interaction.
== History ==
The [[Human Factors and Ergonomics Society]] (HFES) formed the [https://sites.google.com/view/hfes-hpmtg/ Human Performance Modeling Technical Group] in 2004. Although a recent discipline, [[Human factors and ergonomics|human factors]] practitioners have been constructing and applying models of human performance since [[World War II]]. Notable early examples of human performance models include Paul Fitts' model of aimed motor movement (1954).<ref name=":1">{{cite journal | last1 = Byrne | first1 = Michael D. | last2 = Pew | first2 = Richard W. | year = 2009 | title = A history and primer of human performance modeling | journal = Reviews of Human Factors and Ergonomics | volume = 5 | issue = 1 }}</ref>
Individual models vary in their origins, but share in their application and use for issues in the human factors perspective. These can be models of the products of human performance (e.g., a model that produces the same decision outcomes as human operators), the processes involved in human performance (e.g., a model that simulates the processes used to reach decisions), or both. Generally, they are regarded as belonging to one of three areas: perception & attention allocation, command & control, or cognition & memory; although models of other areas such as emotion, motivation, and social/group processes continue to grow within the discipline. Integrated models are also of increasing importance. Anthropometric and biomechanical models are also crucial human factors tools in research and practice, and are used alongside other human performance models, but they have an almost entirely separate intellectual history, being concerned more with static physical qualities than with processes or interactions.<ref name=":1" />
The models are applicable in a number of industries and domains including military,<ref>Lawton, C. R., Campbell, J. E., & Miller, D. P. (2005). ''Human performance modeling for system of systems analytics: soldier fatigue'' (No. SAND2005-6569). Sandia National Laboratories.</ref><ref>Mitchell, D. K., & Samms, C. (2012). An Analytical Approach for Predicting Soldier Workload and Performance Using Human Performance Modeling. ''Human-Robot Interactions in Future Military Operations''.</ref> aviation,<ref>Foyle, D. C., & Hooey, B. L. (Eds.). (2007). ''Human performance modeling in aviation''. CRC Press.</ref> nuclear power,<ref>O’Hara, J. (2009). ''Applying Human Performance Models to Designing and Evaluating Nuclear Power Plants: Review Guidance and Technical Basis'' (BNL-90676-2009). Upton, NY: Brookhaven National Laboratory.</ref> and automotive,<ref>{{cite journal | last1 = Lim | first1 = J. H. | last2 = Liu | first2 = Y. | last3 = Tsimhoni | first3 = O. | year = 2010 | title = Investigation of driver performance with night-vision and pedestrian-detection systems—Part 2: Queuing network human performance modeling }}</ref> among others.
== Model Categories ==
==== Pointing ====
Pointing at stationary targets such as buttons, windows, images, menu items, and controls on computer displays is commonplace and has a well-established modeling tool for analysis - [[Fitts's law|Fitts' law]] (Fitts, 1954) - which states that the time to make an aimed movement (MT) is a linear function of the index of difficulty: MT = a + b log<sub>2</sub>(2A/W), where A is the distance to the target, W is the target width, and a and b are empirically determined constants.
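For illustration, the Fitts' law prediction above can be computed directly; the coefficients ''a'' and ''b'' in the sketch below are assumed values rather than empirically fitted ones.

<syntaxhighlight lang="python">
import math

def fitts_movement_time(a, b, distance, width):
    """Predicted movement time under Fitts' law: MT = a + b * log2(2A/W).

    a, b: empirically fitted intercept and slope (assumed values below);
    distance: distance A to the target centre; width: target width W.
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# Illustrative coefficients (seconds and seconds/bit), not fitted data:
print(round(fitts_movement_time(a=0.1, b=0.15, distance=200, width=20), 2))  # ~0.75 s
</syntaxhighlight>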
==== [[Control theory|Manual Control Theory]] ====
===== [[Visual search|Visual Search]] =====
A well-developed area of attention modeling concerns the control of visual attention - models that attempt to answer, "Where will an individual look next?" A subset of this concerns the question of visual search: How rapidly can a specified object in the visual field be located? This is a common subject of concern for human factors in a variety of domains, with a substantial history in cognitive psychology. This research continues with modern conceptions of [[Salience (neuroscience)|salience]] and [http://www.scholarpedia.org/article/Saliency_map salience maps]. Human performance modeling techniques in this area include the work of Melloy, Das, Gramopadhye, and Duchowski (2006) regarding [[Markov models]] designed to provide upper- and lower-bound estimates on the time taken by a human operator to scan a homogeneous display.<ref>{{cite journal | last1 = Melloy | first1 = B. J. | last2 = Das | first2 = S. | last3 = Gramopadhye | first3 = A. K. | last4 = Duchowski | first4 = A. T. | year = 2006 | title = A model of extended, semisystematic visual search | url =http://andrewd.ces.clemson.edu/research/vislab/docs/MDGD-HF-2006.pdf | journal = Human Factors: The Journal of the Human Factors and Ergonomics Society | volume = 48 | issue = 3| pages = 540–554 | doi=10.1518/001872006778606840| pmid = 17063968 | s2cid = 686156 }}</ref> Another example, from Witus and Ellis (2003), is a computational model of the detection of ground vehicles in complex images.<ref>{{cite journal | last1 = Witus | first1 = G. | last2 = Ellis | first2 = R. D. | year = 2003 | title = Computational modeling of foveal target detection | journal = Human Factors }}</ref>
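As a much-simplified illustration of the kind of bounds such models produce (not the Melloy et al. formulation itself), a perfectly systematic search without re-fixation and a memoryless random search bracket the expected scan time; the item count and fixation duration below are assumed.

<syntaxhighlight lang="python">
def scan_time_bounds(n_items, t_fixation):
    """Bounds on mean time to find a target among n_items equally likely locations.

    Lower bound: systematic search without re-fixation, mean (n + 1) / 2 fixations.
    Upper bound: memoryless random search with re-fixation, mean n fixations.
    """
    lower = (n_items + 1) / 2 * t_fixation
    upper = n_items * t_fixation
    return lower, upper

print(scan_time_bounds(n_items=40, t_fixation=0.3))  # (6.15, 12.0) seconds
</syntaxhighlight>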
==== Visual Sampling ====
Many domains contain multiple displays and require more than a simple discrete yes/no response-time measurement. A critical question for these situations may be "How much time will operators spend looking at X relative to Y?" or "What is the likelihood that the operator will completely miss seeing a critical event?" Visual sampling is the primary means of obtaining information from the world.<ref name=":3">Cassavaugh, N. D., Bos, A., McDonald, C., Gunaratne, P., & Backs, R. W. (2013). Assessment of the SEEV Model to Predict Attention Allocation at Intersections During Simulated Driving. In ''7th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design'' (No. 52).</ref> An early model in this ___domain is Senders' (1964, 1983), based upon operators' monitoring of multiple dials, each with a different rate of change.<ref>Senders, J. W. (1964). The human operator as a monitor and controller of multidegree of freedom systems. ''IEEE Transactions on Human Factors in Electronics'', (1), 2-5.</ref><ref>Senders, J. W. (1983). ''Visual sampling processes'' (Doctoral dissertation, Universiteit van Tilburg).</ref> Operators try, as best as they can, to reconstruct the original set of dials based on discrete samples. This relies on the [[Nyquist–Shannon sampling theorem|Nyquist theorem]], which states that a signal of bandwidth W Hz can be reconstructed from samples taken every 1/(2W) seconds (i.e., at a rate of 2W samples per second). This was combined with a measure of the information generation rate for each signal to predict the optimal sampling rate and dwell time for each dial. Human limitations prevent human performance from matching optimal performance, but the predictive power of the model influenced future work in this area, such as Sheridan's (1970) extension of the model with considerations of access cost and information sample value.<ref name=":1" /><ref>{{cite journal | last1 = Sheridan | first1 = T | year = 1970 | title = On how often the supervisor should sample }}</ref>
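A minimal sketch of the sampling-rate calculation at the core of Senders' model follows; the instrument bandwidths below are assumed for illustration, not Senders' original values.

<syntaxhighlight lang="python">
def min_sampling_interval(bandwidth_hz):
    """Longest sampling interval (s) that still allows reconstruction of a
    signal of the given bandwidth, per the Nyquist theorem (rate >= 2W)."""
    return 1.0 / (2.0 * bandwidth_hz)

# Hypothetical instrument bandwidths in Hz (illustrative only).
dials = {"altimeter": 0.10, "airspeed": 0.25, "heading": 0.05}
for name, w in dials.items():
    print(f"{name}: sample at least every {min_sampling_interval(w):.1f} s")
</syntaxhighlight>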
A modern conceptualization is the salience, effort, expectancy, and value (SEEV) model of Wickens et al. (2008). It was developed by the researchers (Wickens et al., 2001) as a model of scanning behavior describing the probability that a given area of interest (AOI) will attract attention. The SEEV model is described by '''''p(A) = sS - efEF + (exEX)(vV)''''', in which ''p(A)'' is the probability that a particular area will be sampled; ''S'' is the ''salience'' of that area; ''EF'' represents the ''effort'' required to reallocate attention to a new AOI, related to the distance from the currently attended ___location to the AOI; ''EX'' (''expectancy'') is the expected event rate (bandwidth); and ''V'' is the value of the information in that AOI, represented as the product of relevance and priority (R*P).<ref name=":3" /> The lowercase terms are scaling constants. This equation allows for the derivation of optimal and normative models of how an operator should behave, and for characterizing how operators actually behave. Wickens et al. (2008) also generated a version of the model that does not require absolute estimation of the free parameters for the environment - only the comparative salience of other regions relative to the region of interest.<ref name=":1" />
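A minimal sketch of the SEEV computation is shown below; the AOI ratings and equal scaling constants are assumed, and normalizing the weights into sampling probabilities is one common way of using the model rather than part of the equation itself.

<syntaxhighlight lang="python">
def seev_weight(salience, effort, expectancy, value, s=1.0, ef=1.0, ex=1.0, v=1.0):
    """SEEV weight for one area of interest: sS - efEF + (exEX)(vV)."""
    return s * salience - ef * effort + (ex * expectancy) * (v * value)

# Hypothetical AOIs rated on arbitrary 0-3 scales (illustrative only).
aois = {
    "roadway":     dict(salience=2, effort=0, expectancy=3, value=3),
    "speedometer": dict(salience=1, effort=1, expectancy=1, value=2),
}
weights = {name: seev_weight(**r) for name, r in aois.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"P(attend {name}) ~ {w / total:.2f}")  # roadway ~0.85, speedometer ~0.15
</syntaxhighlight>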
==== [[Workload]] ====
Although an exact definition or method for measuring the construct of workload is debated within the human factors community, a critical part of the notion is that human operators have some capacity limitations and that such limitations can be exceeded only at the risk of degrading performance. For physical workload, it may be understood that there is a maximum amount a person should be asked to lift repeatedly, for example. However, the notion of workload becomes more contentious when the capacity to be exceeded is attentional - what are the limits of human attention, and what exactly is meant by attention?
Byrne and Pew (2009) consider an example of a basic workload question: "To what extent do tasks A and B interfere?" These researchers point to the ''[[psychological refractory period]]'' (PRP) paradigm as the basic laboratory approach to this question. Participants perform two choice reaction-time tasks, and the two tasks interfere to a degree - especially when the participant must react to the stimuli for the two tasks when they are close together in time - but the degree of interference is typically smaller than the total time taken for either task. The ''response selection bottleneck model'' (Pashler, 1994) models this situation well - each task has three components: perception, response selection (cognition), and motor output. The attentional limitation - and thus the locus of workload - is that response selection can only be done for one task at a time. The model makes numerous accurate predictions, and those for which it cannot account are addressed by cognitive architectures (Byrne & Anderson, 2001; Meyer & Kieras, 1997). In simple dual-task situations, attention and workload are quantified, and meaningful predictions become possible.<ref name=":1" />
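A minimal sketch of the serial response-selection bottleneck logic described above follows; the stage durations and stimulus onset asynchronies (SOAs) are assumed for illustration.

<syntaxhighlight lang="python">
def rt_task2(soa, p1, rs1, p2, rs2, m2):
    """Response time to task 2 (ms from its own stimulus onset) when response
    selection is a serial bottleneck: task 2's selection cannot start until
    its own perception ends AND task 1's response selection has finished."""
    bottleneck_free = p1 + rs1                 # when task 1 releases the bottleneck
    rs2_start = max(soa + p2, bottleneck_free)
    return rs2_start + rs2 + m2 - soa

# Illustrative stage durations (ms); note the slowing at short SOAs (the PRP effect).
for soa in (50, 150, 300, 600):
    print(soa, rt_task2(soa, p1=100, rs1=150, p2=100, rs2=150, m2=100))
</syntaxhighlight>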
Although multiple resource theory is the best-known workload model in human factors, it is often represented only qualitatively. Detailed computational implementations are better alternatives for application in HPM methods, including the Horrey and Wickens (2003) model, which is general enough to be applied in many domains. Integrated approaches, such as task network modeling, are also becoming more prevalent in the literature.<ref name=":1" />
Numerical typing is an important perceptual-motor task whose performance may vary with pacing, finger strategy, and urgency. The queuing network-model human processor (QN-MHP), a computational architecture, allows performance of perceptual-motor tasks to be modelled mathematically. One study enhanced QN-MHP with a top-down control mechanism, a closed-loop movement control, and a finger-related motor control mechanism to account for task interference, endpoint reduction, and force deficit, respectively. The model also incorporated neuromotor noise theory to quantify endpoint variability in typing. The model's predictions of typing speed and accuracy were validated against Lin and Wu's (2011) experimental results. The resulting root-mean-squared errors were 3.68% with a correlation of 95.55% for response time, and 35.10% with a correlation of 96.52% for typing accuracy. The model can be applied to provide optimal speech rates for voice synthesis and keyboard designs in different numerical typing situations.<ref>{{Cite journal|title = Mathematically modelling the effects of pacing, finger strategies and urgency on numerical typing performance with queuing network model human processor|journal = Ergonomics|date = 2012-10-01|issn = 0014-0139|pmid = 22809389|pages = 1180–1204|volume = 55|issue = 10|doi = 10.1080/00140139.2012.697583}}</ref>
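A minimal sketch of the kind of validation comparison reported above (root-mean-squared error expressed relative to the observed mean, plus a Pearson correlation) is shown below; the response times are hypothetical and are not Lin and Wu's (2011) data.

<syntaxhighlight lang="python">
import math

def validation_metrics(predicted, observed):
    """Return (RMSE as % of the observed mean, Pearson correlation r)."""
    n = len(predicted)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    mp = sum(predicted) / n
    mo = sum(observed) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return 100 * rmse / mo, cov / (sp * so)

# Hypothetical model-predicted vs. observed response times (ms).
rmse_pct, r = validation_metrics([510, 540, 620, 700], [500, 550, 600, 720])
print(f"RMSE = {rmse_pct:.1f}% of mean, r = {r:.2f}")
</syntaxhighlight>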
The psychological refractory period (PRP) is a basic but important form of dual-task information processing. Existing serial or parallel processing models of PRP have successfully accounted for a variety of PRP phenomena; however, each also encounters at least one experimental counterexample to its predictions or modeling mechanisms. A queuing network-based mathematical model of PRP is able to model various experimental findings in PRP with closed-form equations, including all of the major counterexamples encountered by the existing models, with fewer or equal numbers of free parameters. This modeling work also offers an alternative theoretical account of PRP and demonstrates the importance of the theoretical concepts of “queuing” and “hybrid cognitive networks” in understanding cognitive architecture and multitask performance.<ref>{{Cite journal|title = Queuing network modeling of the psychological refractory period (PRP)|journal = Psychological Review|pages = 913–954|volume = 115|issue = 4|doi = 10.1037/a0013123}}</ref>
=== Cognition & Memory ===
The paradigm shift in psychology from behaviorism to the study of cognition had a huge impact on the field of Human Performance Modeling. Regarding memory and cognition, the research of Newell and Simon regarding artificial intelligence and the [[General Problem Solver]] (GPS; Newell & Simon, 1963) demonstrated that computational models could effectively capture fundamental human cognitive behavior. Newell and Simon were not simply concerned with the amount of information - say, counting the number of bits the human cognitive system had to receive from the perceptual system - but rather with the actual computations being performed. They were critically involved with the early success of comparing cognition to computation, and the ability of computation to simulate critical aspects of cognition - thus leading to the creation of the sub-discipline of [[artificial intelligence]] within [[computer science]], and changing how cognition was viewed in the psychological community. Although cognitive processes do not literally flip bits in the same way that discrete electronic circuits do, pioneers were able to show that any universal computational machine could simulate the processes used in another, without a physical equivalence (Pylyshyn, 1989; Turing, 1936). The [[cognitive revolution]] allowed all of cognition to be approached by modeling, and these models now span a vast array of cognitive domains - from simple list memory, to comprehension of communication, to problem solving and decision making, to imagery, and beyond.<ref name=":1" />
One popular example is the Atkinson-Shiffrin (1968) [[Atkinson–Shiffrin memory model|"modal" model of memory]]. See also [[Cognitive models|cognitive models]] for information not included here.
===== [[Situation awareness|Situation Awareness]] (SA) =====
Models of SA range from descriptive (Endsley, 1995) to computational (Shively et al., 1997).<ref name=":2" /><ref>{{cite journal | last1 = Endsley | first1 = M. R. | year = 1995 | title = Toward a theory of situation awareness in dynamic systems | journal = Human Factors | volume = 37 | issue = 1 | pages = 32–64 }}</ref><ref>Shively et al. (1997). ''A computational model of situational awareness instantiated in MIDAS''.</ref>
When a modeler builds a network model of a task, the first step is to construct a flow chart decomposing the task into discrete sub-tasks - each sub-task as a node, the serial and parallel paths connecting them, and the gating logic that governs the sequential flow through the resulting network. When modeling human-system performance, some nodes represent human decision processes and/or human task execution, some represent system execution sub-tasks, and some aggregate human/machine performance into a single node. Each node is represented by a statistically specified completion-time distribution and a probability of completion. When all these specifications are programmed into a computer, the network is exercised repeatedly in Monte Carlo fashion to build up distributions of the aggregate performance measures that are of concern to the analyst (see the sketch below). The art in this lies in the modeler's selection of the right level of abstraction at which to represent nodes and paths and in estimating the statistically defined parameters for each node. Sometimes, human-in-the-loop simulations are conducted to provide support and validation for the estimates. Details regarding this, related, and alternative approaches may be found in Laughery, Lebiere, and Archer (2006) and in the work of Schweickert and colleagues, such as Schweickert, Fisher, and Proctor (2003).<ref name=":1" />
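A minimal sketch of this Monte Carlo exercise, using a hypothetical three-node serial network; real tools such as Micro Saint Sharp and IMPRINT add branching, parallelism, and richer statistics.

<syntaxhighlight lang="python">
import random

# Hypothetical sub-tasks: (name, completion-time sampler in seconds, P(success)).
NETWORK = [
    ("detect alarm",   lambda: random.gauss(1.5, 0.3), 0.99),
    ("diagnose fault", lambda: random.lognormvariate(1.0, 0.4), 0.95),
    ("execute action", lambda: random.gauss(4.0, 1.0), 0.98),
]

def run_once():
    """One Monte Carlo pass through the serial network."""
    total_time = 0.0
    for name, sample_time, p_success in NETWORK:
        total_time += max(sample_time(), 0.0)
        if random.random() > p_success:
            return total_time, False          # sub-task failed; task abandoned
    return total_time, True

results = [run_once() for _ in range(10_000)]
success_rate = sum(ok for _, ok in results) / len(results)
times = sorted(t for t, ok in results if ok)
print(f"P(success) = {success_rate:.3f}, median time = {times[len(times) // 2]:.1f} s")
</syntaxhighlight>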
Historically, Task Network Modeling stems from queuing theory and modeling of engineering reliability and quality control. Art Siegel, a psychologist, first thought of extending reliability methods into a Monte Carlo simulation model of human-machine performance (Siegel & Wolf, 1969). In the early 1970s, the U.S. Air Force sponsored the development of '''SAINT''' (Systems Analysis of Integrated Networks of Tasks), a high-level programming language specifically designed to support the programming of Monte Carlo simulations of human-machine task networks (Wortman, Pritsker, Seum, Seifert, & Chubb, 1974). A modern version of this software is [[Micro Saint Sharp]] (Archer, Headley, & Allender, 2003). This family of software spawned a tree of special-purpose programs with varying degrees of commonality and specificity with Micro Saint. The most prominent of these is the [[IMPRINT (Improved Performance Research Integration Tool)|IMPRINT]] series (Improved Performance Research Integration Tool)<ref>Samms, C. (2010, September). Improved Performance Research Integration Tool (IMPRINT): Human Performance Modeling for Improved System Design. In ''Proceedings of the Human Factors and Ergonomics Society Annual Meeting'' (Vol. 54, No. 7, pp. 624-625). SAGE Publications.</ref> sponsored by the U.S. Army (and based on MANPRINT) which provides modeling templates specifically adapted to particular human performance modeling applications (Archer et al., 2003). Two workload-specific programs are W/INDEX (North & Riley, 1989) and WinCrew (Lockett, 1997).
The network approach to modeling using these programs is popular due to its technical accessibility to individuals with general knowledge of computer simulation techniques and human performance analysis. The flowcharts that result from task analysis lead naturally to formal network models. The models can be developed to serve specific purposes - from simulating an individual using a human-computer interface to analyzing potential traffic flow in a hospital emergency center. Their weakness is the great difficulty of deriving performance times and success probabilities from previous data, theory, or first principles. These data provide the model's principal content.
A model of a task in a cognitive architecture, generally referred to as a cognitive model, consists of both the architecture and the knowledge needed to perform the task. This knowledge is acquired through human factors methods, including task analyses of the activity being modeled. Cognitive architectures are also connected with a complex simulation of the environment in which the task is to be performed - sometimes, the architecture interacts directly with the actual software humans use to perform the task. Cognitive architectures not only produce predictions about performance, but also output actual performance data - time-stamped sequences of actions that can be compared with real human performance on a task.
Examples of cognitive architectures include the EPIC system (Hornof & Kieras, 1997, 1999), CPM-GOMS (Kieras, Wood, & Meyer, 1997), the Queuing Network-Model Human Processor (Wu & Liu, 2007, 2008),<ref name=":4">{{Cite journal|title = Queuing Network Modeling of Driver Workload and Performance|journal = IEEE Transactions on Intelligent Transportation Systems|date = 2007-09-01|issn = 1524-9050|pages = 528–537|volume = 8|issue = 3|doi = 10.1109/TITS.2007.903443}}</ref> and [[ACT-R]] (Anderson & Lebiere, 1998).
The Queuing Network-Model Human Processor has been used to predict how drivers perceive the operating speed and the posted speed limit, choose an operating speed, and execute that choice. The model was sensitive (average d′ of 2.1) and accurate (average testing accuracy over 86%) in predicting the majority of unintentional speeding.<ref name=":4" />
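For reference, a minimal sketch of how such a sensitivity index (d′) is computed in signal detection terms; the hit and false-alarm rates below are assumed for illustration, not taken from the cited study.

<syntaxhighlight lang="python">
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates that happen to give d' near 2.1 (not the study's data).
print(round(d_prime(0.90, 0.20), 2))  # ~2.12
</syntaxhighlight>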
ACT-R has been used to model a wide variety of phenomena. It consists of several modules, each one modeling a different aspect of the human system. Modules are associated with specific brain regions, and ACT-R has thus successfully predicted neural activity in parts of those regions. Each module essentially represents a theory of how that piece of the overall system works - derived from the research literature in the area. For example, the declarative memory system in ACT-R is based on a series of equations considering the frequency and recency of a memory's use, which together determine its activation and thus the probability and speed with which it can be retrieved.
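One widely published example of such an equation is ACT-R's base-level learning equation, B<sub>i</sub> = ln(Σ<sub>j</sub> t<sub>j</sub><sup>−d</sup>), in which activation rises with frequent and recent use; a minimal sketch with an assumed usage history follows.

<syntaxhighlight lang="python">
import math

def base_level_activation(lags, decay=0.5):
    """ACT-R base-level activation: B_i = ln(sum_j t_j ** (-d)).

    lags: seconds since each past use of the memory chunk (assumed history);
    decay: the decay parameter d, conventionally set to 0.5."""
    return math.log(sum(t ** (-decay) for t in lags))

# A chunk last used 10 s, 100 s, and 1000 s ago (illustrative only).
print(round(base_level_activation([10, 100, 1000]), 2))  # ~ -0.80
</syntaxhighlight>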
=== Group Behavior ===
'''Computer Simulation Models/Approaches'''
Example: [[IMPRINT (Improved Performance Research Integration Tool)]]
'''Mathematical Models/Approaches'''
{{Reflist}}
[[Category:
[[Category:Software optimization]]