Feature integration theory

{{Short description|Theory of human visual attention}}
'''Feature integration theory''' is a theory of [[attention]] developed in 1980 by [[Anne Treisman]] and [[Garry Gelade]] that suggests that when perceiving a stimulus, features are "registered early, automatically, and in parallel, while objects are identified separately" and at a later stage in processing. The theory has been one of the most influential [[cognitive model|psychological model]]s of human visual attention.
__TOC__
 
==Stages==
According to Treisman, the first stage of the feature integration theory is the preattentive stage. During this stage, different parts of the brain automatically gather information about basic features (colors, shape, movement) that are found in the visual field. The idea that features are automatically separated appears to be counterintuitive. However, we are not aware of this process because it occurs early in perceptual processing, before we become conscious of the object.
 
The second stage of the feature integration theory is the focused attention stage, where a subject combines the individual features of an object in order to perceive the whole object. Combining the individual features of an object requires attention, and selection of that object occurs within a "master map" of locations. The master map of locations contains all of the locations in which features have been detected, with each ___location in the master map having access to the multiple feature maps. These multiple feature maps, or sub-maps, contain a large storage base of features. Features such as color, shape, orientation, sound, and movement are stored in these sub-maps.<ref>{{Cite journal |last1=Kristjánsson |first1=Árni |last2=Egeth |first2=Howard |date=2020-01-01 |title=How feature integration theory integrated cognitive psychology, neurophysiology, and psychophysics |journal=Attention, Perception, & Psychophysics |language=en |volume=82 |issue=1 |pages=7–23 |doi=10.3758/s13414-019-01803-7 |issn=1943-393X|doi-access=free |pmid=31290134 }}</ref><ref>{{Cite journal |last1=Chan |first1=Louis K. H. |last2=Hayward |first2=William G. |date=2009 |title=Feature integration theory revisited: Dissociating feature detection and attentional guidance in visual search. |url=http://doi.apa.org/getdoi.cfm?doi=10.1037/0096-1523.35.1.119 |journal=Journal of Experimental Psychology: Human Perception and Performance |language=en |volume=35 |issue=1 |pages=119–132 |doi=10.1037/0096-1523.35.1.119 |pmid=19170475 |issn=1939-1277|url-access=subscription }}</ref> When attention is focused at a particular ___location on the map, the features currently in that position are attended to and are stored in "object files". If the object is familiar, associations are made between the object and prior knowledge, which results in identification of that object.
This top-down process, using prior knowledge to inform a current situation or decision, is paramount in either identifying or recognizing objects.<ref>{{Cite book |last1=Nobre |first1=Kia |url=https://books.google.com/books?id=mtXQAgAAQBAJ |title=The Oxford Handbook of Attention |last2=Kastner |first2=Sabine |date=2014 |publisher=OUP Oxford |isbn=978-0-19-967511-1 |language=en}}</ref><ref>{{Cite journal |last1=Chan |first1=Louis K. H. |last2=Hayward |first2=William G. |date=2009 |title=Feature integration theory revisited: Dissociating feature detection and attentional guidance in visual search. |url=http://doi.apa.org/getdoi.cfm?doi=10.1037/0096-1523.35.1.119 |journal=Journal of Experimental Psychology: Human Perception and Performance |language=en |volume=35 |issue=1 |pages=119–132 |doi=10.1037/0096-1523.35.1.119 |pmid=19170475 |issn=1939-1277|url-access=subscription }}</ref> In support of this stage, researchers often refer to patients with [[Balint's syndrome]]. Due to damage in the parietal lobe, these people are unable to focus attention on individual objects. Given a stimulus that requires combining features, people with Balint's syndrome are unable to focus attention long enough to combine the features, providing support for this stage of the theory.<ref>{{Cite journal|last1=Cohen|first1=Asher|last2=Rafal|first2=Robert D.|date=1991|title=Attention and Feature Integration: Illusory Conjunctions in a Patient with a Parietal Lobe Lesion|url=http://www.jstor.org/stable/40062648|journal=Psychological Science|volume=2|issue=2|pages=106–110|doi=10.1111/j.1467-9280.1991.tb00109.x |jstor=40062648 |s2cid=145171384 |issn=0956-7976|url-access=subscription}}</ref>
 
[[File:FITstages.png|alt=The stages of feature integration theory|thumb|300px|The stages of feature integration theory]]
 
Treisman distinguishes between two kinds of visual search tasks, "feature search" and "conjunction search". Feature searches can be performed fast and pre-attentively for targets defined by only one feature, such as color, shape, perceived direction of lighting, movement, or orientation. Features should "pop out" during search and should be able to form [[illusory conjunctions]]. Conversely, conjunction searches occur with the combination of two or more features and are identified serially. Conjunction search is much slower than feature search and requires conscious attention and effort. In multiple experiments, some referenced in this article, Treisman concluded that [[color]], [[Orientation (geometry)|orientation]], and intensity are features for which feature searches may be performed.
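The contrast between the two search modes is usually summarized by reaction-time slopes: feature-search times stay roughly flat as the display grows, while conjunction-search times increase roughly linearly with the number of items. The following is a minimal illustrative simulation of that qualitative pattern only; the timing constants and noise level are arbitrary assumptions, not Treisman's data.

```python
import random

def simulated_rt(set_size, search_type, base_rt=400.0, per_item=40.0):
    """Toy reaction time (ms) for one visual search trial.

    Feature search: the target "pops out", so RT is roughly independent
    of set size. Conjunction search: items are inspected serially, so RT
    grows with the number of items checked. All constants are
    illustrative, not empirical values.
    """
    noise = random.gauss(0, 20)
    if search_type == "feature":
        return base_rt + noise  # flat: parallel, preattentive
    # Serial self-terminating search: on average half the items are checked.
    items_checked = (set_size + 1) / 2
    return base_rt + per_item * items_checked + noise

def mean_rt(set_size, search_type, trials=500):
    """Average simulated RT over many trials."""
    return sum(simulated_rt(set_size, search_type) for _ in range(trials)) / trials

for n in (4, 8, 16):
    print(n, round(mean_rt(n, "feature")), round(mean_rt(n, "conjunction")))
```

Plotting the two conditions against set size reproduces the textbook picture: a near-zero slope for feature search and a positive, roughly linear slope for conjunction search.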
 
As a reaction to the feature integration theory, Wolfe (1994) proposed the Guided Search Model 2.0. According to this model, attention is directed to an object or ___location through a preattentive process. The preattentive process, as Wolfe explains, directs attention in both a bottom-up and top-down way. Information acquired through both bottom-up and top-down processing is ranked according to priority. The priority ranking ''guides'' visual search and makes the search more efficient. Whether the Guided Search Model 2.0 or the feature integration theory is the "correct" theory of visual search is still a hotly debated topic.
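The priority-ranking idea can be sketched in a few lines. This is not Wolfe's actual model, only a toy in which each item's activation sums a bottom-up salience term and a top-down count of features shared with the target, and attention visits items in descending activation; the item names, weights, and scoring rule are all illustrative assumptions.

```python
def guided_search_order(items, target_features):
    """Rank display items by a toy activation combining bottom-up
    salience with top-down similarity to the target's features.
    Attention is assumed to visit items in descending activation."""
    def activation(item):
        bottom_up = item.get("salience", 0.0)
        top_down = sum(1.0 for f in target_features if f in item["features"])
        return bottom_up + top_down
    return sorted(items, key=activation, reverse=True)

# Hypothetical display: searching for a red O among distractors.
items = [
    {"name": "red X", "features": {"red", "X"}, "salience": 0.2},
    {"name": "green O", "features": {"green", "O"}, "salience": 0.1},
    {"name": "red O", "features": {"red", "O"}, "salience": 0.1},
]
order = guided_search_order(items, target_features={"red", "O"})
```

Because the target shares both features with itself, it tops the ranking and is inspected first, which is what makes the guided search more efficient than an unguided serial scan.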
 
 
==Experiments==
<!-- Deleted image removed: [[File:fourshapesexp.png|thumb|alt=An example of four colored shapes and two black letters.|An example of the stimuli found in Treisman et al. (1982).]] -->To test the notion that attention plays a vital role in visual perception, Treisman and Schmidt (1982) designed an experiment to show that features may exist independently of one another early in processing. Participants were shown a picture involving four objects hidden by two black numbers. The display was flashed for one-fifth of a second, followed by a random-dot masking field that appeared on screen to eliminate "any residual perception that might remain after the stimuli were turned off".<ref>Cognitive Psychology, E. Bruce Goldstein, p. 105</ref> Participants were to report the black numbers they saw at each ___location where the shapes had previously been. The results of this experiment verified Treisman and Schmidt's hypothesis. In 18% of trials, participants reported seeing shapes "made up of a combination of features from two different stimuli",<ref>Cognitive Psychology, E. Bruce Goldstein, p. 105</ref> even when the stimuli had great differences; this is often referred to as an [[illusory conjunction]]. Illusory conjunctions occur in a variety of situations. For example, you may identify a passing person wearing a red shirt and yellow hat and very quickly transform him or her into one wearing a yellow shirt and red hat. Feature integration theory provides an explanation for illusory conjunctions: because features exist independently of one another during early processing and are not associated with a specific object, they can easily be incorrectly combined, both in laboratory settings and in real-life situations.<ref>Treisman, A. Cognitive Psychology 12, 97–136 (1980)</ref>
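The account of illusory conjunctions — features registered in independent maps and bound to objects only under focused attention — can be illustrated with a toy model in which, when attention is unavailable, free-floating features recombine at random. The stimuli, representation, and probabilities below are illustrative assumptions, not a reconstruction of the 1982 experiment.

```python
import random

# Each stimulus is a (color, shape) pair; the two features are held in
# separate, independent "feature maps", as the theory proposes.
stimuli = [("red", "triangle"), ("blue", "circle"), ("green", "square")]

def perceive(stimuli, attended=True, rng=random):
    """Toy binding model: with attention, color and shape are bound
    correctly; without it, features may migrate between objects."""
    if attended:
        return list(stimuli)
    colors = [c for c, _ in stimuli]
    shapes = [s for _, s in stimuli]
    rng.shuffle(colors)  # unbound features recombine at random
    rng.shuffle(shapes)
    return list(zip(colors, shapes))

def illusory_conjunction_rate(trials=10_000):
    """Fraction of unattended trials containing at least one percept
    (a color-shape pairing) that was never actually presented."""
    count = 0
    for _ in range(trials):
        percept = perceive(stimuli, attended=False)
        if any(p not in stimuli for p in percept):
            count += 1
    return count / trials
```

With attention the bindings are always veridical, while the unattended trials frequently contain a conjunction (say, a "green triangle") that was never displayed — a crude analogue of the miscombined shapes participants reported.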
 
As previously mentioned, Balint's syndrome patients have provided support for the feature integration theory. Research participant R.M., who had [[Bálint's syndrome]] and was unable to focus attention on individual objects, experienced illusory conjunctions when presented with simple stimuli such as a "blue O" or a "red T". In 23% of trials, even when able to view the stimulus for as long as 10 seconds, R.M. reported seeing a "red O" or a "blue T".<ref>Friedman-Hill et al., 1995; Robertson et al., 1997.</ref> This finding is in accordance with feature integration theory's prediction of how one with a lack of focused attention would erroneously combine features.
 
[[File:treismanshapes.png|thumb|alt=The stimuli resembling a carrot, lake and tire, respectively.|The stimuli resembling a carrot, lake and tire, respectively. Treisman and Souther (1986).]]

If people use their prior knowledge or experience to perceive an object, they are less likely to make mistakes, or illusory conjunctions. To explain this phenomenon, Treisman and Souther (1986) conducted an experiment in which they presented three shapes to participants where illusory conjunctions could exist. Surprisingly, when she told participants that they were being shown a carrot, a lake, and a tire (in place of the orange triangle, blue oval, and black circle, respectively), illusory conjunctions did not occur.<ref>Treisman, A.; Souther, J. (1986). "Illusory words: The roles of attention and of top–down constraints in conjoining letters to form words". ''Journal of Experimental Psychology: Human Perception and Performance'', 12(1), 3–17.</ref> Treisman maintained that prior knowledge played an important role in proper perception. Normally, bottom-up processing is used for identifying novel objects; once we recall prior knowledge, however, top-down processing is used. This explains why people are better at identifying familiar objects than unfamiliar ones.
 
 
==See also==
 
* [[Attention]]
* [[Binding problem]]
* [[Visual search]]
 
==Notes==
<references/>
 
==References==
* Anne Treisman and [[Garry Gelade]] (1980). "A feature-integration theory of attention." ''Cognitive Psychology'', '''12''' (1), pp.&nbsp;97–136.
* Anne Treisman and [[Hilary Schmidt]] (1982). "Illusory conjunctions in the perception of objects." ''Cognitive Psychology'', '''14''', pp.&nbsp;107–141.
* Anne Treisman and Janet Souther (1986). "Illusory words: The roles of attention and of top–down constraints in conjoining letters to form words." ''Journal of Experimental Psychology: Human Perception and Performance'', '''12''' (1), pp.&nbsp;3–17.
* Anne Treisman (1988). "Features and objects: the fourteenth Bartlett Memorial Lecture." ''Quarterly Journal of Experimental Psychology'', 40A, pp.&nbsp;201–236.
* Anne Treisman and [[Nancy Kanwisher]] (1998). "Perceiving visually presented objects: recognition, awareness, and modularity." ''Current Opinion in Neurobiology'', '''8''', pp.&nbsp;218–226.
* J. M. Wolfe (1994). "Guided Search 2.0: A revised model of visual search." ''Psychonomic Bulletin & Review'', '''1''', pp.&nbsp;202–238.
 
 
==External links==
* [http://web.mit.edu/bcs/nklab/media/pdfs/TreismanKanwisherCurrOpBio98.pdf 1998 paper by Treisman and Kanwisher at web.mit.edu]
 
[[Category:Cognition]]
[[Category:Human–computer interaction]]
[[Category:Attention]]
 
 
{{cognitive-psych-stub}}
 
[[de:Merkmalsintegrationstheorie]]
[[he:תאוריית אינטגרציית התכוניות]]
[[ja:特徴統合理論]]
[[ru:Теория интеграции признаков]]
[[zh:特征整合论]]