{{Short description|Theory of human visual attention}}
'''Feature integration theory''' is a theory of [[attention]] developed in 1980 by [[Anne Treisman]] and Garry Gelade. It proposes that simple visual features such as color and orientation are registered early, automatically, and in parallel, while identifying an object that combines several features requires focused attention at a later stage of processing.
==Stages==
According to Treisman, the first stage of the feature integration theory is the preattentive stage. During this stage, individual features such as color, shape, and orientation are registered automatically and in parallel, without focused attention.
The second stage is the focused attention stage, in which attention binds the separately registered features into a unified, consciously perceived object.
[[File:FITstages.png|alt=The stages of feature integration theory|thumb|300px|The stages of feature integration theory]]
Treisman distinguishes between two kinds of visual search tasks, "feature search" and "conjunction search". Feature searches can be performed fast and pre-attentively for targets defined by only one feature, such as color, shape, perceived direction of lighting, movement, or orientation. Features should "pop out" during search and should be able to form [[illusory conjunctions]]. Conversely, conjunction searches occur with the combination of two or more features and are identified serially. Conjunction search is much slower than feature search and requires conscious attention and effort. In multiple experiments, some referenced in this article, Treisman concluded that [[color]], [[Orientation (geometry)|orientation]], and [[Intensity (physics)|intensity]] are features for which feature searches may be performed.
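The behavioral signature of the two search types, reaction time that stays flat across set size for feature search but grows roughly linearly for conjunction search, can be sketched in a toy simulation. The displays, timing constants, and serial-scan rule below are illustrative assumptions, not fitted data:

```python
import random

def search_display(n_items, conjunction):
    """Build a display of n_items - 1 distractors plus one target.

    Feature search: a red X among green Os (a unique color pops out).
    Conjunction search: a red X among red Os and green Xs, so no
    single feature distinguishes the target.
    """
    target = ("red", "X")
    if conjunction:
        distractors = [random.choice([("red", "O"), ("green", "X")])
                       for _ in range(n_items - 1)]
    else:
        distractors = [("green", "O") for _ in range(n_items - 1)]
    items = distractors + [target]
    random.shuffle(items)
    return items, target

def simulated_rt(items, target, conjunction, t_item=40, t_base=400):
    """Toy reaction time in ms (constants are arbitrary assumptions).

    Feature search: parallel and preattentive, so RT is independent of
    set size. Conjunction search: items are checked serially until the
    target is found, so RT grows with set size.
    """
    if not conjunction:
        return t_base
    checks = next(i for i, item in enumerate(items) if item == target) + 1
    return t_base + t_item * checks

random.seed(0)
for n in (4, 16, 64):
    feat = simulated_rt(*search_display(n, False), conjunction=False)
    conj = simulated_rt(*search_display(n, True), conjunction=True)
    print(n, feat, conj)  # feature RT stays at 400; conjunction RT rises
```

Averaged over many trials, the conjunction slope is roughly half the per-item cost, mirroring the self-terminating serial scan Treisman described.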
As a reaction to the feature integration theory, Wolfe (1994) proposed the Guided Search Model 2.0. According to this model, attention is directed to an object or ___location through a preattentive process. The preattentive process, as Wolfe explains, directs attention in both a bottom-up and top-down way. Information acquired through both bottom-up and top-down processing is ranked according to priority. The priority ranking ''guides'' visual search and makes the search more efficient. Whether the Guided Search Model 2.0 or the feature integration theory is the "correct" theory of visual search remains a hotly debated topic.
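Wolfe's idea of combining bottom-up salience with top-down feature gain into a single priority ranking can likewise be sketched. The display, the salience and gain scores, and their equal weighting below are arbitrary illustrative assumptions:

```python
# Toy priority map in the spirit of Guided Search: each ___location gets
# bottom-up salience (feature contrast with the rest of the display)
# plus top-down gain (match to the features being searched for), and
# attention visits locations in descending priority order.
items = [
    {"loc": 0, "color": "red", "shape": "O"},
    {"loc": 1, "color": "green", "shape": "X"},
    {"loc": 2, "color": "red", "shape": "X"},   # target: the red X
    {"loc": 3, "color": "green", "shape": "O"},
]

def bottom_up(item, display):
    # Salience: average number of feature differences from other items.
    others = [o for o in display if o is not item]
    diff = sum((o["color"] != item["color"]) + (o["shape"] != item["shape"])
               for o in others)
    return diff / len(others)

def top_down(item, target_features):
    # Gain: count of features matching what the searcher is looking for.
    return sum(item[k] == v for k, v in target_features.items())

target = {"color": "red", "shape": "X"}
priority = {it["loc"]: bottom_up(it, items) + top_down(it, target)
            for it in items}
scan_order = sorted(priority, key=priority.get, reverse=True)
print(scan_order)  # the target ___location (2) ranks first
```

Because the target matches both sought features, its summed priority beats every distractor, so the guided scan reaches it immediately even though no single feature is unique, which is the efficiency gain the model claims over a blind serial search.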
==Experiments==
As previously mentioned, Bálint's syndrome patients have provided support for the feature integration theory. In particular, research participant R.M., who developed Bálint's syndrome after bilateral parietal lobe damage, was unable to focus attention on individual objects and experienced unusually frequent illusory conjunctions, as the theory predicts when features cannot be bound by spatial attention.
[[File:treismanshapes.png|thumb|alt=The stimuli resembling a carrot, lake and tire, respectively.|The stimuli resembling a carrot, lake and tire, respectively. Treisman et al.(1986).]]
If people use their prior knowledge or experience to perceive an object, they are less likely to make mistakes, or illusory conjunctions.<ref>Treisman, Anne; Souther, Janet (1986). "Illusory words: The roles of attention and of top–down constraints in conjoining letters to form words." ''Journal of Experimental Psychology: Human Perception and Performance'', '''12''' (1), pp. 3–17.</ref> Treisman maintained that prior knowledge plays an important role in proper perception. Normally, bottom-up processing is used to identify novel objects; once prior knowledge is recalled, top-down processing takes over. This explains why people are better at identifying familiar objects than unfamiliar ones.
==See also==
* [[Attention]]
* [[Binding problem]]
* [[Visual search]]
==References==
* Anne Treisman and Garry Gelade (1980). "A feature-integration theory of attention." ''Cognitive Psychology'', '''12''' (1), pp. 97–136.
* Anne Treisman and Hilary Schmidt (1982). "Illusory conjunctions in the perception of objects." ''Cognitive Psychology'', '''14''' (1), pp. 107–141.
* Anne Treisman and Janet Souther (1986). "Illusory words: The roles of attention and of top–down constraints in conjoining letters to form words." ''Journal of Experimental Psychology: Human Perception and Performance'', '''12''' (1), pp. 3–17
* Anne Treisman
*Anne Treisman and [[Nancy Kanwisher]] (1998). "Perceiving visually presented objects: recognition, awareness, and modularity." ''Current Opinion in Neurobiology'', '''8''', pp. 218–226.
* J. M. Wolfe (1994). "Guided Search 2.0: A revised model of visual search." ''Psychonomic Bulletin & Review'', '''1''' (2), pp. 202–238.
===Notes===
<references/>
==External links==
* [http://web.mit.edu/bcs/nklab/media/pdfs/TreismanKanwisherCurrOpBio98.pdf 1998 paper by Treisman and Kanwisher at web.mit.edu]
[[Category:Cognition]]
[[Category:Human–computer interaction]]