Sentence processing: Difference between revisions

A modular view of sentence processing assumes that each factor involved in sentence processing is computed in its own module, which has limited means of communication with the other modules. For example, the creation of the syntactic analysis takes place without input from semantic analysis or context-dependent information, which are processed separately. A common assumption of modular accounts is a ''feed-forward'' architecture in which the output of one processing step is passed on to the next step without feedback mechanisms that would allow the output of the first module to be corrected. On this view, a separate mental module parses sentences, lexical access happens first, and only one syntactic hypothesis is considered at a time, with no initial influence of meaning or semantics. Syntactic processing is usually taken to be the most basic analysis step, which feeds into semantic processing and the inclusion of other information. Sentence processing is supported by a temporo-frontal network; within this network, temporal regions subserve aspects of identification, and frontal regions the building of syntactic and semantic relations. Temporal analyses of brain activation within this network support syntax-first models, because they reveal that the building of syntactic structure precedes semantic processes and that the two interact only during a later stage.<ref>{{cite journal|last1=Friederici|first1=Angela|title=Towards a neural basis of auditory sentence processing|journal=Trends in Cognitive Sciences|date=1 February 2002|volume=6|issue=2|pages=78–84|doi=10.1016/S1364-6613(00)01839-8|url=http://www.sciencedirect.com/science/article/pii/S1364661300018398|accessdate=2015-12-05}}</ref>
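The feed-forward assumption can be made concrete with a small illustrative sketch (the function names, categories, and toy lexicon below are invented for the example and are not drawn from any cited model): each stage consumes only the output of the previous stage and cannot be revised by later ones.

<syntaxhighlight lang="python">
# Minimal sketch of a modular, feed-forward ("syntax-first") pipeline.
# All names, categories, and data structures are illustrative only.

LEXICON = {"the": "Det", "dog": "N", "barked": "V"}

def lexical_access(words):
    # Stage 1: retrieve a category for each word, with no feedback from later stages.
    return [{"word": w, "category": LEXICON[w]} for w in words]

def build_syntax(entries):
    # Stage 2: build a single syntactic hypothesis from categories alone,
    # without consulting meaning or context.
    return {"parse": [e["category"] for e in entries], "entries": entries}

def interpret(structure):
    # Stage 3: semantic/contextual interpretation operates on the finished parse;
    # it cannot revise the output of the earlier stages.
    return {"parse": structure["parse"],
            "meaning": " ".join(e["word"] for e in structure["entries"])}

# The output of each module is simply passed forward to the next.
print(interpret(build_syntax(lexical_access(["the", "dog", "barked"]))))
</syntaxhighlight>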
 
Interactive accounts assume that all available information is processed at the same time and can immediately influence the computation of the final analysis. In the interactive model of sentence processing, there is no separate module for parsing: lexical access, syntactic structure assignment, and meaning assignment happen in parallel, and several syntactic hypotheses can be considered at a time. The interactive model posits an on-line interaction between the structural, lexical, and phonetic levels of sentence processing. Each word, as it is heard in the context of normal discourse, is immediately entered into the processing system at all levels of description and is simultaneously analyzed at all these levels in the light of whatever information is available at each level at that point in the processing of the sentence.<ref>{{cite journal|last1=Marslen-Wilson|first1=William|title=Sentence perception as an interactive parallel process|journal=Science|date=18 July 1975|volume=189|pages=226–228|url=https://www.researchgate.net/profile/William_Marslen-Wilson/publication/6106384_Sentence_perception_as_an_interactive_parallel_process/links/02e7e530c94879601e000000.pdf}}</ref>

Interactive models of language processing assume that information flows both bottom-up and top-down, so that the representations formed at each level may be influenced by higher as well as lower levels. The interactive activation framework embeds this key assumption among others, including the assumption that influences from different sources are combined nonlinearly. The nonlinearity means that information that may be decisive under some circumstances may have little or no effect under other conditions. In the interactive activation framework, the knowledge that guides processing is stored in the connections between units on the same and adjacent levels. The processing units that these connections link may receive input from a number of different sources, which allows the knowledge that guides processing to be completely local while, at the same time, allowing the results of processing at one level to influence processing at other levels, both above and below. A basic assumption of the framework is that processing interactions are always reciprocal; it is this bi-directional characteristic that makes the system interactive. Bi-directional excitatory interactions between levels allow mutual, simultaneous constraint among adjacent levels, and bi-directional inhibitory interactions within a level allow for competition among mutually incompatible interpretations of a portion of the input. The between-level excitatory interactions are captured in the models by two-way excitatory connections between mutually compatible processing units.<ref>(McClelland)</ref> On this view, syntactic ambiguities are ultimately based at the lexical level. In addition, more recent studies using more sensitive eye-tracking equipment have shown early context effects. Frequency and contextual information modulate the activation of the alternatives even when the ambiguity is resolved in favor of the simpler interpretation, and structural simplicity is confounded with frequency, which argues against the garden-path theory.<ref>MacDonald, Pearlmutter & Seidenberg, 1994.</ref>
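The reciprocal-interaction assumption can likewise be illustrated with a small sketch (the unit names, weights, and update rule below are invented for the example rather than taken from McClelland's published models): units at two levels excite each other across levels, compete within a level, and are updated with a nonlinear, bounded rule.

<syntaxhighlight lang="python">
# Illustrative sketch of interactive-activation-style dynamics (invented values,
# not a published implementation): bidirectional excitatory links between levels,
# inhibitory links within a level, and a nonlinear, bounded activation update.

MAX_A, MIN_A, REST, DECAY = 1.0, -0.2, 0.0, 0.1

def update(a, net):
    # Nonlinear update: the effect of the same net input depends on the unit's
    # current activation, so evidence that is decisive in one state may have
    # little effect in another.
    if net > 0:
        delta = net * (MAX_A - a) - DECAY * (a - REST)
    else:
        delta = net * (a - MIN_A) - DECAY * (a - REST)
    return min(MAX_A, max(MIN_A, a + delta))

def act(x):
    # Only units with positive activation send signals.
    return max(x, 0.0)

# Lower level: two evidence ("cue") units; upper level: two incompatible readings.
cue = {"a": 0.0, "b": 0.0}
reading = {"a": 0.0, "b": 0.0}
external = {"a": 0.6, "b": 0.2}       # bottom-up input favouring alternative "a"
W_BETWEEN, W_WITHIN = 0.4, 0.3        # excitatory between levels, inhibitory within

for _ in range(50):
    net_reading = {
        "a": W_BETWEEN * act(cue["a"]) - W_WITHIN * act(reading["b"]),
        "b": W_BETWEEN * act(cue["b"]) - W_WITHIN * act(reading["a"]),
    }
    # Top-down feedback: each reading also excites the cue that supports it.
    net_cue = {
        "a": external["a"] + W_BETWEEN * act(reading["a"]) - W_WITHIN * act(cue["b"]),
        "b": external["b"] + W_BETWEEN * act(reading["b"]) - W_WITHIN * act(cue["a"]),
    }
    reading = {k: update(reading[k], net_reading[k]) for k in reading}
    cue = {k: update(cue[k], net_cue[k]) for k in cue}

# The reading with stronger bottom-up support settles at a higher activation.
print(reading)
</syntaxhighlight>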
 
==== Serial vs. parallel ====
 
==== Constraint-based model ====
[[Constraint-based grammar|Constraint-based]] theories of language comprehension<ref>{{cite journal|last=MacDonald|first=M. C.|author2=Pearlmutter, M. |author3=Seidenberg, M. |title=The Lexical Nature of Ambiguity Resolution|journal=Psychological Review|year=1994|volume=101|issue=4|pages=676–703|pmid=7984711|doi=10.1037/0033-295x.101.4.676}}</ref> emphasize how people make use of the vast amount of probabilistic information available in the linguistic signal. Through [[statistical learning]],<ref>{{cite journal |last=Seidenberg |first=Mark S. |author2=J.L. McClelland |year=1989 |title=A distributed developmental model of word recognition and naming. |journal=Psychological Review |volume=96 |pages=523–568 |pmid=2798649|doi=10.1037/0033-295X.96.4.523 |issue=4 |citeseerx=10.1.1.127.3083 }}</ref> the frequencies and distributions of events in linguistic environments can be picked up and used to inform language comprehension. As such, language users are said to arrive at one interpretation rather than another during the comprehension of an ambiguous sentence by rapidly integrating these probabilistic constraints.
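The following sketch illustrates, with invented constraint weights (not estimates from any corpus or experiment), how several probabilistic constraints might be integrated to rank two competing analyses of a temporarily ambiguous fragment.

<syntaxhighlight lang="python">
# Minimal sketch of constraint-based ambiguity resolution (illustrative numbers
# only): each candidate analysis of an ambiguous fragment receives support from
# several probabilistic constraints, which are integrated to rank the candidates.

# Candidate analyses of "The horse raced past the barn ..." at the word "raced".
candidates = ["main_verb", "reduced_relative"]

# Hypothetical constraint weights: how strongly each information source
# (verb-frame frequency, participle bias, plausibility, discourse context)
# supports each analysis.
constraints = {
    "verb_frame_frequency": {"main_verb": 0.9, "reduced_relative": 0.1},
    "participle_bias":      {"main_verb": 0.7, "reduced_relative": 0.3},
    "agent_plausibility":   {"main_verb": 0.8, "reduced_relative": 0.2},
    "discourse_context":    {"main_verb": 0.5, "reduced_relative": 0.5},
}

def integrate(constraints, candidates):
    # Combine the probabilistic support multiplicatively and normalise,
    # so each constraint modulates, rather than dictates, the outcome.
    scores = {c: 1.0 for c in candidates}
    for support in constraints.values():
        for c in candidates:
            scores[c] *= support[c]
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

print(integrate(constraints, candidates))
# With these invented numbers the main-verb analysis dominates, which is why a
# reduced-relative continuation ("... fell") would produce a garden-path effect.
</syntaxhighlight>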
 
==== Good enough theory ====
===Behavioral tasks===
In behavioral studies, subjects are often presented with linguistic stimuli and asked to perform an action. For example, they may be asked to make a judgment about a word ([[lexical decision task|lexical decision]]), reproduce the stimulus, or name a visually presented word aloud. Speed (often reaction time: the time taken to respond to the stimulus) and accuracy (the proportion of correct responses) are commonly employed measures of performance in behavioral tasks. Researchers infer that differences in these measures reflect the nature of the underlying process(es) required by the task; slower responses and lower accuracy are taken as indices of increased difficulty. An important requirement of any behavioral task is that it stay relatively true to 'normal' language comprehension: the ability to generalize the results of a task is restricted when the task has little in common with how people actually encounter language.
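The two measures can be computed straightforwardly from trial records; the sketch below uses invented data, field names, and condition labels purely for illustration.

<syntaxhighlight lang="python">
# Sketch of the two standard behavioural measures described above, computed
# from hypothetical trial records (values are illustrative).

trials = [
    {"condition": "easy", "rt_ms": 520, "correct": True},
    {"condition": "easy", "rt_ms": 560, "correct": True},
    {"condition": "hard", "rt_ms": 710, "correct": True},
    {"condition": "hard", "rt_ms": 680, "correct": False},
]

def summarise(trials):
    # Speed: mean reaction time; accuracy: proportion of correct responses.
    summary = {}
    for t in trials:
        stats = summary.setdefault(t["condition"], {"rts": [], "correct": 0, "n": 0})
        stats["rts"].append(t["rt_ms"])
        stats["correct"] += int(t["correct"])
        stats["n"] += 1
    return {
        cond: {
            "mean_rt_ms": sum(s["rts"]) / len(s["rts"]),
            "accuracy": s["correct"] / s["n"],
        }
        for cond, s in summary.items()
    }

# Slower mean RTs and lower accuracy are taken as signs of increased difficulty.
print(summarise(trials))
</syntaxhighlight>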
 
A common behavioral paradigm involves [[priming (psychology)|priming effects]], wherein participants are presented first with a prime and then with a target word. The response time for the target word is affected by the relationship between the prime and the target. For example, Fischler (1977) investigated word encoding using the lexical decision task, asking participants to decide whether two strings of letters were English words. Sometimes the strings were actual English words requiring a "yes" response, and other times they were nonwords requiring a "no" response. Some of the legitimate word pairs were semantically related (e.g., cat-dog) while others were unrelated (e.g., bread-stem). Fischler found that related word pairs were responded to faster than unrelated word pairs, which suggests that semantic relatedness can facilitate word encoding.<ref>{{cite journal | title=Semantic facilitation without association in a lexical decision task | author=Fischler I. | journal=Memory & Cognition |volume=5 | issue=3 | pages=335–339 | year=1977 | doi=10.3758/bf03197580| pmid=24202904 }}</ref>
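The priming effect itself is simply the reaction-time advantage for related over unrelated prime-target pairs; the sketch below computes it from invented word pairs and response times (nonword trials are omitted).

<syntaxhighlight lang="python">
# Sketch of a Fischler-style priming comparison in a lexical decision task.
# Word pairs and response times are invented for illustration; nonword trials
# requiring "no" responses are left out.

results = [
    {"prime": "cat",   "target": "dog",    "related": True,  "rt_ms": 540},
    {"prime": "nurse", "target": "doctor", "related": True,  "rt_ms": 560},
    {"prime": "bread", "target": "stem",   "related": False, "rt_ms": 600},
    {"prime": "sofa",  "target": "lake",   "related": False, "rt_ms": 620},
]

def mean(xs):
    return sum(xs) / len(xs)

def priming_effect(results):
    related = mean([r["rt_ms"] for r in results if r["related"]])
    unrelated = mean([r["rt_ms"] for r in results if not r["related"]])
    # A positive difference means related primes sped up responses to the target.
    return unrelated - related

print(priming_effect(results))  # 60.0 ms of facilitation in this invented data
</syntaxhighlight>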
 
===Eye-movements===
[[Eye tracking]] has been used to study online language processing, and the method has been particularly influential in the study of reading.<ref>{{cite journal | author=Rayner K. | title=Eye movements in reading and information processing |journal=Psychological Bulletin | year=1978 | volume=85 |pages=618–660 | doi=10.1037/0033-2909.85.3.618 | pmid=353867 | issue=3| citeseerx=10.1.1.294.4262 }}</ref> Additionally, Tanenhaus et al. (1995)<ref>{{cite journal |author1=Tanenhaus M. K. |author2=Spivey-Knowlton M. J. |author3=Eberhard K. M. |author4=Sedivy J. E. | year=1995 | title=Integration of visual and linguistic information in spoken language comprehension|journal=Science |volume=268 |pages=1632–1634 | doi=10.1126/science.7777863 | pmid=7777863 | issue=5217}}</ref> established the visual world paradigm, which takes advantage of eye movements to study online spoken language processing. This area of research relies on the linking hypothesis that eye movements are closely tied to the current focus of attention.
 
===Neuroimaging and evoked potentials===