The '''factored language model''' ('''FLM''') is an extension of a conventional [[language model]] introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of ''k'' factors: <math>w_i = \{f_i^1, \dots, f_i^k\}.</math>
A major advantage of factored language models is that they allow users to specify linguistic knowledge, such as the relationship between word tokens and [[part of speech|parts of speech]] in English, or morphological information (stems, roots, etc.) in Arabic.
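The factor-vector view above can be sketched in code. The following is a minimal, illustrative example, not the authors' implementation: words are represented as vectors of factors (here surface form, stem, and part-of-speech tag, all hypothetical names), and a count-based model estimates the probability of a factor given parent factors, dropping parents one at a time when a context is unseen. This simple linear backoff is a stand-in for the generalized parallel backoff described in the cited paper.

```python
from collections import defaultdict

def make_word(surface, stem, pos):
    """Represent a word w_i as a vector of k = 3 factors {f^1, f^2, f^3}."""
    return {"surface": surface, "stem": stem, "pos": pos}

class FactorModel:
    """Count-based estimate of P(factor | parent factors) with backoff."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, factor, parents):
        # Record the factor under every suffix of its parent context,
        # so that shorter contexts are available for backoff.
        for i in range(len(parents) + 1):
            self.counts[parents[i:]][factor] += 1

    def prob(self, factor, parents):
        # Back off: drop the oldest parent until a known context is found.
        for i in range(len(parents) + 1):
            ctx = self.counts.get(parents[i:])
            if ctx:
                return ctx[factor] / sum(ctx.values())
        return 0.0

model = FactorModel()
# Condition the POS tag of a word on the previous word's POS and stem.
model.observe("NNS", ("VBD", "walk"))
print(model.prob("NNS", ("VBD", "walk")))  # full context was seen
print(model.prob("NNS", ("XX", "walk")))   # backs off to ("walk",)
```

Because parents can be factors of any type (stems, tags, surface forms), the same machinery expresses the cross-factor dependencies that a conventional word-only n-gram model cannot.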
==References==
*{{cite conference | author=J. Bilmes and K. Kirchhoff | url=http://ssli.ee.washington.edu/people/bilmes/mypapers/hlt03.pdf | title=Factored Language Models and Generalized Parallel Backoff | conference=HLT/NAACL | year=2003 }}
[[Category:Language modeling]]
[[Category:Statistical natural language processing]]
[[Category:Probabilistic models]]