The '''factored language model''' ('''FLM''') is an extension of a conventional [[language model]] introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of ''k'' factors: <math>w_i = \{f_i^1, ..., f_i^k\}</math>. An FLM provides the probabilistic model <math>P(f|f_1, ..., f_N)</math>, where the prediction of a factor <math>f</math> is based on <math>N</math> parents <math>\{f_1, ..., f_N\}</math>. For example, if <math>w</math> represents a word token and <math>t</math> represents a [[Part of speech]] tag for English, the expression <math>P(w_i|w_{i-2}, w_{i-1}, t_{i-1})</math> gives a model for predicting the current word token based on a traditional [[N-gram]] model as well as the [[Part of speech]] tag of the previous word.
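The following Python sketch illustrates how such a model could be estimated from counts. It is not taken from Bilmes and Kirchhoff's work; the corpus format (sentences as lists of word/tag pairs) and the function names are hypothetical.

<syntaxhighlight lang="python">
from collections import defaultdict

def train_flm(corpus):
    """Estimate P(w_i | w_{i-2}, w_{i-1}, t_{i-1}) by maximum likelihood.

    `corpus` is a list of sentences, each a list of (word, pos_tag) pairs.
    """
    joint = defaultdict(int)    # counts of (w_{i-2}, w_{i-1}, t_{i-1}, w_i)
    context = defaultdict(int)  # counts of (w_{i-2}, w_{i-1}, t_{i-1})
    for sentence in corpus:
        for i in range(2, len(sentence)):
            w2, _ = sentence[i - 2]
            w1, t1 = sentence[i - 1]
            w0, _ = sentence[i]
            joint[(w2, w1, t1, w0)] += 1
            context[(w2, w1, t1)] += 1

    def prob(word, w2, w1, t1):
        c = context[(w2, w1, t1)]
        return joint[(w2, w1, t1, word)] / c if c else 0.0

    return prob

# Toy usage: one three-word sentence with part-of-speech tags.
corpus = [[("the", "DT"), ("cat", "NN"), ("sat", "VBD")]]
p = train_flm(corpus)
print(p("sat", "the", "cat", "NN"))  # 1.0 on this toy corpus
</syntaxhighlight>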
A major advantage of factored language models is that they allow users to specify linguistic knowledge, such as explicitly modeling the relationship between word tokens and [[Part of speech]] in English, or morphological information (stems, roots, etc.) in Arabic.
As with [[N-gram]] models, smoothing techniques are necessary in parameter estimation. In particular, generalized back-off is used in training an FLM.
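A minimal sketch of the back-off idea in the factored setting is given below. It assumes count tables populated for every subset of the full context (with unigram word counts stored under 1-tuples), and it combines the single-parent-drop paths by averaging with a fixed discount, which is only a crude stand-in for the discounting and combination functions of generalized parallel back-off.

<syntaxhighlight lang="python">
def backoff_prob(word, parents, counts, context_counts, alpha=0.4):
    """Recursively back off over a factored context (illustrative sketch).

    `parents` is a tuple of conditioning factors, e.g. (w2, w1, t1).
    `counts` maps (parents..., word) tuples to corpus counts;
    `context_counts` maps parent tuples to counts. Both are assumed
    to be populated for every subset of the full context.
    """
    if not parents:
        # Base case: unigram distribution (unigram counts as (word,) keys).
        total = sum(v for k, v in counts.items() if len(k) == 1)
        return counts.get((word,), 0) / total if total else 0.0
    c = context_counts.get(parents, 0)
    if c:
        return counts.get(parents + (word,), 0) / c
    # Unseen context: unlike linear n-gram back-off, which always drops
    # the oldest word, a factored model may drop any parent (word or tag).
    # Here all single-parent drops are averaged with a fixed discount.
    subs = [backoff_prob(word, parents[:i] + parents[i + 1:],
                         counts, context_counts, alpha)
            for i in range(len(parents))]
    return alpha * sum(subs) / len(subs)
</syntaxhighlight>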
==References==
*{{cite conference | author=J. Bilmes and K. Kirchhoff | url=http://ssli.ee.washington.edu/people/bilmes/mypapers/hlt03.pdf | title=Factored Language Models and Generalized Parallel Backoff | book-title=Human Language Technology Conference | year=2003 | archive-url=https://web.archive.org/web/20120717075838/http://ssli.ee.washington.edu/people/bilmes/mypapers/hlt03.pdf | archive-date=17 July 2012}}
[[Category:Language modeling]]
[[Category:Statistical natural language processing]]