The '''factored language model''' ('''FLM''') is an extension of a conventional [[language model]] introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of ''k'' factors: <math>w_i = \{f_i^1, ..., f_i^k\}.</math> An FLM provides the probabilistic model <math>P(f|f_1, ..., f_N)</math>, where the prediction of a factor <math>f</math> is based on <math>N</math> parents <math>\{f_1, ..., f_N\}</math>. For example, if <math>w</math> represents a word token and <math>t</math> represents a [[part of speech]] tag for English, the expression <math>P(w_i|w_{i-2}, w_{i-1}, t_{i-1})</math> gives a model for predicting the current word token based on a traditional [[n-gram]] context as well as the part-of-speech tag of the previous word.
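As a minimal illustration of the conditional distribution above, the following sketch estimates <math>P(w_i|w_{i-2}, w_{i-1}, t_{i-1})</math> by maximum likelihood from a tiny part-of-speech-tagged corpus. The function names and toy data are invented for this example; a practical FLM would apply smoothing and backoff over the factor parents rather than raw counts, so that unseen contexts do not receive zero probability.

<syntaxhighlight lang="python">
from collections import defaultdict

def train_flm(tagged_corpus):
    """Collect counts for the factored model P(w_i | w_{i-2}, w_{i-1}, t_{i-1}).

    tagged_corpus is a list of sentences, each a list of (word, tag) pairs.
    """
    context_counts = defaultdict(int)  # counts of the parent tuple (w_{i-2}, w_{i-1}, t_{i-1})
    joint_counts = defaultdict(int)    # counts of (w_{i-2}, w_{i-1}, t_{i-1}, w_i)
    for sentence in tagged_corpus:
        # Pad so the first two words also have a full set of parents.
        padded = [("<s>", "<s>")] * 2 + sentence
        for i in range(2, len(padded)):
            w2, _ = padded[i - 2]
            w1, t1 = padded[i - 1]
            w, _ = padded[i]
            context = (w2, w1, t1)
            context_counts[context] += 1
            joint_counts[context + (w,)] += 1
    return context_counts, joint_counts

def prob(context_counts, joint_counts, w, w2, w1, t1):
    """Maximum-likelihood estimate of P(w | w2, w1, t1); 0.0 for unseen contexts."""
    context = (w2, w1, t1)
    if context_counts[context] == 0:
        return 0.0
    return joint_counts[context + (w,)] / context_counts[context]

# Tiny illustrative corpus of (word, part-of-speech tag) pairs.
corpus = [
    [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
    [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")],
]
context_counts, joint_counts = train_flm(corpus)
print(prob(context_counts, joint_counts, "barks", "the", "dog", "NN"))  # 1.0
</syntaxhighlight>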
A major advantage of factored language models is that they allow users to specify linguistic knowledge, such as the relationship between word tokens and [[part of speech|parts of speech]] in English, or morphological information (stems, roots, etc.) in Arabic.