Let <math>A</math> be a state space (finite alphabet) of size <nowiki>|A|</nowiki>.
Consider a sequence with the [[Markov chain|Markov property]] <math>x_1^{n}=x_1x_2 \ldots x_n</math> of <math>n</math> realizations of [[random variable]]s, where <math>x_i \in A</math> is the state (symbol) at position <math>i</math>.
Given a training sequence of observed states, <math>x_1^{n}</math>, the VOM construction algorithm learns a model <math>P</math> that assigns a [[probability]] to each state in the sequence given its context (the previously observed symbols).
Specifically, the learner generates a conditional probability distribution <math>P(x|s)</math> for a symbol <math>x \in A</math> given a context <math>s \in A^*</math>, where <math>A^*</math> denotes the set of all finite sequences over <math>A</math>, including the empty context.
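The learning step described above can be illustrated with a minimal sketch: counting, for every context up to some maximum length (including the empty context), how often each symbol follows it, and normalizing the counts into conditional probabilities. The function name <code>learn_vom</code> and the parameter <code>max_order</code> are illustrative choices, not part of any standard library; real VOM learners (e.g. those based on prediction suffix trees) additionally prune contexts and smooth the estimates.

```python
from collections import defaultdict

def learn_vom(training, max_order):
    """Estimate P(x|s) for every context s of length 0..max_order
    observed in the training sequence (hypothetical sketch)."""
    # counts[context][symbol] = number of times `symbol` followed `context`
    counts = defaultdict(lambda: defaultdict(int))
    for i, symbol in enumerate(training):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            context = training[i - k:i]  # k = 0 gives the empty context
            counts[context][symbol] += 1
    # Normalize counts into conditional probability distributions
    model = {}
    for context, sym_counts in counts.items():
        total = sum(sym_counts.values())
        model[context] = {x: c / total for x, c in sym_counts.items()}
    return model

model = learn_vom("abracadabra", max_order=2)
# Empty context: unconditional symbol frequencies, e.g. P(a|"") = 5/11.
# Context "ab": "ab" is always followed by "r", so P(r|"ab") = 1.
```

Note that contexts of different lengths coexist in the model: the learner can fall back to a shorter context when a longer one was never observed, which is what makes the order "variable".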