Hidden Markov model: Difference between revisions

 
==== Filtering ====
The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute <math>P(x(t)\ |\ y(1),\dots,y(t))</math>. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end.
 
This problem can be handled efficiently using the [[forward algorithm]], which computes the filtered distribution <math>P(x(t)\ |\ y(1),\dots,y(t))</math> recursively, updating it as each new observation arrives.
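The forward recursion can be sketched as follows. This is a minimal illustration, not the article's notation: the transition matrix <code>A</code>, emission matrix <code>B</code>, initial distribution <code>pi</code>, and the observation sequence are assumed toy values, and each step is normalized so the rows are the filtered distributions directly.

```python
import numpy as np

def forward_filter(A, B, pi, obs):
    """Return the filtered distributions P(x(t) | y(1),...,y(t)) for each t.

    A  : (n, n) transition matrix, A[i, j] = P(x(t+1)=j | x(t)=i)
    B  : (n, m) emission matrix,  B[i, k] = P(y(t)=k | x(t)=i)
    pi : (n,)   initial state distribution
    obs: sequence of observation indices y(1),...,y(T)
    """
    n_states = A.shape[0]
    alpha = np.zeros((len(obs), n_states))
    # Initialization: alpha_1(i) proportional to pi_i * P(y(1) | x(1)=i)
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()  # normalize to obtain a probability distribution
    # Recursion: alpha_t(j) proportional to P(y(t) | x(t)=j) * sum_i alpha_{t-1}(i) * A[i, j]
    for t in range(1, len(obs)):
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    return alpha

# Toy model: two hidden states, two possible observation symbols (assumed values)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # transition probabilities
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # emission probabilities
pi = np.array([0.5, 0.5])    # initial distribution

filtered = forward_filter(A, B, pi, obs=[0, 0, 1])
print(filtered[-1])  # the filtered distribution P(x(3) | y(1), y(2), y(3))
```

Normalizing at every step keeps the computation numerically stable for long sequences and makes each row of <code>alpha</code> the filtered distribution itself rather than an unnormalized joint probability.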
 
==== Smoothing ====