Recursive Bayesian estimation

This leads to the ''predict'' and ''update'' steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral), over all possible <math>\textbf{x}_{k-1}</math>, of the product of the probability distribution associated with the transition from the (''k'' - 1)-th timestep to the ''k''-th and the probability distribution associated with the previous state.
 
:<math> p(\textbf{x}_k|\textbf{z}_{k-1}) = \int p(\textbf{x}_k | \textbf{x}_{k-1}) p(\textbf{x}_{k-1} | \textbf{z}_{k-1} ) \, d\textbf{x}_{k-1} </math>
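
The following is a minimal numerical sketch of this prediction integral (not part of the original derivation), assuming a one-dimensional state discretized on a grid and a hypothetical Gaussian random-walk transition density; NumPy, the grid bounds, the function names and the noise parameter <code>q</code> are illustrative choices only. The integral is approximated by a Riemann sum over the grid.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative discretization of a one-dimensional state space.
x_grid = np.linspace(-5.0, 5.0, 201)   # candidate values for x_k and x_{k-1}
dx = x_grid[1] - x_grid[0]             # grid spacing, used for Riemann-sum integration

def transition_density(x_k, x_km1, q=0.5):
    """p(x_k | x_{k-1}): assumed Gaussian random-walk transition (example only)."""
    return np.exp(-0.5 * ((x_k - x_km1) / q) ** 2) / (np.sqrt(2.0 * np.pi) * q)

def predict(prior_km1):
    """Approximate p(x_k | z_{k-1}) = integral of p(x_k | x_{k-1}) p(x_{k-1} | z_{k-1}) dx_{k-1}."""
    # T[i, j] = p(x_grid[i] | x_grid[j]); the matrix-vector product sums over x_{k-1}.
    T = transition_density(x_grid[:, None], x_grid[None, :])
    return T @ prior_km1 * dx
</syntaxhighlight>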
 
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state distribution.
:<math> p(\textbf{x}_k|\textbf{z}_{k}) = \frac{p(\textbf{z}_k|\textbf{x}_k) p(\textbf{x}_k|\textbf{z}_{k-1})}{p(\textbf{z}_k|\textbf{z}_{k-1})}
= \alpha\,p(\textbf{z}_k|\textbf{x}_k) p(\textbf{x}_k|\textbf{z}_{k-1})
</math>
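
Continuing the grid-based sketch above (again illustrative, not part of the article), the update step multiplies the predicted density by a hypothetical Gaussian measurement likelihood and divides by the evidence term, here computed explicitly as the Riemann sum of the numerator:

<syntaxhighlight lang="python">
def measurement_likelihood(z_k, x_k, r=1.0):
    """p(z_k | x_k): assumed Gaussian measurement of the state (example only)."""
    return np.exp(-0.5 * ((z_k - x_k) / r) ** 2) / (np.sqrt(2.0 * np.pi) * r)

def update(z_k, predicted):
    """p(x_k | z_k) = p(z_k | x_k) p(x_k | z_{k-1}) / p(z_k | z_{k-1}) on the grid."""
    numerator = measurement_likelihood(z_k, x_grid) * predicted
    evidence = np.sum(numerator) * dx   # p(z_k | z_{k-1}), constant with respect to x_k
    return numerator / evidence
</syntaxhighlight>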
 
The denominator
:<math>p(\textbf{z}_k|\textbf{z}_{k-1}) = \int p(\textbf{z}_k|\textbf{x}_k) p(\textbf{x}_k|\textbf{z}_{k-1}) \, d\textbf{x}_k</math>
is constant with respect to <math>\textbf{x}_k</math>, so it can be replaced by a normalizing coefficient <math>\alpha</math>, which can usually be ignored in practice. The numerator can be calculated and then simply normalized, since its integral must equal one.
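
The role of <math>\alpha</math> can be seen in the sketch above: dividing by the explicitly computed evidence and simply rescaling the numerator so that it integrates to one give the same posterior. A short run with made-up values, assuming a Gaussian prior over <math>\textbf{x}_{k-1}</math> and an arbitrary measurement:

<syntaxhighlight lang="python">
# Example run with illustrative values, continuing the sketch above.
prior = np.exp(-0.5 * x_grid ** 2)               # unnormalized Gaussian prior over x_{k-1}
prior /= np.sum(prior) * dx                      # normalize so it integrates to one

predicted = predict(prior)                       # p(x_k | z_{k-1})
posterior = update(1.2, predicted)               # divide by the explicit evidence

# Equivalent "alpha" route: never compute the denominator, just renormalize the numerator.
numerator = measurement_likelihood(1.2, x_grid) * predicted
posterior_alpha = numerator / (np.sum(numerator) * dx)

assert np.allclose(posterior, posterior_alpha)
</syntaxhighlight>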