Hidden Markov model

 
=== Statistical significance ===
For some of the above problems, it may also be interesting to ask about [[statistical significance]]. What is the probability that a sequence drawn from some [[null distribution]] will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence?<ref>{{Cite journal |last1=Newberg |first1=L. |doi=10.1186/1471-2105-10-212 |title=Error statistics of hidden Markov model and hidden Boltzmann model results |journal=BMC Bioinformatics |volume=10 |article-number=212 |year=2009 |pmid=19589158 |pmc=2722652 |doi-access=free}} {{open access}}</ref> When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the [[false positive rate]] associated with failing to reject the hypothesis for the output sequence.
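As an illustration (not part of the cited methods), such a tail probability can be estimated by Monte Carlo: draw sequences from the null distribution, score each with the forward algorithm, and count how often the score is at least that of the observed sequence. The sketch below assumes a discrete-observation HMM given by log-space parameters <code>log_pi</code>, <code>log_A</code>, <code>log_B</code> and an i.i.d. null distribution <code>null_probs</code>; all names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def forward_log_prob(obs, log_pi, log_A, log_B):
    # Forward algorithm in log space for a discrete-observation HMM.
    # log_pi: (N,) initial log-probabilities; log_A: (N, N) with
    # log_A[i, j] = log P(next state j | state i); log_B: (N, M)
    # emission log-probabilities over M observation symbols.
    alpha = log_pi + log_B[:, obs[0]]
    for symbol in obs[1:]:
        alpha = log_B[:, symbol] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)  # log P(obs | model)

def empirical_p_value(log_score, null_probs, length, params, n=10000, seed=0):
    # Monte Carlo estimate of the probability that a sequence of the
    # given length, drawn i.i.d. from null_probs, scores at least
    # log_score under the HMM given by params = (log_pi, log_A, log_B).
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):
        seq = rng.choice(len(null_probs), size=length, p=null_probs)
        hits += forward_log_prob(seq, *params) >= log_score
    return hits / n
</syntaxhighlight>

A small estimated p-value from such a procedure then plays the role of the false positive rate discussed above.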
 
== Learning ==
== Applications ==
* [[Computational finance]]<ref>{{cite journal |doi=10.1007/s10614-016-9579-y |volume=49 |issue=4 |title=Parallel Optimization of Sparse Portfolios with AR-HMMs |year=2016 |journal=Computational Economics |pages=563–578 |last1=Sipos |first1=I. Róbert |last2=Ceffer |first2=Attila |last3=Levendovszky |first3=János|s2cid=61882456}}</ref><ref>{{cite journal |doi=10.1016/j.eswa.2016.01.015 |volume=53 |title=A novel corporate credit rating system based on Student's-t hidden Markov models |year=2016 |journal=Expert Systems with Applications |pages=87–105 |last1=Petropoulos |first1=Anastasios |last2=Chatzis |first2=Sotirios P. |last3=Xanthopoulos |first3=Stylianos}}</ref>
* [[Single-molecule experiment|Single-molecule kinetic analysis]]<ref>{{cite journal |last1=Nicolai |first1=Christopher |date=2013 |doi=10.1142/S1793048013300053 |title=Solving Ion Channel Kinetics with the QuB Software |journal=Biophysical Reviews and Letters |volume=8 |issue=3–4 |pages=191–211}}</ref>
* [[Neuroscience]]<ref>{{cite journal |doi=10.1002/hbm.25835 |title=Spatiotemporally Resolved Multivariate Pattern Analysis for M/EEG |journal=Human Brain Mapping |date=2022 |last1=Higgins |first1=Cameron |last2=Vidaurre |first2=Diego |last3=Kolling |first3=Nils |last4=Liu |first4=Yunzhe |last5=Behrens |first5=Tim |last6=Woolrich |first6=Mark |volume=43 |issue=10 |pages=3062–3085 |pmid=35302683 |pmc=9188977}}</ref><ref>{{Cite journal |last1=Diomedi |first1=S. |last2=Vaccari |first2=F. E. |last3=Galletti |first3=C. |last4=Hadjidimitrakis |first4=K. |last5=Fattori |first5=P. |date=2021-10-01 |title=Motor-like neural dynamics in two parietal areas during arm reaching |url=https://www.sciencedirect.com/science/article/pii/S0301008221001301 |journal=Progress in Neurobiology |language=en |volume=205 |article-number=102116 |doi=10.1016/j.pneurobio.2021.102116 |pmid=34217822 |issn=0301-0082 |hdl=11585/834094 |s2cid=235703641 |hdl-access=free}}</ref>
* [[Cryptanalysis]]
* [[Speech recognition]], including [[Siri]]<ref>{{cite book|last1=Domingos|first1=Pedro|title=The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World|url=https://archive.org/details/masteralgorithmh0000domi|url-access=registration|date=2015|publisher=Basic Books|isbn=9780465061921|page=[https://archive.org/details/masteralgorithmh0000domi/page/37 37]|language=en}}</ref>
== Extensions ==
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a [[categorical distribution]]) or continuous (typically from a [[Gaussian distribution]]). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a [[linear dynamical system]], with a linear relationship among related variables and where all hidden and observed variables follow a [[Gaussian distribution]]. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the [[Kalman filter]]); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the [[extended Kalman filter]] or the [[particle filter]].
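For the linear-Gaussian case just mentioned, exact inference takes the form of the Kalman filter's predict–update recursion. The following sketch is illustrative only (NumPy assumed): <code>A</code> is the state-transition matrix, <code>C</code> the observation matrix, <code>Q</code> and <code>R</code> the process and observation noise covariances, and it computes the filtered distributions p(x<sub>t</sub> | y<sub>1</sub>, ..., y<sub>t</sub>).

<syntaxhighlight lang="python">
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, P0):
    # Exact filtering for the linear-Gaussian state-space model
    #   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
    #   y_t = C x_t     + v_t,  v_t ~ N(0, R)
    # mu0, P0 parameterize the Gaussian prior over the initial state.
    mu, P = mu0, P0
    filtered = []
    for y in ys:
        # Predict: propagate the current belief through the dynamics.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update: condition the prediction on the new observation y.
        S = C @ P_pred @ C.T + R             # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)
        P = (np.eye(len(mu)) - K @ C) @ P_pred
        filtered.append((mu, P))
    return filtered  # (mean, covariance) of p(x_t | y_1..y_t) at each step
</syntaxhighlight>

With nonlinear dynamics or observation models, this recursion no longer yields the exact posterior, which is what motivates the approximate methods (extended Kalman filter, particle filter) mentioned above.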
 
More recently, inference in hidden Markov models has been studied in [[Nonparametric statistics|nonparametric]] settings, where the dependency structure enables [[identifiability]] of the model<ref>{{Cite journal |last1=Gassiat |first1=E. |last2=Cleynen |first2=A. |last3=Robin |first3=S. |date=2016-01-01 |title=Inference in finite state space non parametric Hidden Markov Models and applications |url=https://doi.org/10.1007/s11222-014-9523-8 |journal=Statistics and Computing |language=en |volume=26 |issue=1 |pages=61–71 |doi=10.1007/s11222-014-9523-8 |issn=1573-1375 |url-access=subscription}}</ref> and the limits of learnability are still being explored.<ref>{{Cite journal |last1=Abraham |first1=Kweku |last2=Gassiat |first2=Elisabeth |last3=Naulet |first3=Zacharie |date=March 2023 |title=Fundamental Limits for Learning Hidden Markov Model Parameters |url=https://ieeexplore.ieee.org/document/9917566 |journal=IEEE Transactions on Information Theory |volume=69 |issue=3 |pages=1777–1794 |doi=10.1109/TIT.2022.3213429 |arxiv=2106.12936 |bibcode=2023ITIT...69.1777A |issn=0018-9448}}</ref>
 
=== Bayesian modeling of the transition probabilities ===