==Hierarchical Markov models==
Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's ___location in a room, can be interpreted to determine more complex information, such as what task or activity the person is performing. Two kinds of hierarchical Markov models are the [[Hierarchical hidden Markov model]]<ref name="HHMM">{{cite journal |first1=S. |last1=Fine |first2=Y. |last2=Singer |title=The hierarchical hidden markov model: Analysis and applications |journal=Machine Learning |volume=32 |issue=1 |pages=41–62 |year=1998 |doi=10.1023/A:1007469218079|doi-access=free }}</ref> and the Abstract Hidden Markov Model.<ref name="AHMM">{{cite journal |first1=H. H. |last1=Bui |first2=S. |last2=Venkatesh |first3=G. |last3=West |url=https://www.jair.org/index.php/jair/article/view/10316 |title=Policy recognition in the abstract hidden markov model |journal=Journal of Artificial Intelligence Research |volume=17 |pages=451–499 |year=2002 |doi=10.1613/jair.839|doi-access=free }}</ref> Both have been used for behavior recognition,<ref name="HierarchicalLearningAndPlanningInPOMDPs">{{cite thesis |first=G. |last=Theocharous |url=http://dl.acm.org/citation.cfm?id=936140 |title=Hierarchical Learning and Planning in Partially Observable Markov Decision Processes |type=PhD |publisher=Michigan State University |year=2002}}</ref> and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.<ref name="AHMM" /><ref name="RecognitionOfHumanActivityThroughHierarchicalStochasticLearning">{{cite book |first1=S. |last1=Luhr |first2=H. H. |last2=Bui |first3=S. |last3=Venkatesh |first4=G. A. W. |last4=West |chapter-url=http://dl.acm.org/citation.cfm?id=826390 |chapter=Recognition of Human Activity through Hierarchical Stochastic Learning |title=PERCOM '03 Proceedings of the First IEEE International Conference on Pervasive Computing and Communications |pages=416–422 |year=2003 |doi=10.1109/PERCOM.2003.1192766|isbn=978-0-7695-1893-0 |citeseerx=10.1.1.323.928 |s2cid=13938580 }}</ref>
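The two-level structure described above can be sketched as a generative model in which a top-level chain over activities governs a low-level chain over observable locations. All activity names, locations, and transition probabilities below are invented for illustration and are not taken from the cited papers:

```python
import random

# Top-level chain over abstract activities (illustrative values).
ACTIVITY_TRANS = {"cooking":  {"cooking": 0.9, "cleaning": 0.1},
                  "cleaning": {"cleaning": 0.8, "cooking": 0.2}}

# Each activity selects its own low-level chain over room locations.
LOCATION_TRANS = {
    "cooking":  {"kitchen": {"kitchen": 0.9, "hall": 0.1},
                 "hall":    {"kitchen": 0.7, "hall": 0.3}},
    "cleaning": {"kitchen": {"hall": 0.6, "kitchen": 0.4},
                 "hall":    {"hall": 0.5, "kitchen": 0.5}},
}

def sample(dist, rng):
    """Draw one outcome from a {outcome: probability} dict."""
    r, acc = rng.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against rounding at the boundary

def generate(steps, rng=None):
    """Generate (activity, ___location) pairs from the two-level model."""
    rng = rng or random.Random(0)
    activity, loc = "cooking", "kitchen"
    out = []
    for _ in range(steps):
        out.append((activity, loc))
        # The low-level ___location transition depends on the current activity.
        loc = sample(LOCATION_TRANS[activity][loc], rng)
        # The high-level activity evolves by its own chain.
        activity = sample(ACTIVITY_TRANS[activity], rng)
    return out
```

Inference in the hierarchical models cited above runs in the opposite direction: given only the ___location sequence, the hidden activity sequence is estimated, and the conditional independence between levels keeps that computation tractable.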
==Tolerant Markov model==
A Tolerant Markov model (TMM) is a probabilistic-algorithmic Markov chain model.<ref name="TMMs">{{cite book |first1=D. |last1=Pratas |first2=M. |last2=Hosseini |first3=A. J. |last3=Pinho |chapter=Substitutional tolerant Markov models for relative compression of DNA sequences |title=PACBB 2017 – 11th International Conference on Practical Applications of Computational Biology & Bioinformatics, Porto, Portugal |pages=265–272 |year=2017 |doi=10.1007/978-3-319-60816-7_32 |isbn=978-3-319-60815-0}}</ref> It assigns probabilities according to a conditioning context in which the last symbol is taken to be the one the model considers most probable, rather than the symbol that actually occurred in the sequence. A TMM can model three kinds of deviations: substitutions, additions, or deletions. TMMs have been applied efficiently to the compression of DNA sequences.<ref name="TMMs" /><ref name="GECO">{{cite book |first1=D. |last1=Pratas |first2=A. J. |last2=Pinho |first3=P. J. S. G. |last3=Ferreira |chapter=Efficient compression of genomic sequences |title=Data Compression Conference (DCC), 2016 |pages=231–240 |publisher=IEEE |year=2016 |doi=10.1109/DCC.2016.60|isbn=978-1-5090-1853-6 |s2cid=14230416 }}</ref>
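The substitution-tolerant conditioning can be sketched in Python for an order-''k'' model over a DNA alphabet: the last symbol of each context is replaced by the model's own previous best guess before computing the next-symbol probability. Function names, the smoothing scheme, and all parameters are illustrative, not taken from the cited papers:

```python
from collections import defaultdict

ALPHABET = "ACGT"

def train(seq, k):
    """Count order-k context -> next-symbol frequencies."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - k):
        counts[seq[i:i + k]][seq[i + k]] += 1
    return counts

def prob(counts, ctx, sym, alpha=1.0):
    """Laplace-smoothed conditional probability P(sym | ctx)."""
    c = counts.get(ctx, {})
    total = sum(c.values()) + alpha * len(ALPHABET)
    return (c.get(sym, 0) + alpha) / total

def tolerant_probs(counts, target, k, alpha=1.0):
    """Per-symbol probabilities where each context's last symbol is the
    model's previous best guess rather than the observed symbol, so an
    isolated substitution in `target` does not corrupt the context."""
    probs, prev_guess = [], None
    for i in range(k, len(target)):
        ctx = target[i - k:i]
        if prev_guess is not None:
            ctx = ctx[:-1] + prev_guess   # tolerate a substitution
        probs.append(prob(counts, ctx, target[i], alpha))
        c = counts.get(ctx, {})
        prev_guess = max(c, key=c.get) if c else None
    return probs
```

In a compression setting, higher probabilities translate directly into fewer bits per symbol, which is why tolerating isolated substitutions (such as point mutations between related DNA sequences) improves compression.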
==Markov-chain forecasting models==