Multimodal sentiment analysis
=== Audio Features ===
 
[[Feeling|Sentiment]] and [[emotion]] characteristics are prominent in different [[phonetic]] and [[prosodic]] properties contained in audio features.<ref>{{cite journal |last1=Chung-Hsien Wu |last2=Wei-Bin Liang |title=Emotion Recognition of Affective Speech Based on Multiple Classifiers Using Acoustic-Prosodic Information and Semantic Labels |journal=IEEE Transactions on Affective Computing |date=January 2011 |volume=2 |issue=1 |pages=10–21 |doi=10.1109/T-AFFC.2010.16}}</ref> Some of the most important audio features employed in multimodal sentiment analysis are [[mel-frequency cepstrum|mel-frequency cepstral coefficients (MFCC)]], [[spectral centroid]], [[spectral flux]], [[beat]]{{disambiguation needed|date=June 2018}} histogram, beat sum, strongest beat, pause duration, and [[pitch accent|pitch]].<ref name="s1" /> [[OpenSMILE]]<ref>{{cite book |last1=Eyben |first1=Florian |last2=Wöllmer |first2=Martin |last3=Schuller |first3=Björn |title=OpenEAR — Introducing the Munich open-source emotion and affect recognition toolkit |pages=1 |date=2009 |doi=10.1109/ACII.2009.5349350 |url=http://ieeexplore.ieee.org/document/5349350 |isbn=978-1-4244-4800-5}}</ref> and [[Praat]] are popular open-source toolkits for extracting such audio features.<ref>{{cite book |last1=Morency |first1=Louis-Philippe |last2=Mihalcea |first2=Rada |last3=Doshi |first3=Payal |title=Towards multimodal sentiment analysis: harvesting opinions from the web |date=14 November 2011 |pages=169–176 |doi=10.1145/2070481.2070509 |url=https://dl.acm.org/citation.cfm?id=2070509 |publisher=ACM |isbn=9781450306416}}</ref>
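As an illustration of one of the simpler features listed above (not the method of any particular toolkit), the spectral centroid of an audio frame can be computed directly from its magnitude spectrum as the magnitude-weighted mean of the frequencies present; the NumPy sketch below assumes a single mono frame as input:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Spectral centroid of one audio frame: the weighted mean of the
    frequencies in the signal, with spectral magnitudes as weights.
    Often interpreted as a measure of perceived 'brightness'."""
    magnitudes = np.abs(np.fft.rfft(signal))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * magnitudes) / np.sum(magnitudes)

# Synthetic check: a pure 440 Hz tone sampled at 16 kHz for one second.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(spectral_centroid(tone, sr))  # approximately 440.0, the tone's frequency
```

Real systems compute this per short window (e.g. 20–40 ms frames) over an utterance and feed the resulting feature trajectory, alongside MFCCs, pitch, and the other features above, into a classifier.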
 
=== Visual Features ===
== Applications ==
 
Similar to text-based sentiment analysis, multimodal sentiment analysis can be applied in the development of different forms of [[recommender system]]s, such as in the analysis of user-generated videos of movie reviews<ref name="s4" /> and general product reviews,<ref>{{cite journal |last1=Pérez-Rosas |first1=Verónica |last2=Mihalcea |first2=Rada |last3=Morency |first3=Louis Philippe |title=Utterance-level multimodal sentiment analysis |journal=Long Papers |date=1 January 2013 |url=https://experts.umich.edu/en/publications/utterance-level-multimodal-sentiment-analysis |publisher=Association for Computational Linguistics (ACL)}}</ref> to predict the sentiments of customers and subsequently create product or service recommendations.<ref>{{cite web |last1=Chui |first1=Michael |last2=Manyika |first2=James |last3=Miremadi |first3=Mehdi |last4=Henke |first4=Nicolaus |last5=Chung |first5=Rita |last6=Nel |first6=Pieter |last7=Malhotra |first7=Sankalp |title=Notes from the AI frontier. Insights from hundreds of use cases |url=https://www.mckinsey.com/mgi/ |website=McKinsey & Company |publisher=McKinsey & Company |accessdate=13 June 2018 |language=en}}</ref> Multimodal sentiment analysis also plays an important role in the advancement of [[virtual assistant]]s through the application of [[natural language processing]] (NLP) and [[machine learning]] techniques.<ref name="s5" /> In the healthcare ___domain, multimodal sentiment analysis can be utilized to detect certain medical conditions such as [[Psychological stress|stress]], [[anxiety]], or [[Depression (mood)|depression]].<ref name="s6" /> Multimodal sentiment analysis can also be applied in understanding the sentiments contained in video news programs, which is considered a complicated and challenging ___domain, as sentiments expressed by reporters tend to be less obvious or neutral.<ref>{{cite book |last1=Ellis |first1=Joseph G. |last2=Jou |first2=Brendan |last3=Chang |first3=Shih-Fu |title=Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News |date=12 November 2014 |pages=104–111 |doi=10.1145/2663204.2663237 |url=https://dl.acm.org/citation.cfm?doid=2663204.2663237 |publisher=ACM |isbn=9781450328852}}</ref>
 
==References==