{{Short description|Technology for sentiment analysis}}
'''Multimodal sentiment analysis''' is an extension of traditional text-based [[sentiment analysis]] that incorporates additional modalities, such as audio and visual data, alongside text.
As in traditional [[sentiment analysis]], one of the most basic tasks in multimodal sentiment analysis is [[Feeling|sentiment]] classification, which classifies different sentiments into categories such as positive, negative, or neutral.<ref>{{cite book |last1=Pang |first1=Bo |last2=Lee |first2=Lillian |title=Opinion mining and sentiment analysis |date=2008 |publisher=Now Publishers |___location=Hanover, MA |isbn=978-1601981509}}</ref> The complexity of [[Social media analytics|analyzing]] text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion.<ref name="s1" /> The performance of these fusion techniques and of the [[classification]] [[algorithm]]s applied is influenced by the type of textual, audio, and visual features employed in the analysis.<ref name = "s7" />
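The distinction between feature-level and decision-level fusion can be sketched as follows. This is a minimal illustration with made-up numbers and hypothetical feature names; real systems use learned classifiers and high-dimensional features, and the modality weights shown are purely illustrative:

```python
# Illustrative per-modality feature vectors for one video clip
# (names, values, and dimensions are hypothetical).
text_feats   = [0.12, 0.80, 0.33]   # e.g. word-embedding values
audio_feats  = [0.40, 0.05]         # e.g. prosodic descriptors
visual_feats = [0.90]               # e.g. smile intensity

# Feature-level (early) fusion: concatenate all modality features into
# a single vector before one classifier sees them.
fused_early = text_feats + audio_feats + visual_feats   # length 6

# Decision-level (late) fusion: each modality has its own classifier;
# their class-probability outputs are combined, e.g. by weighted averaging.
p_text   = [0.7, 0.2, 0.1]    # P(positive), P(negative), P(neutral)
p_audio  = [0.5, 0.3, 0.2]
p_visual = [0.6, 0.1, 0.3]
weights  = [0.5, 0.25, 0.25]  # illustrative per-modality weights

p_fused = [sum(w * p[i] for w, p in zip(weights, (p_text, p_audio, p_visual)))
           for i in range(3)]
label = ["positive", "negative", "neutral"][p_fused.index(max(p_fused))]
# label == "positive" for these illustrative numbers
```

Hybrid fusion combines both ideas, for example by feeding decision-level outputs alongside concatenated features into a final classifier.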
=== Visual features ===
One of the main advantages of analyzing videos rather than texts alone is the presence of rich sentiment cues in visual data.<ref>{{cite journal |last1=Poria |first1=Soujanya |last2=Cambria |first2=Erik |last3=Hazarika |first3=Devamanyu |last4=Majumder |first4=Navonil |last5=Zadeh |first5=Amir |last6=Morency |first6=Louis-Philippe |title=Context-Dependent Sentiment Analysis in User-Generated Videos |journal=Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |pages=873–883 |date=2017 |doi=10.18653/v1/p17-1081 |doi-access=free }}</ref> Visual features include [[facial expression]]s, which are of paramount importance in capturing sentiments and [[emotion]]s, as they are a main channel for expressing a person's present state of mind.<ref name="s1" /> Specifically, the [[smile]] is considered to be one of the most predictive visual cues in multimodal sentiment analysis.<ref name="s2" /> OpenFace is an open-source facial analysis toolkit available for extracting and understanding such visual features.
== Fusion techniques ==
== Applications ==
Similar to text-based sentiment analysis, multimodal sentiment analysis can be applied in the development of different forms of [[recommender system]]s, such as in the analysis of user-generated videos of movie reviews<ref name="s4" /> and general product reviews,<ref>{{cite journal |last1=Pérez-Rosas |first1=Verónica |last2=Mihalcea |first2=Rada |last3=Morency |first3=Louis Philippe |title=Utterance-level multimodal sentiment analysis |journal=Long Papers |date=1 January 2013 |url=https://experts.umich.edu/en/publications/utterance-level-multimodal-sentiment-analysis |publisher=Association for Computational Linguistics (ACL)}}</ref> to predict the sentiments of customers and subsequently create product or service recommendations.<ref>{{cite web |last1=Chui |first1=Michael |last2=Manyika |first2=James |last3=Miremadi |first3=Mehdi |last4=Henke |first4=Nicolaus |last5=Chung |first5=Rita |last6=Nel |first6=Pieter |last7=Malhotra |first7=Sankalp |title=Notes from the AI frontier. Insights from hundreds of use cases |url=https://www.mckinsey.com/mgi/ }}</ref>
==References==
{{Reflist}}