=== Visual features ===
A key advantage of analyzing videos over text alone is the presence of rich sentiment cues in visual data.<ref>{{cite journal |last1=Poria |first1=Soujanya |last2=Cambria |first2=Erik |last3=Hazarika |first3=Devamanyu |last4=Majumder |first4=Navonil |last5=Zadeh |first5=Amir |last6=Morency |first6=Louis-Philippe |title=Context-Dependent Sentiment Analysis in User-Generated Videos |journal=Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |pages=873–883 |date=2017 |doi=10.18653/v1/p17-1081 |doi-access=free }}</ref> Visual features include [[facial expression]]s, which are of paramount importance in capturing sentiments and [[emotion]]s, as they are a main channel for conveying a person's present state of mind.<ref name="s1" /> In particular, the [[smile]] is considered one of the most predictive visual cues in multimodal sentiment analysis.<ref name="s2" /> OpenFace is an open-source facial analysis toolkit for extracting and interpreting such visual features.<ref>{{cite book |title=OpenFace: An open source facial behavior analysis toolkit |date=March 2016 |doi=10.1109/WACV.2016.7477553 |isbn=978-1-5090-0641-0 |s2cid=1919851 |url=https://www.repository.cam.ac.uk/handle/1810/280724}}</ref>
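As a minimal sketch of how such extracted features might be used, OpenFace writes per-frame CSV files whose columns include facial action-unit intensities such as <code>AU12_r</code> (the lip-corner puller, closely associated with smiling). The snippet below, with purely synthetic sample rows for illustration, averages that intensity over the frames of a clip as a crude smile-based sentiment cue; the exact column naming is an assumption about the toolkit's output format.

```python
import csv
import io

# Synthetic stand-in for an OpenFace per-frame output CSV.
# Real files contain many more columns (landmarks, gaze, other AUs).
sample = """frame, AU12_r
1, 0.0
2, 1.5
3, 3.0
"""

def mean_smile_intensity(csv_text):
    """Average the AU12 (smile) intensity over all frames of a clip."""
    # skipinitialspace handles the blank after each comma in the header.
    reader = csv.DictReader(io.StringIO(csv_text), skipinitialspace=True)
    values = [float(row["AU12_r"]) for row in reader]
    return sum(values) / len(values)

print(mean_smile_intensity(sample))  # 1.5
```

In a real pipeline this per-clip scalar would be one entry in a larger visual feature vector, concatenated or fused with textual and acoustic features before classification.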
 
== Fusion techniques ==