{{Short description|Technology for sentiment analysis}}
'''Multimodal sentiment analysis''' is an extension of traditional text-based [[sentiment analysis]] that incorporates additional [[Modality (human–computer interaction)|modalities]] such as audio and visual data.<ref>{{cite journal |last1=Soleymani |first1=Mohammad |last2=Garcia |first2=David |last3=Jou |first3=Brendan |last4=Schuller |first4=Björn |last5=Chang |first5=Shih-Fu |last6=Pantic |first6=Maja |title=A survey of multimodal sentiment analysis |journal=Image and Vision Computing |date=September 2017 |volume=65 |pages=3–14 |doi=10.1016/j.imavis.2017.08.003|s2cid=19491070 |url=https://zenodo.org/record/3449163 }}</ref> It can be bimodal, combining two modalities, or trimodal, incorporating all three.<ref>{{cite journal |last1=Karray |first1=Fakhreddine |last2=Milad |first2=Alemzadeh |last3=Saleh |first3=Jamil Abou |last4=Mo Nours |first4=Arab |title=Human-Computer Interaction: Overview on State of the Art |journal=International Journal on Smart Sensing and Intelligent Systems |volume=1 |pages=137–159 |date=2008 |url=http://s2is.org/Issues/v1/n1/papers/paper9.pdf|doi=10.21307/ijssis-2017-283 |doi-access=free }}</ref> With the extensive amount of [[social media]] data available online in different forms such as videos and images, conventional text-based [[sentiment analysis]] has evolved into more complex models of multimodal sentiment analysis,<ref name="s1">{{cite journal |last1=Poria |first1=Soujanya |last2=Cambria |first2=Erik |last3=Bajpai |first3=Rajiv |last4=Hussain |first4=Amir |title=A review of affective computing: From unimodal analysis to multimodal fusion |journal=Information Fusion |date=September 2017 |volume=37 |pages=98–125 |doi=10.1016/j.inffus.2017.02.003|hdl=1893/25490 |s2cid=205433041 |url=http://researchrepository.napier.ac.uk/Output/1792429 |hdl-access=free }}</ref><ref>{{cite arXiv |last1=Nguyen |first1=Quy Hoang |title=New Benchmark Dataset and Fine-Grained Cross-Modal Fusion Framework for Vietnamese Multimodal Aspect-Category Sentiment Analysis |date=2024-05-01 |eprint=2405.00543 |last2=Nguyen |first2=Minh-Van Truong |last3=Van Nguyen |first3=Kiet|class=cs.CL }}</ref> which can be applied in the development of [[virtual assistant]]s,<ref name ="s5">{{cite web |title=Google AI to make phone calls for you |url=https://www.bbc.com/news/technology-44045424 |website=BBC News |access-date=12 June 2018 |date=8 May 2018}}</ref> [[Social media analytics|analysis]] of YouTube movie reviews,<ref name="s4">{{cite journal |last1=Wollmer |first1=Martin |last2=Weninger |first2=Felix |last3=Knaup |first3=Tobias |last4=Schuller |first4=Bjorn |last5=Sun |first5=Congkai |last6=Sagae |first6=Kenji |last7=Morency |first7=Louis-Philippe |title=YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context |journal=IEEE Intelligent Systems |date=May 2013 |volume=28 |issue=3 |pages=46–53 |doi=10.1109/MIS.2013.34|s2cid=12789201 |url=https://opus.bibliothek.uni-augsburg.de/opus4/files/72633/72633.pdf }}</ref> [[Social media analytics|analysis]] of news videos,<ref>{{cite arXiv|last1=Pereira |first1=Moisés H. R. |last2=Pádua |first2=Flávio L. C. |last3=Pereira |first3=Adriano C. M. |last4=Benevenuto |first4=Fabrício |last5=Dalip |first5=Daniel H. |title=Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos|date=9 April 2016 |eprint=1604.02612|class=cs.CL }}</ref> and [[emotion recognition]] (sometimes known as [[emotion]] detection) such as [[depression (mood)|depression]] monitoring,<ref name = "s6">{{cite book |last1=Zucco |first1=Chiara |last2=Calabrese |first2=Barbara |last3=Cannataro |first3=Mario |title=2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) |chapter=Sentiment analysis and affective computing for depression monitoring |date=November 2017 |pages=1988–1995 |doi=10.1109/bibm.2017.8217966 |publisher=IEEE |language=en|isbn=978-1-5090-3050-7 |s2cid=24408937 }}</ref> among others.
As in traditional [[sentiment analysis]], one of the most basic tasks in multimodal sentiment analysis is [[Feeling|sentiment]] classification, which classifies different sentiments into categories such as positive, negative, or neutral.<ref>{{cite book |last1=Pang |first1=Bo |last2=Lee |first2=Lillian |title=Opinion mining and sentiment analysis |date=2008 |publisher=Now Publishers |___location=Hanover, MA |isbn=978-1601981509}}</ref> The complexity of [[Social media analytics|analyzing]] text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion.<ref name="s1" /> The performance of these fusion techniques and of the [[classification]] [[algorithm]]s applied is influenced by the type of textual, audio, and visual features employed in the analysis.<ref name = "s7" />
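The difference between feature-level (early) and decision-level (late) fusion can be sketched as follows. This is a minimal illustrative sketch, not code from the cited literature; the feature vectors, class probabilities, and equal modality weights are assumptions chosen for the example.

```python
import numpy as np

def feature_level_fusion(text_feat, audio_feat, visual_feat):
    """Early fusion: concatenate per-modality feature vectors into
    one joint vector, which a single classifier would then consume."""
    return np.concatenate([text_feat, audio_feat, visual_feat])

def decision_level_fusion(scores, weights=None):
    """Late fusion: combine the class-probability outputs of separate
    per-modality classifiers, here by a (weighted) average."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    return np.average(scores, axis=0, weights=weights)

# Toy per-modality probabilities over (positive, negative, neutral).
text_scores = [0.7, 0.2, 0.1]
audio_scores = [0.5, 0.3, 0.2]
visual_scores = [0.6, 0.1, 0.3]

fused = decision_level_fusion([text_scores, audio_scores, visual_scores])
label = ["positive", "negative", "neutral"][int(np.argmax(fused))]
```

Hybrid fusion combines both ideas, e.g. feeding an early-fused feature vector to one classifier and merging its output with per-modality decisions at the score level.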