List of facial expression databases

Well-annotated (emotion-tagged) media content of facial behavior is essential for training, testing, and validating algorithms for the development of expression recognition systems. Emotion annotation can be done with discrete emotion labels or on a continuous scale. Most databases are based on the basic emotions theory (by [[Paul Ekman]]), which assumes the existence of six discrete basic emotions: anger, fear, disgust, surprise, joy, and sadness. However, some databases provide emotion tagging on a continuous arousal–valence scale, and some include action unit (AU) activations based on the Facial Action Coding System (FACS).
 
In posed expression databases, the participants are asked to display different basic emotional expressions, while in spontaneous expression databases, the expressions are natural. Spontaneous expressions differ markedly from posed ones in intensity, configuration, and duration. Moreover, synthesis of some AUs is barely achievable without the person undergoing the associated emotional state. Therefore, in most cases, posed expressions are exaggerated, while spontaneous ones are subtle and differ in appearance.
 
Many publicly available databases are categorized here.<ref>{{Cite web|url=http://emotion-research.net/wiki/Databases|title=Collection of emotional databases}}</ref><ref>{{Cite web|url=https://www.ecse.rpi.edu/~cvrl/database/other_facial_expression.htm|title=Facial expression databases}}</ref> Some of the database contents are summarized in the following table.
{| class="wikitable"
!Database
|Extended Cohn-Kanade Dataset (CK+)<ref>P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews, "The Extended Cohn-Kanade Dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression," in ''3rd IEEE Workshop on CVPR for Human Communicative Behavior Analysis'', 2010</ref>
|neutral, sadness, surprise, happiness, fear, anger, and disgust
|123
|593 image sequences (327 sequences having discrete emotion labels)
|Mostly gray
|
|-
|DISFA<ref>S. M. Mavadati, M. H. Mahoor, K. Bartlett, P. Trinh and J. Cohn, "DISFA: A Spontaneous Facial Action Intensity Database," ''IEEE Trans. Affective Computing,'' vol. 4, no. 2, pp. 151–160, 2013</ref>
| -
|27
|Spontaneous
|}
 
==References==
{{reflist}}