List of facial expression databases

A '''facial expression database''' is a collection of images or video clips with [[facial expression]]s of a range of [[emotions]].
Well-annotated ([[emotion]]-tagged) media content of facial behavior is essential for training, testing, and validating [[algorithm]]s used to develop [[Emotion recognition|expression recognition systems]]. Emotion annotation can be done with [[discrete emotion theory|discrete emotion]] labels or on a continuous scale. Most databases are based on the [[basic emotions]] theory of [[Paul Ekman]], which assumes the existence of six discrete basic emotions: anger, fear, disgust, surprise, joy, and sadness. Some databases, however, tag emotion on a continuous arousal–valence scale, and some include Action Unit (AU) activations based on the Facial Action Coding System (FACS).<ref>Freitas-Magalhães, A. (2018). ''Facial Action Coding System 3.0: Manual of Scientific Codification of the Human Face'' (English edition). Porto: FEELab Science Books. {{ISBN|978-989-8766-89-2}}</ref>
 
In posed expression databases, participants are asked to display different basic emotional expressions, whereas in spontaneous expression databases, the expressions are natural. Spontaneous expressions differ markedly from posed ones in intensity, configuration, and duration. Moreover, synthesis of some AUs is barely achievable without undergoing the associated emotional state. As a result, posed expressions are typically exaggerated, while spontaneous ones are subtle and differ in appearance.
Ratings provided by 319 human raters
|Posed
|-
|F-M FACS 3.0 (EDU, PRO & XYZ versions)<ref>Freitas-Magalhães, A. (2018). ''Facial Action Coding System 3.0: Manual of Scientific Codification of the Human Face'' (English edition). Porto: FEELab Science Books. {{ISBN|978-989-8766-89-2}}</ref>
|F-M FACS 3.0 introduces 8 new Action Units (AUs) and 22 Tongue Movements (TMs), in addition to functional and structural nomenclature;
3D technology and automatic and real-time recognition;
neutral, sadness, surprise, happiness, fear, anger, contempt and disgust
|10
|4,877 video and image sequences
|Color
|3D 4K
|Facial expression labels and AU intensity for each video frame
|Posed and Spontaneous
|-
|Extended Cohn-Kanade Dataset (CK+)<ref>P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews, "The Extended Cohn-Kanade Dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression," in ''3rd IEEE Workshop on CVPR for Human Communicative Behavior Analysis'', 2010</ref> [http://www.consortium.ri.cmu.edu/ckagree/ download]