Proceedings chapter
Open access
English

Large-scale Affective Content Analysis: Combining Media Content Features and Facial Reactions

Presented at: Washington, DC, USA
Publisher: IEEE
Publication date: 2017
Abstract

We present a novel multimodal fusion model for affective content analysis, combining visual, audio and deep visual-sentiment descriptors from the media content with automated facial action measurements from naturalistic responses to the media. We collected a dataset of 48,867 facial responses to 384 media clips and extracted a rich feature set from the facial responses and media content. The stimulus videos were validated to be informative, inspiring, persuasive, sentimental or amusing. By combining the features, we obtained a classification accuracy of 63% (weighted F1-score: 0.62) on this five-class task, a significant improvement over using the media content features alone. By analyzing the feature sets independently, we found that the informed and persuaded states were difficult to differentiate from facial responses alone, due to the presence of similar sets of action units in each state (AU 2 occurring frequently in both cases). Facial actions were beneficial in differentiating between amused and informed states, whereas media content features alone performed less well due to similarities in the visual and audio makeup of the content. We highlight examples of content and reactions from each class. This is the first affective content analysis based on the reactions of tens of thousands of people.
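The headline metric above is a support-weighted F1-score over the five target classes (informative, inspiring, persuasive, sentimental, amusing): each class's F1 is weighted by how often that class occurs in the ground truth. A minimal sketch of that metric, with a purely hypothetical set of predictions for illustration (the function and the label sequences below are assumptions, not data from the paper):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1 across classes: each class's F1 is weighted
    by its share of the ground-truth labels, then summed."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        # true positives: predicted c when the true label is c
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        pred_c = sum(1 for p in y_pred if p == c)
        precision = tp / pred_c if pred_c else 0.0
        recall = tp / support[c]
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        score += (support[c] / total) * f1
    return score

# Hypothetical toy predictions over the paper's five classes, for illustration only
y_true = ["amusing", "amusing", "informative", "persuasive", "sentimental"]
y_pred = ["amusing", "informative", "informative", "persuasive", "sentimental"]
print(round(weighted_f1(y_true, y_pred), 2))  # → 0.8
```

Equivalently, `sklearn.metrics.f1_score(y_true, y_pred, average="weighted")` computes the same quantity.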

Keywords
  • Large-scale affective content analysis
  • Media content features
  • Facial reactions
  • Multimodal fusion model
  • Deep visual-sentiment descriptors
  • Automated facial action measurements
  • Media clips
  • Facial responses
  • AU-2
Funding
  • Swiss National Science Foundation - Ambizione
Citation (ISO format)
MCDUFF, Daniel, SOLEYMANI, Mohammad. Large-scale Affective Content Analysis: Combining Media Content Features and Facial Reactions. In: 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). Washington (DC, USA). [s.l.] : IEEE, 2017. p. 339–345. doi: 10.1109/FG.2017.49
Main files (1)
Proceedings chapter (Published version)
Access level: Public
Identifiers
ISBN: 978-1-5090-4023-0
434 views
374 downloads

Technical information

Creation: 11/02/2017 2:13:00 PM
First validation: 11/02/2017 2:13:00 PM
Update time: 03/15/2023 2:15:33 AM
Status update: 03/15/2023 2:15:33 AM
Last indexation: 01/17/2024 1:11:45 AM
All rights reserved by Archive ouverte UNIGE and the University of Geneva