AVEC 2016: Depression, mood, and emotion recognition workshop and challenge

M Valstar, J Gratch, B Schuller, F Ringeval… - Proceedings of the 6th …, 2016 - dl.acm.org
The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and
Emotion" will be the sixth competition event aimed at comparison of multimedia processing …

Facial expression recognition in video with multiple feature fusion

J Chen, Z Chen, Z Chi, H Fu - IEEE Transactions on Affective …, 2016 - ieeexplore.ieee.org
Video-based facial expression recognition has been a long-standing problem and has attracted
growing attention recently. The key to a successful facial expression recognition system is to …

AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data

F Ringeval, B Schuller, M Valstar, S Jaiswal… - Proceedings of the 5th …, 2015 - dl.acm.org
We present the first Audio-Visual+Emotion recognition Challenge and workshop (AV+EC
2015) aimed at comparison of multimedia processing and machine learning methods for …

Developing crossmodal expression recognition based on a deep neural model

P Barros, S Wermter - Adaptive behavior, 2016 - journals.sagepub.com
A robot capable of understanding emotion expressions can increase its own capability of
solving problems by using emotion expressions as part of its own decision-making, in a …

Facial Affect "In-The-Wild"

S Zafeiriou, A Papaioannou, I Kotsia… - Proceedings of the …, 2016 - cv-foundation.org
Well-established benchmarks have been developed in the past 20 years for automatic facial
behaviour analysis. Nevertheless, for some important problems regarding analysis of facial …

Audio and face video emotion recognition in the wild using deep neural networks and small datasets

W Ding, M Xu, D Huang, W Lin, M Dong, X Yu… - Proceedings of the 18th …, 2016 - dl.acm.org
This paper presents the techniques used in our contribution to Emotion Recognition in the
Wild 2016's video based sub-challenge. The purpose of the sub-challenge is to classify the …

Audio–visual domain adaptation using conditional semi-supervised generative adversarial networks

C Athanasiadis, E Hortal, S Asteriadis - Neurocomputing, 2020 - Elsevier
Accessing large, manually annotated audio databases in an effort to create robust models
for emotion recognition is a notably difficult task, handicapped by the annotation cost and …

Facial affect "in-the-wild": A survey and a new database

S Zafeiriou, A Papaioannou, I Kotsia… - … Vision and Pattern …, 2016 - openaccess.thecvf.com
Well-established databases and benchmarks have been developed in the past 20 years for
automatic facial behaviour analysis. Nevertheless, for some important problems regarding …

Enrollment-less training for personalized voice activity detection

N Makishima, M Ihori, T Tanaka, A Takashima… - arXiv preprint arXiv …, 2021 - arxiv.org
We present a novel personalized voice activity detection (PVAD) learning method that does
not require enrollment data during training. PVAD is a task to detect the speech segments of …

Towards cross-lingual automatic diagnosis of autism spectrum condition in children's voices

M Schmitt, E Marchi, F Ringeval… - … ; 12. ITG Symposium, 2016 - ieeexplore.ieee.org
Automatic diagnosis of Autism Spectrum Conditions (ASC) from the voice is still in its
infancy. The comparably few studies up to now focus mostly on the relevance of acoustic …