Multi-scale dynamic and hierarchical relationship modeling for facial action units recognition

Z Wang, S Song, C Luo, S Deng… - Proceedings of the …, 2024 - openaccess.thecvf.com
Human facial action units (AUs) are mutually related in a hierarchical manner, as they are not only
associated with each other in both spatial and temporal domains but also AUs …

Weakly-supervised text-driven contrastive learning for facial behavior understanding

X Zhang, T Wang, X Li, H Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Contrastive learning has shown promising potential for learning robust representations by
utilizing unlabeled data. However, constructing effective positive-negative pairs for …

A joint local spatial and global temporal CNN-Transformer for dynamic facial expression recognition

L Wang, X Kang, F Ding, S Nakagawa, F Ren - Applied Soft Computing, 2024 - Elsevier
Unlike conventional video action recognition, Dynamic Facial Expression Recognition
(DFER) tasks exhibit minimal spatial movement of objects. Addressing this distinctive …

Toward robust facial action units' detection

J Yang, Y Hristov, J Shen, Y Lin… - Proceedings of the …, 2023 - ieeexplore.ieee.org
Facial action unit (AU) detection plays an important role in performing facial behavioral
analysis of raw video inputs. Overall, there are three key factors that contribute toward the …

Knowledge-spreader: Learning semi-supervised facial action dynamics by consistifying knowledge granularity

X Li, X Zhang, T Wang, L Yin - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Recent studies on dynamic facial action unit (AU) detection have extensively relied on
dense annotations. However, manual annotations are difficult, time-consuming, and costly …

Reactionet: Learning high-order facial behavior from universal stimulus-reaction by dyadic relation reasoning

X Li, T Wang, G Zhao, X Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Diverse visual stimuli can evoke various human affective states, which are usually
manifested in an individual's muscular actions and facial expressions. In lab-controlled …

Multimodal channel-mixing: Channel and spatial masked autoencoder on facial action unit detection

X Zhang, H Yang, T Wang, X Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Recent studies have focused on utilizing multi-modal data to develop robust models for
facial Action Unit (AU) detection. However, the heterogeneity of multi-modal data poses …

Disagreement matters: Exploring internal diversification for redundant attention in generic facial action analysis

X Li, Z Zhang, X Zhang, T Wang, Z Li… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
This paper demonstrates the effectiveness of a diversification mechanism for building a
more robust multi-attention system in generic facial action analysis. While previous multi …

Self Decoupling-Reconstruction Network for Facial Expression Recognition

L Wang, X Kang, F Ding, HT Yu, Y Wu… - … Joint Conference on …, 2024 - ieeexplore.ieee.org
Facial Expression Recognition (FER) poses significant challenges due to various imaging
conditions, including diverse head poses, lighting conditions, resolutions, and occlusions …

Knowledge-spreader: Learning facial action unit dynamics with extremely limited labels

X Li, X Zhang, T Wang, L Yin - arXiv preprint arXiv:2203.16678, 2022 - arxiv.org
Recent studies on the automatic detection of facial action units (AUs) have extensively relied
on large-sized annotations. However, manual AU labeling is difficult, time-consuming, and …