A review of key technologies for emotion analysis using multimodal information
X Zhu, C Guo, H Feng, Y Huang, Y Feng, X Wang… - Cognitive …, 2024 - Springer
Emotion analysis, an integral aspect of human–machine interactions, has witnessed
significant advancements in recent years. With the rise of multimodal data sources such as …
Panosent: A panoptic sextuple extraction benchmark for multimodal conversational aspect-based sentiment analysis
While existing Aspect-based Sentiment Analysis (ABSA) has received extensive effort and
advancement, there are still gaps in defining a more holistic research target seamlessly …
SDR-GNN: Spectral Domain Reconstruction Graph Neural Network for incomplete multimodal learning in conversational emotion recognition
Abstract Multimodal Emotion Recognition in Conversations (MERC) aims to classify
utterance emotions using textual, auditory, and visual modal features. Most existing MERC …
Recent trends of multimodal affective computing: A survey from NLP perspective
Multimodal affective computing (MAC) has garnered increasing attention due to its broad
applications in analyzing human behaviors and intentions, especially in text-dominated …
Conversation understanding using relational temporal graph neural networks with auxiliary cross-modality interaction
Emotion recognition is a crucial task for human conversation understanding. It becomes
more challenging with the notion of multimodal data, e.g., language, voice, and facial …
Multimodal emotion-cause pair extraction with holistic interaction and label constraint
The multimodal emotion-cause pair extraction (MECPE) task aims to detect the emotions,
causes, and emotion-cause pairs from multimodal conversations. Existing methods for this …
FedMBridge: bridgeable multimodal federated learning
Multimodal Federated Learning (MFL) addresses the setup of multiple clients with diversified
modality types (e.g., image, text, video, and audio) working together to improve their local …
A review of the emotion recognition model of robots
M Zhao, L Gong, AS Din - Applied Intelligence, 2025 - Springer
Being able to experience emotions is a defining characteristic of machine intelligence, and
the first step in giving robots emotions is to enable them to accurately recognize and …
Mamba-Enhanced Text-Audio-Video Alignment Network for Emotion Recognition in Conversations
Emotion Recognition in Conversations (ERC) is a vital area within multimodal interaction
research, dedicated to accurately identifying and classifying the emotions expressed by …
Dynamic Emotion-Dependent Network with Relational Subgraph Interaction for Multimodal Emotion Recognition
Multimodal Emotion Recognition in Conversations (MERC) is an important topic in human-
computer interaction. In the MERC task, conversations exhibit dynamic emotional …