MISA: Modality-invariant and -specific representations for multimodal sentiment analysis

D Hazarika, R Zimmermann, S Poria - Proceedings of the 28th ACM …, 2020 - dl.acm.org
Multimodal Sentiment Analysis is an active area of research that leverages multimodal
signals for affective understanding of user-generated videos. The predominant approach …

Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph

AAB Zadeh, PP Liang, S Poria, E Cambria… - Proceedings of the …, 2018 - aclanthology.org
Analyzing human multimodal language is an emerging area of research in NLP. Intrinsically
this language is multimodal (heterogeneous), sequential and asynchronous; it consists of …

MultiBench: Multiscale benchmarks for multimodal representation learning

PP Liang, Y Lyu, X Fan, Z Wu, Y Cheng… - Advances in neural …, 2021 - pmc.ncbi.nlm.nih.gov
Learning multimodal representations involves integrating information from multiple
heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world …

Tensor fusion network for multimodal sentiment analysis

A Zadeh, M Chen, S Poria, E Cambria… - arXiv preprint arXiv …, 2017 - arxiv.org
Multimodal sentiment analysis is an increasingly popular research area, which extends the
conventional language-based definition of sentiment analysis to a multimodal setup where …

Social-IQ: A question answering benchmark for artificial social intelligence

A Zadeh, M Chan, PP Liang, E Tong… - Proceedings of the …, 2019 - openaccess.thecvf.com
As intelligent systems increasingly blend into our everyday life, artificial social intelligence
becomes a prominent area of research. Intelligent systems must be socially intelligent in …