Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects
S Zhang, Y Yang, C Chen, X Zhang, Q Leng… - Expert Systems with …, 2024 - Elsevier
Emotion recognition has recently attracted extensive interest due to its significant
applications to human–computer interaction. The expression of human emotion depends on …
Large language models meet text-centric multimodal sentiment analysis: A survey
Compared to traditional sentiment analysis, which only considers text, multimodal sentiment
analysis needs to consider emotional signals from multimodal sources simultaneously and …
NusaCrowd: Open source initiative for Indonesian NLP resources
We present NusaCrowd, a collaborative initiative to collect and unify existing resources for
Indonesian languages, including opening access to previously non-public resources …
Negative object presence evaluation (nope) to measure object hallucination in vision-language models
Object hallucination poses a significant challenge in vision-language (VL) models, often
leading to the generation of nonsensical or unfaithful responses with non-existent objects …
One country, 700+ languages: NLP challenges for underrepresented languages and dialects in Indonesia
NLP research is impeded by a lack of resources and awareness of the challenges presented
by underrepresented languages and dialects. Focusing on the languages spoken in …
Multimodal emotion detection via attention-based fusion of extracted facial and speech features
Methods for detecting emotions that employ many modalities at the same time have been
found to be more accurate and resilient than those that rely on a single sense. This is due to …
A facial expression-aware multimodal multi-task learning framework for emotion recognition in multi-party conversations
Multimodal Emotion Recognition in Multiparty Conversations (MERMC) has
recently attracted considerable attention. Due to the complexity of visual scenes in multi …
M-SENA: An integrated platform for multimodal sentiment analysis
M-SENA is an open-sourced platform for Multimodal Sentiment Analysis. It aims to facilitate
advanced research by providing flexible toolkits, reliable benchmarks, and intuitive …
Multi-label multimodal emotion recognition with transformer-based fusion and emotion-level representation learning
Emotion recognition has been an active research area for a long time. Recently, multimodal
emotion recognition from video data has grown in importance with the explosion of video …
Vision guided generative pre-trained language models for multimodal abstractive summarization
Multimodal abstractive summarization (MAS) models that summarize videos (vision
modality) and their corresponding transcripts (text modality) are able to extract the essential …