Deep cross-modal correlation learning for audio and lyrics in music retrieval

Y Yu, S Tang, F Raposo, L Chen - ACM Transactions on Multimedia …, 2019 - dl.acm.org
Deep cross-modal learning has demonstrated excellent performance in cross-modal
multimedia retrieval, with the aim of learning joint representations between different …

Support the underground: characteristics of beyond-mainstream music listeners

D Kowald, P Muellner, E Zangerle, C Bauer… - EPJ Data …, 2021 - epjds.epj.org
Music recommender systems have become an integral part of music streaming services
such as Spotify and Last.fm to assist users navigating the extensive music collections …

Music emotion recognition: From content- to context-based models

M Barthet, G Fazekas, M Sandler - From Sounds to Music and Emotions …, 2013 - Springer
The striking ability of music to elicit emotions assures its prominent status in human culture
and everyday life. Music is often enjoyed and sought for its ability to induce or convey …

Multi-modal music emotion recognition: A new dataset, methodology and comparative analysis

RES Panda, R Malheiro, B Rocha, AP Oliveira… - … on computer music …, 2013 - baes.uc.pt
We propose a multi-modal approach to the music emotion recognition (MER) problem,
combining information from distinct sources, namely audio, MIDI and lyrics. We introduce a …

A framework for evaluating multimodal music mood classification

X Hu, K Choi, JS Downie - Journal of the Association for …, 2017 - Wiley Online Library
This research proposes a framework for music mood classification that uses multiple and
complementary information sources, namely, music audio, lyric text, and social tags …

User models for culture-aware music recommendation: fusing acoustic and cultural cues

E Zangerle, M Pichl, M Schedl - Transactions of the …, 2020 - transactions.ismir.net
Integrating information about the listener's cultural background when building music
recommender systems has recently been identified as a means to improve recommendation …

The Moodo dataset: Integrating user context with emotional and color perception of music for affective music information retrieval

M Pesek, G Strle, A Kavčič, M Marolt - Journal of New Music …, 2017 - Taylor & Francis
This paper presents a new multimodal dataset Moodo that can aid the development of
affective music information retrieval systems. Moodo's main novelties are a multimodal …

Music emotion recognition with standard and melodic audio features

R Panda, B Rocha, RP Paiva - Applied Artificial Intelligence, 2015 - Taylor & Francis
We propose a novel approach to music emotion recognition by combining standard and
melodic features extracted directly from audio. To this end, a new audio dataset organized …

Using machine learning analysis to interpret the relationship between music emotion and lyric features

L Xu, Z Sun, X Wen, Z Huang, C Chao, L Xu - PeerJ Computer Science, 2021 - peerj.com
Melody and lyrics, reflecting two unique human cognitive abilities, are usually combined in
music to convey emotions. Although psychologists and computer scientists have made …

Cross-modal interaction via reinforcement feedback for audio-lyrics retrieval

D Zhou, F Lei, L Li, Y Zhou… - IEEE/ACM Transactions on …, 2024 - ieeexplore.ieee.org
The task of retrieving audio content relevant to lyric queries and vice versa plays a critical
role in music-oriented applications. In this process, robust feature representations have to be …