Deep cross-modal correlation learning for audio and lyrics in music retrieval
Deep cross-modal learning has successfully demonstrated excellent performance in cross-
modal multimedia retrieval, with the aim of learning joint representations between different …
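The joint-representation idea above can be made concrete with a small sketch: two modality-specific encoders project audio and lyric features into a shared space and are trained with a contrastive objective. The feature dimensions, layer sizes, and loss below are illustrative assumptions, not the architecture of the cited paper.

```python
# Minimal sketch of cross-modal (audio/lyrics) joint-embedding learning.
# Feature dimensions, layer sizes, and the loss are illustrative assumptions,
# not the architecture from the cited paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Projects one modality's features into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

audio_enc = ModalityEncoder(in_dim=40)    # e.g. 40-d audio features (assumed)
lyric_enc = ModalityEncoder(in_dim=300)   # e.g. 300-d lyric embeddings (assumed)

def contrastive_loss(a, l, temperature: float = 0.07):
    """Symmetric InfoNCE-style loss: matching audio/lyrics pairs score highest."""
    logits = a @ l.t() / temperature
    targets = torch.arange(a.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# One toy training step on random "features".
audio_feats = torch.randn(8, 40)
lyric_feats = torch.randn(8, 300)
opt = torch.optim.Adam(list(audio_enc.parameters()) + list(lyric_enc.parameters()), lr=1e-3)
loss = contrastive_loss(audio_enc(audio_feats), lyric_enc(lyric_feats))
loss.backward()
opt.step()
```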
Support the underground: characteristics of beyond-mainstream music listeners
Music recommender systems have become an integral part of music streaming services
such as Spotify and Last.fm to assist users in navigating the extensive music collections …
Music emotion recognition: From content- to context-based models
The striking ability of music to elicit emotions assures its prominent status in human culture
and everyday life. Music is often enjoyed and sought for its ability to induce or convey …
Multi-modal music emotion recognition: A new dataset, methodology and comparative analysis
We propose a multi-modal approach to the music emotion recognition (MER) problem,
combining information from distinct sources, namely audio, MIDI and lyrics. We introduce a …
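A minimal baseline in the spirit of this multi-modal setup is feature-level fusion: per-modality feature vectors for audio, MIDI, and lyrics are concatenated and passed to a single classifier. The feature sizes, random data, and four emotion classes below are placeholders, not the dataset or method introduced in the paper.

```python
# Feature-level fusion baseline for multi-modal music emotion recognition:
# per-modality feature vectors are concatenated and fed to one classifier.
# Feature sizes and emotion labels are placeholders, not the cited dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_songs = 200
audio = rng.normal(size=(n_songs, 40))    # e.g. spectral/MFCC statistics (assumed)
midi  = rng.normal(size=(n_songs, 20))    # e.g. note-density/tempo descriptors (assumed)
lyric = rng.normal(size=(n_songs, 100))   # e.g. TF-IDF or embedding features (assumed)
labels = rng.integers(0, 4, size=n_songs) # 4 emotion quadrants (e.g. Russell's model)

X = np.hstack([audio, midi, lyric])       # fuse modalities by concatenation
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print("fused-modality accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```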
A framework for evaluating multimodal music mood classification
This research proposes a framework for music mood classification that uses multiple and
complementary information sources, namely, music audio, lyric text, and social tags …
User models for culture-aware music recommendation: fusing acoustic and cultural cues
Integrating information about the listener's cultural background when building music
recommender systems has recently been identified as a means to improve recommendation …
The Moodo dataset: Integrating user context with emotional and color perception of music for affective music information retrieval
This paper presents a new multimodal dataset Moodo that can aid the development of
affective music information retrieval systems. Moodo's main novelties are a multimodal …
Music emotion recognition with standard and melodic audio features
We propose a novel approach to music emotion recognition by combining standard and
melodic features extracted directly from audio. To this end, a new audio dataset organized …
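For readers unfamiliar with what "standard" audio features look like in practice, the sketch below extracts a few common descriptors with librosa and pools them into one fixed-length vector per track. The chosen feature set and mean/std pooling are assumptions, and the melodic features discussed in the paper are not reproduced here.

```python
# Extracting a small set of standard audio features and pooling them into a
# fixed-length vector per track. The feature choice and mean/std pooling are
# illustrative assumptions; the paper's melodic features are not reproduced.
import numpy as np
import librosa

def standard_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # brightness
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)            # harmony
    feats = np.vstack([mfcc, centroid, chroma])
    # Summarize each frame-level feature by its mean and standard deviation.
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

# vector = standard_features("some_track.wav")  # hypothetical file path
```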
Using machine learning analysis to interpret the relationship between music emotion and lyric features
Melody and lyrics, reflecting two unique human cognitive abilities, are usually combined in
music to convey emotions. Although psychologists and computer scientists have made …
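A simple, interpretable setup of the kind this line of work relies on pairs TF-IDF lyric features with logistic regression and inspects the learned coefficients. The toy lyrics and labels below are invented placeholders, not data from the study.

```python
# Interpretable baseline relating lyric text to emotion labels:
# TF-IDF features + logistic regression, inspecting the largest coefficients.
# The tiny corpus and labels are invented placeholders for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lyrics = [
    "sunshine dancing all night long",
    "tears falling in the lonely rain",
    "we rise together hearts on fire",
    "empty rooms and broken promises",
]
labels = ["happy", "sad", "happy", "sad"]

vec = TfidfVectorizer()
X = vec.fit_transform(lyrics)
clf = LogisticRegression().fit(X, labels)

# Terms with the most positive weights for the positive class ("sad" here).
terms = np.array(vec.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-3:]
print("terms most indicative of", clf.classes_[1], ":", terms[top])
```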
Cross-modal interaction via reinforcement feedback for audio-lyrics retrieval
The task of retrieving audio content relevant to lyric queries and vice versa plays a critical
role in music-oriented applications. In this process, robust feature representations have to be …
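Once such representations exist, the retrieval step itself is typically a similarity ranking in a shared space. The sketch below shows that ranking with cosine similarity, using random vectors in place of the embeddings a trained model would produce; it is not the reinforcement-feedback mechanism described in the paper.

```python
# Ranking step for lyrics-to-audio retrieval in a shared embedding space:
# cosine similarity between a lyric query embedding and audio embeddings.
# Random vectors stand in for the output of a trained cross-modal model;
# this is not the reinforcement-feedback method of the cited paper.
import numpy as np

rng = np.random.default_rng(1)
audio_db = rng.normal(size=(1000, 128))     # embeddings for 1000 tracks (assumed)
query = rng.normal(size=128)                # embedding of one lyric query (assumed)

def cosine_rank(query, db, k=5):
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm
    top = np.argsort(scores)[::-1][:k]      # indices of the k best-matching tracks
    return top, scores[top]

indices, scores = cosine_rank(query, audio_db)
print("top-5 track indices:", indices)
```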