Author: Pasi Saari
Feature selection for classification of music according to expressed emotion
Generalizability and Simplicity as Criteria in Feature Selection: Application to Mood Classification in Music
Classification of musical audio signals according to expressed mood or emotion has evident applications to content-based music retrieval in large databases. Wrapper selection is a dimension reduction method that has been proposed for improving classification performance. However, the technique is prone to overfitting the training data, which decreases the generalizability of the obtained results. We argue that previous attempts to apply wrapper selection in the field of music information retrieval (MIR) have led to questionable conclusions about the methods used, owing to inadequate analysis frameworks that produced overfitted and biased results. This paper presents a framework bas…
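A minimal sketch of the core idea, not the paper's actual framework: wrapper feature selection evaluated with nested cross-validation, so features are re-selected inside every outer training fold and the reported score is not biased by having seen the test data during selection. Dataset, classifier, and fold counts are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Toy stand-in for audio features extracted from music clips.
X, y = make_classification(n_samples=300, n_features=40,
                           n_informative=8, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
# Wrapper selection: greedily add features by inner-CV accuracy.
wrapper = SequentialFeatureSelector(knn, n_features_to_select=8,
                                    direction="forward", cv=5)
pipe = Pipeline([("select", wrapper), ("clf", knn)])

# Outer 5-fold CV: selection happens inside each training fold only,
# so this estimate reflects generalization rather than selection bias.
scores = cross_val_score(pipe, X, y, cv=5)
print(f"generalization estimate: {scores.mean():.3f} +/- {scores.std():.3f}")
```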
Music mood annotation using semantic computing and machine learning
What makes music memorable? Relationships between acoustic musical features and music-evoked emotions and memories in older adults
Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories, and how musical features (derived with MIR) can predict them both. Methods: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts comprising folk songs and popular…
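A hedged sketch of the kind of analysis this abstract describes: predicting listener ratings from MIR-derived features with a cross-validated regression model. The file name, feature columns, and rating column are illustrative assumptions, not the study's variables.

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

df = pd.read_csv("excerpt_features.csv")                 # hypothetical table
features = ["key_clarity", "pulse_clarity", "rms_mean"]  # assumed MIR columns
X, y = df[features], df["emotion_rating"]                # assumed rating column

# Regularized linear model, scored with 10-fold cross-validation.
r2 = cross_val_score(Ridge(alpha=1.0), X, y, cv=10, scoring="r2")
print(f"mean cross-validated R^2: {r2.mean():.2f}")
```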
Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags
Social media services such as Last.fm provide crowd-sourced mood tags, which are a rich but often noisy source of information. In contrast, editorial annotations from production music libraries are meant to be incisive. We compare how efficiently these two data sources capture semantic information on the mood expressed by music. First, a semantic computing technique devised for mood-related tags in large datasets is applied separately to the Last.fm and I Like Music (ILM) corpora (250,000 tracks each). The resulting semantic estimates are then correlated with listener ratings of arousal, valence, and tension. High correlations (Spearman's rho) are found between the track positions in…
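A minimal sketch of the comparison step named above: correlating tag-derived semantic mood estimates with listener ratings using Spearman's rho. The arrays are invented stand-ins for the paper's data.

```python
import numpy as np
from scipy.stats import spearmanr

semantic_valence = np.array([0.8, -0.2, 0.5, -0.7, 0.1])  # from the tag model
rated_valence = np.array([7.1, 3.4, 6.2, 2.0, 4.8])       # listener ratings

# Rank correlation is robust to the two scales being incommensurate.
rho, p = spearmanr(semantic_valence, rated_valence)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```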
Personality and musical preference using social-tagging in excerpt-selection
Music preference has been related to individual differences like social identity, cognitive style, and personality, but quantifying music preference can be a challenge. Self-report measures may be too presumptive of shared genre definitions between listeners, while listener ratings of expert-selected music may fail to reflect typical listeners’ genre boundaries. The current study aims to address this by using a social-tagging approach to select music for studying preference. In this study, 2,407 tracks were collected and subsampled from the Last.fm social-tagging service and the EchoNest platform based on attributes such as genre, tempo, and danceability. The set was further subsampled acco…
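A hedged sketch of the attribute-based subsampling described above. The dataframe columns mirror the attributes named in the abstract (genre, tempo, danceability), but the file name, thresholds, and per-genre sample size are invented.

```python
import pandas as pd

tracks = pd.read_csv("lastfm_echonest_tracks.csv")   # hypothetical track dump

# Restrict to a mid-tempo, danceable subset so excerpts are comparable,
# then draw an equal-sized random sample per genre.
subset = tracks[tracks.tempo.between(90, 140) & (tracks.danceability > 0.5)]
sample = (subset.groupby("genre", group_keys=False)
                .apply(lambda g: g.sample(n=min(len(g), 50), random_state=1)))
print(sample.genre.value_counts())
```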
Semantic Computing of Moods Based on Tags in Social Media of Music
Social tags inherent in online music services such as Last.fm provide a rich source of information on musical moods. The abundance of social tags makes this data highly beneficial for developing techniques to manage and retrieve mood information, and enables study of the relationships between music content and mood representations with data substantially larger than that available for conventional emotion research. However, no systematic assessment has been made of how accurately social tags and derived semantic models capture mood information in music. We propose a novel technique called Affective Circumplex Transformation (ACT) for representing the moods of music tracks in an interp…
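A loose sketch of the general idea behind a circumplex transformation as summarized above, not the paper's ACT algorithm: embed mood tags in a low-dimensional space, then align that space to reference valence-arousal positions of known mood terms. The SVD rank, reference coordinates, and toy data are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
tag_track = rng.random((4, 200))     # toy tag-by-track occurrence matrix

# Rank-2 SVD embedding of the tags (an LSA-style step).
U, s, _ = np.linalg.svd(tag_track, full_matrices=False)
tag_xy = U[:, :2] * s[:2]

# Assumed circumplex reference points (valence, arousal) for the same
# four mood terms: happy, sad, angry, calm.
reference = np.array([[0.9, 0.5], [-0.8, -0.4], [-0.7, 0.8], [0.6, -0.7]])

# Procrustes rotates and scales the embedding onto the reference layout.
ref_std, tags_aligned, disparity = procrustes(reference, tag_xy)
print(f"alignment disparity: {disparity:.3f}")
```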
Genre-adaptive Semantic Computing and Audio-based Modelling for Music Mood Annotation
This study investigates whether taking genre into account is beneficial for automatic music mood annotation in terms of the core affects valence, arousal, and tension, as well as several other mood scales. Novel techniques employing genre-adaptive semantic computing and audio-based modelling are proposed. A technique called ACTwg employs genre-adaptive semantic computing of mood-related social tags, whereas ACTwg-SLPwg combines semantic computing and audio-based modelling, both in a genre-adaptive manner. The proposed techniques are experimentally evaluated on predicting listener ratings for a set of 600 popular music tracks spanning multiple genres. The results show that ACTwg outpe…
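A hedged sketch of genre-adaptive prediction in the spirit described above: fit one mood-regression model per genre and route each track to its own genre's model. This is a deliberate simplification, not a reimplementation of ACTwg or SLPwg; all data and model choices are invented.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_genre_models(X, y, genres):
    """Train a separate valence model for each genre label."""
    return {g: Ridge().fit(X[genres == g], y[genres == g])
            for g in np.unique(genres)}

def predict_genre_adaptive(models, X, genres):
    """Predict each track with the model of its own genre."""
    return np.array([models[g].predict(x[None, :])[0]
                     for x, g in zip(X, genres)])

rng = np.random.default_rng(0)
X = rng.random((60, 5))                       # toy audio features
genres = rng.choice(["rock", "jazz", "folk"], 60)
y = X @ rng.random(5) + (genres == "jazz")    # toy genre-shifted ratings

models = fit_genre_models(X, y, genres)
print(predict_genre_adaptive(models, X, genres)[:5])
```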
Dance to your own drum: Identification of musical genre and individual dancer from motion capture using machine learning
Machine learning has been used to accurately classify musical genre using features derived from audio signals. Musical genre, as well as lower-level audio features of music, has also been shown to...
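A minimal sketch (not the paper's pipeline) of the audio-based genre classification the abstract refers to: summarize each clip with MFCC statistics and train a standard classifier. The file paths and two-clip dataset are hypothetical placeholders.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    """Mean and std of 13 MFCCs over a 30-second clip."""
    y, sr = librosa.load(path, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled clip list: [(path, genre), ...]
dataset = [("clips/rock01.wav", "rock"), ("clips/jazz01.wav", "jazz")]
X = np.stack([clip_features(p) for p, _ in dataset])
labels = [g for _, g in dataset]

clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X[:1]))
```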
Decoding Musical Training from Dynamic Processing of Musical Features in the Brain
Pattern recognition on neural activations from naturalistic music listening has been successful at predicting neural responses of listeners from musical features, and vice versa. Inter-subject differences in decoding accuracy have arisen partly from musical training, which has widely recognized structural and functional effects on the brain. We propose and evaluate a decoding approach aimed at predicting the musicianship class of an individual listener from dynamic neural processing of musical features. Whole-brain functional magnetic resonance imaging (fMRI) data were acquired from musicians and nonmusicians while they listened to three musical pieces from different genres. Six mus…
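A hedged sketch of the decoding setup described: classify listeners as musicians or nonmusicians from per-subject summaries of how their brain responses track musical features. The synthetic arrays stand in for the fMRI-derived features; the classifier and fold count are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 36
# One row per listener: e.g. correlations between regional fMRI time
# courses and six musical-feature time series (values invented).
X = rng.normal(size=(n_subjects, 6))
is_musician = rng.integers(0, 2, n_subjects)   # 0 = nonmusician, 1 = musician
X[is_musician == 1] += 0.8                     # toy group difference

# Cross-validated linear SVM decoding accuracy across listeners.
acc = cross_val_score(SVC(kernel="linear"), X, is_musician, cv=6)
print(f"decoding accuracy: {acc.mean():.2f}")
```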