
AUTHOR

Pasi Saari

Showing 10 related works from this author

Feature selection for classification of music according to expressed emotion

2009

Keywords: features; feature selection; overfitting; emotions; music; musical emotions; wrapper selection; cross-indexing; musical features; classification

Generalizability and Simplicity as Criteria in Feature Selection: Application to Mood Classification in Music

2011

Classification of musical audio signals according to expressed mood or emotion has evident applications to content-based music retrieval in large databases. Wrapper selection is a dimension reduction method that has been proposed for improving classification performance. However, the technique is prone to overfitting the training data, which decreases the generalizability of the obtained results. We claim that previous attempts to apply wrapper selection in the field of music information retrieval (MIR) have led to disputable conclusions about the methods used, owing to inadequate analysis frameworks, indications of overfitting, and biased results. This paper presents a framework bas…
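
As a rough illustration of why held-out evaluation matters for wrapper selection, the hedged sketch below runs forward selection with a naive Bayes wrapper on synthetic data and compares the optimistic inner cross-validation score against a held-out test score. The data, feature counts, and model are placeholders, not the paper's cross-indexing framework.

```python
# A minimal sketch of wrapper feature selection with a held-out evaluation,
# on synthetic data; not the paper's exact framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           random_state=0)
# The outer split is held out from selection, so it measures generalizability.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianNB()
selector = SequentialFeatureSelector(clf, n_features_to_select=8,
                                     direction="forward", cv=5)
selector.fit(X_tr, y_tr)

X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)
inner = cross_val_score(clf, X_tr_sel, y_tr, cv=5).mean()  # optimistic: data reused
outer = clf.fit(X_tr_sel, y_tr).score(X_te_sel, y_te)      # honest: unseen data
print(f"inner CV accuracy: {inner:.2f}  held-out accuracy: {outer:.2f}")
```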

Keywords: acoustics and ultrasonics; computer science; dimensionality reduction; emotion classification; feature selection; overfitting; machine learning; naive Bayes classifier; feature (machine learning); music information retrieval; generalizability theory; artificial intelligence; electrical and electronic engineering
Published in: IEEE Transactions on Audio, Speech, and Language Processing

Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags

2013

Social media services such as Last.fm provide crowd-sourced mood tags which are a rich but often noisy source of information. In contrast, editorial annotations from production music libraries are meant to be incisive in nature. We compare the efficiency of these two data sources in capturing semantic information on mood expressed by music. First, a semantic computing technique devised for mood-related tags in large datasets is applied to Last.fm and I Like Music (ILM) corpora separately (250,000 tracks each). The resulting semantic estimates are then correlated with listener ratings of arousal, valence and tension. High correlations (Spearman's rho) are found between the track positions in…
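
A minimal sketch of the evaluation step described above: correlating tag-derived mood estimates with listener ratings using Spearman's rho. The data here are invented for illustration.

```python
# Correlate hypothetical semantic mood estimates with listener ratings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
listener_valence = rng.normal(size=100)  # hypothetical listener ratings
semantic_estimate = listener_valence + rng.normal(scale=0.5, size=100)

rho, p = spearmanr(semantic_estimate, listener_valence)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```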

Keywords: computer science; behavioural sciences; World Wide Web; mood; semantic computing; social media; artificial intelligence; valence (psychology); Semantic Web; natural language processing
Published in: 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)

Personality and musical preference using social-tagging in excerpt-selection

2017

Music preference has been related to individual differences like social identity, cognitive style, and personality, but quantifying music preference can be a challenge. Self-report measures may be too presumptive of shared genre definitions between listeners, while listener ratings of expert-selected music may fail to reflect typical listeners’ genre boundaries. The current study aims to address this by using a social-tagging approach to select music for studying preference. In this study, 2,407 tracks were collected and subsampled from the Last.fm social-tagging service and the EchoNest platform based on attributes such as genre, tempo, and danceability. The set was further subsampled acco…
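
The sketch below illustrates attribute-based subsampling of a tagged track pool in the spirit of the procedure described; the column names, values, and one-per-genre rule are hypothetical stand-ins, not the study's actual criteria.

```python
# A hedged sketch of attribute-based subsampling from a tagged track pool;
# columns and the selection rule are hypothetical, not the study's criteria.
import pandas as pd

tracks = pd.DataFrame({
    "track_id": range(6),
    "genre": ["rock", "jazz", "rock", "folk", "jazz", "rock"],
    "tempo": [120, 95, 140, 80, 110, 128],
    "danceability": [0.7, 0.4, 0.8, 0.3, 0.5, 0.9],
})

# Keep the most danceable track in each genre as a stand-in balancing rule.
subsample = (tracks.sort_values("danceability", ascending=False)
                   .groupby("genre", as_index=False)
                   .head(1))
print(subsample)
```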

Keywords: music; social media; empathy; personality; social tagging; music preference; psychology; social psychology
Published in: Psychomusicology: Music, Mind, and Brain

Semantic Computing of Moods Based on Tags in Social Media of Music

2014

Social tags inherent in online music services such as Last.fm provide a rich source of information on musical moods. The abundance of social tags makes this data highly beneficial for developing techniques to manage and retrieve mood information, and enables study of the relationships between music content and mood representations with data substantially larger than that available for conventional emotion research. However, no systematic assessment has been done on the accuracy of social tags and derived semantic models at capturing mood information in music. We propose a novel technique called Affective Circumplex Transformation (ACT) for representing the moods of music tracks in an interp…
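
ACT itself is the authors' technique; the sketch below only illustrates the general idea of placing tracks on the valence-arousal circumplex from mood tags, using invented reference coordinates and a simple tag-weighted average.

```python
# Illustrative circumplex mapping of mood tags; not the ACT algorithm.
import numpy as np

# Hypothetical (valence, arousal) reference points for a few mood tags.
circumplex = {"happy": (0.8, 0.5), "sad": (-0.7, -0.4),
              "angry": (-0.6, 0.7), "calm": (0.4, -0.6)}

def track_position(tag_counts):
    """Weighted mean of tag coordinates, weights = tag frequencies."""
    tags, counts = zip(*tag_counts.items())
    coords = np.array([circumplex[t] for t in tags], dtype=float)
    w = np.asarray(counts, dtype=float)
    return (coords * w[:, None]).sum(axis=0) / w.sum()

# A track tagged mostly "happy" lands at positive valence, modest arousal.
print(track_position({"happy": 12, "calm": 5}))
```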

Keywords: vocabulary; computer science; music information retrieval; semantic analysis (machine learning); moods; affect (psychology); semantics; semantic computing; affective computing; social and information networks; probabilistic latent semantic analysis; social tags; multimedia; web mining; vector space model; artificial intelligence; genres; music; natural language processing; prediction; information retrieval
Published in: IEEE Transactions on Knowledge and Data Engineering

Genre-adaptive Semantic Computing and Audio-based Modelling for Music Mood Annotation

2016

This study investigates whether taking genre into account is beneficial for automatic music mood annotation in terms of the core affects valence, arousal, and tension, as well as several other mood scales. Novel techniques employing genre-adaptive semantic computing and audio-based modelling are proposed. A technique called ACTwg employs genre-adaptive semantic computing of mood-related social tags, whereas ACTwg-SLPwg combines semantic computing and audio-based modelling, both in a genre-adaptive manner. The proposed techniques are experimentally evaluated at predicting listener ratings related to a set of 600 popular music tracks spanning multiple genres. The results show that ACTwg outpe…
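
A hedged sketch of the genre-adaptive pattern: per-genre regressors whose predictions are blended by a track's genre weights. This illustrates the blending idea only and is not the ACTwg or SLPwg pipeline.

```python
# Blend per-genre mood predictions by genre membership; illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
genres = ["rock", "electronic"]
models = {}
for g in genres:  # train one audio-based mood regressor per genre
    X, y = rng.normal(size=(200, 10)), rng.normal(size=200)
    models[g] = Ridge().fit(X, y)

def predict_mood(x, genre_weights):
    """Blend per-genre predictions with the track's genre weights."""
    preds = np.array([models[g].predict(x[None, :])[0] for g in genres])
    w = np.array([genre_weights[g] for g in genres])
    return float(preds @ w / w.sum())

print(predict_mood(rng.normal(size=10), {"rock": 0.7, "electronic": 0.3}))
```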

Keywords: music information retrieval; genre-adaptive; annotation; popular music; semantic computing; valence (psychology); social tags; music genre; mood prediction; music mood; human-computer interaction; artificial intelligence; psychology; natural language processing

Dance to your own drum: Identification of musical genre and individual dancer from motion capture using machine learning

2020

Machine learning has been used to accurately classify musical genre using features derived from audio signals. Musical genre, as well as lower-level audio features of music, have also been shown to...
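
As a rough sketch of this classification setup, the snippet below feeds motion-capture-derived features (random stand-ins here) to a cross-validated classifier; the study's actual feature extraction and model choice may differ.

```python
# Cross-validated genre classification from stand-in movement features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(160, 24))        # e.g. per-trial movement statistics
genre = rng.integers(0, 8, size=160)  # 8 hypothetical genre labels

clf = RandomForestClassifier(n_estimators=200, random_state=1)
print(cross_val_score(clf, X, genre, cv=5).mean())  # chance level ~ 0.125
```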

Keywords: audio signal; visual arts and performing arts; dance; computer science; machine learning; motion capture; identification; embodied cognition; artificial intelligence; music
Published in: Journal of New Music Research

Decoding Musical Training from Dynamic Processing of Musical Features in the Brain

2018

Pattern recognition on neural activations from naturalistic music listening has been successful at predicting neural responses of listeners from musical features, and vice versa. Inter-subject differences in the decoding accuracies have arisen partly from musical training, which has widely recognized structural and functional effects on the brain. We propose and evaluate a decoding approach aimed at predicting the musicianship class of an individual listener from dynamic neural processing of musical features. Whole-brain functional magnetic resonance imaging (fMRI) data were acquired from musicians and nonmusicians while they listened to three musical pieces from different genres. Six mus…
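
The sketch below illustrates subject-wise decoding of musicianship class with leave-one-subject-out validation, so no subject contributes to both training and testing; the features are synthetic stand-ins for per-subject neural responses to musical features.

```python
# Leave-one-subject-out decoding of musicianship; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(7)
n_subj, n_feat = 30, 12
X = rng.normal(size=(n_subj, n_feat))  # stand-in feature-response weights
y = np.repeat([0, 1], n_subj // 2)     # 0 = nonmusician, 1 = musician
groups = np.arange(n_subj)             # one sample per subject here

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {acc.mean():.2f}")
```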

Keywords: learning; listening; music research; algorithms; training; speech recognition; feature (machine learning); active listening; tonality; learning algorithms; brain mapping; music psychology; brain; magnetic resonance imaging; neural decoding; acoustic stimulation; pattern recognition (psychology); auditory perception; functional magnetic resonance imaging; psychology; timbre; music

Music mood annotation using semantic computing and machine learning

2015

Keywords: modelling; tags; music emotion recognition; online communities; music; annotation; social media; music mood annotation; feature selection; machine learning; emotions; editorial tags; semantic computing; audio feature extraction; digital music; genre-adaptive; social tags; computational methods; circumplex model

What makes music memorable? Relationships between acoustic musical features and music-evoked emotions and memories in older adults

2021

Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories and how musical features (derived with MIR) can predict them both. Methods: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts comprising folk songs and popular…
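
A small sketch of the analysis pattern implied here: regressing listener ratings on MIR-derived musical features with ordinary least squares. The feature names and data are placeholders, not the study's predictors.

```python
# OLS regression of ratings on stand-in musical features.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 140                             # one row per song excerpt
features = rng.normal(size=(n, 3))  # e.g. tonal, rhythmic, loudness stand-ins
ratings = 0.5 * features[:, 0] + rng.normal(scale=0.8, size=n)

X = sm.add_constant(features)       # intercept + features
model = sm.OLS(ratings, X).fit()
print(model.params, model.rsquared)
```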

Keywords: music perception; music psychology; entropy; emotions; regression analysis; memory; cognition; learning and memory; older adults; music cognition; memories; episodic memory; sensory perception; acoustics; acoustic stimulation; mental recall; cognitive psychology; cognitive science; perception; bioacoustics; statistical methods; neuroscience; music
Publisher: Public Library of Science