
RESEARCH PRODUCT

Identifying musical pieces from fMRI data using encoding and decoding models.

Petri Toiviainen, Annerose Engel, Mauricio Cagy, Rodrigo Basilio, Sebastian Hoefle, Jorge Moll, Vinoo Alluri

subject

Adult; Male; Female; Young Adult; Humans; Healthy volunteers; Computer science; Speech recognition; Models, Neurological; Music; Listening; Key (music); Duration (music); Stimulus (physiology); Acoustic stimulation; Auditory cortex; Cerebral cortex; Neural encoding; Neural decoding; Encoding (memory); Decoding methods; Machine learning; Spatio-temporal analysis; Magnetic resonance imaging; Neurosciences; Experimental psychology; Psychology and cognitive sciences; Multidisciplinary

description

Abstract: Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
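The two-stage approach described in the abstract can be sketched in principle as follows: first fit a regularized linear encoding model from stimulus features to voxel responses, then identify a held-out piece by comparing predicted and observed responses. This is a minimal illustrative sketch, not the authors' actual pipeline; the function names, the ridge formulation, and the correlation-based identification rule are assumptions, and the Shannon entropy helper simply bins a one-dimensional feature time course.

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression from stimulus features to voxel responses.

    features: array of shape (time, n_features)
    responses: array of shape (time, n_voxels)
    Returns weights of shape (n_features, n_voxels).
    """
    n = features.shape[1]
    # Closed-form ridge solution: (X'X + aI)^-1 X'Y
    return np.linalg.solve(features.T @ features + alpha * np.eye(n),
                           features.T @ responses)

def identify_piece(weights, candidate_features, observed_response):
    """Pick the candidate piece whose predicted response correlates
    best with the observed response (flattened over time and voxels)."""
    obs = observed_response.ravel()
    corrs = [np.corrcoef((feats @ weights).ravel(), obs)[0, 1]
             for feats in candidate_features]
    return int(np.argmax(corrs)), corrs

def shannon_entropy(signal, n_bins=16):
    """Shannon entropy (bits) of a binned 1-D feature time course."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

With synthetic data, identification succeeds when the encoding model captures the true feature-to-voxel mapping; in the study, accuracy depended on the duration and spatial extent of the data entering this comparison.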

doi:10.1038/s41598-018-20732-3
https://pubmed.ncbi.nlm.nih.gov/29396524