Search results for "Auditory"
Showing 10 of 568 documents
The sound of music: differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm.
2011
Abstract: Musicians’ skills in auditory processing depend highly on instrument, performance practice, and level of expertise. Yet it is not known whether the style/genre of music might shape auditory processing in the brains of musicians. Here, we aimed to tackle the role of musical style/genre in modulating neural and behavioral responses to changes in musical features. Using a novel, fast, and musical-sounding multi-feature paradigm, we measured the mismatch negativity (MMN), a pre-attentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, rock/pop) and in non-musicians. Jazz and classical musicians sco…
Infant information processing and family history of specific language impairment: converging evidence for RAP deficits from two paradigms
2007
An infant's ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing abilities [RAP]) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this study is to determine whether performance on infant information processing measures designed to tap RAP and global processing skills differs as a function of family history of specific language impairment (SLI) and/or the particular demand characteristics of the paradigm used. Seventeen 6- to 9-month-old infants from families with a history of specific l…
Capturing the musical brain with Lasso: Dynamic decoding of musical features from fMRI data.
2013
We investigated neural correlates of musical feature processing with a decoding approach. To this end, we used a method that combines computational extraction of musical features with regularized multiple regression (LASSO). Optimal model parameters were determined by maximizing the decoding accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music. The dependent variables comprised musical feature time series that were computationally extracted from the…
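The decoding pipeline described above (regularized regression with a leave-one-out cross-validation scheme to pick the regularization strength) can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the study's fMRI data or feature set; the array shapes and alpha grid are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, LeaveOneOut

rng = np.random.default_rng(0)

# Synthetic stand-ins: 20 observations of 50 "voxel" predictors (X) and
# one musical-feature time series (y). In the study, X would come from
# fMRI volumes and y from descriptors extracted from the audio.
X = rng.normal(size=(20, 50))
true_w = np.zeros(50)
true_w[:5] = 1.0                      # sparse ground truth, as LASSO assumes
y = X @ true_w + 0.1 * rng.normal(size=20)

# Choose the LASSO regularization strength by leave-one-out CV,
# analogous to maximizing decoding accuracy on the left-out sample.
search = GridSearchCV(
    Lasso(max_iter=10_000),
    {"alpha": [0.01, 0.1, 1.0]},
    cv=LeaveOneOut(),
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_["alpha"])
```

Note that with one left-out sample per fold, per-fold correlation scores are undefined, so mean squared error is the natural CV criterion here.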
Musical familiarity in congenital amusia: Evidence from a gating paradigm
2013
Congenital amusia has been described as a lifelong deficit of music perception and production, notably including amusic individuals' difficulty recognizing a familiar tune without the aid of lyrics. The present study aimed to evaluate whether amusic individuals might have acquired long-term knowledge of familiar music, and to test for the minimal amount of acoustic information necessary to access this knowledge (if any) in amusia. Segments of familiar and unfamiliar instrumental musical pieces were presented with increasing duration (250, 500, 1000 msec etc.), and participants provided familiarity judgments for each segment. Results showed that amusic individuals succeeded in differentia…
Both contextual regularity and selective attention affect the reduction of precision‐weighted prediction errors but in distinct manners
2020
The predictive coding model of perception postulates that the primary objective of the brain is to infer the causes of sensory inputs by reducing prediction errors (i.e., the discrepancy between expected and actual information). Moreover, prediction errors are weighted by their precision (i.e., inverse variance), which quantifies the degree of certainty about the variables. There is accumulating evidence that the reduction of precision-weighted prediction errors can be affected by contextual regularity (as an external factor) and selective attention (as an internal factor). However, it is unclear whether the two factors function together or separately. Here we used electroencephalography (EEG) …
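The precision weighting defined in this abstract (error scaled by inverse variance) can be made concrete with a small numerical sketch; the function name and example values are illustrative, not from the study.

```python
# Precision-weighted prediction error: the discrepancy between expected
# and actual input, scaled by precision (the inverse of the variance).
def weighted_prediction_error(observed, predicted, variance):
    precision = 1.0 / variance        # higher certainty -> larger weight
    return precision * (observed - predicted)

# The same raw error of 0.5 counts more when the signal is more
# reliable (low variance) than when it is noisy (high variance).
print(weighted_prediction_error(1.5, 1.0, 0.25))  # -> 2.0
print(weighted_prediction_error(1.5, 1.0, 1.0))   # -> 0.5
```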
From Vivaldi to Beatles and back: predicting lateralized brain responses to music.
2013
We aimed to predict the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations, which were then tested in a cross-validation setting in order to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised …
On application of kernel PCA for generating stimulus features for fMRI during continuous music listening
2017
Abstract: Background: There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous-listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of the acoustic descriptors computationally extracted from the stimulus audio. New method: fMRI data from naturalistic music listening experi…
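The new method this abstract introduces, kernel PCA applied to computationally extracted acoustic descriptors to produce stimulus features, can be sketched as below. The descriptor matrix, kernel choice, and component count are placeholder assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)

# Stand-in for a matrix of acoustic descriptors: one row per fMRI time
# point, one column per computationally extracted descriptor.
descriptors = rng.normal(size=(200, 25))

# Kernel PCA projects the descriptors nonlinearly onto a few components
# that can serve as stimulus features (regressors) for the fMRI analysis,
# in contrast to features built as linear combinations of descriptors.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.05)
stimulus_features = kpca.fit_transform(descriptors)
print(stimulus_features.shape)  # -> (200, 4)
```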
Path Following in Non-Visual Conditions.
2018
Path-following tasks have been investigated mostly under visual conditions, that is, when subjects are able to see both the path and the tool, or limb, used for navigation. Moreover, only basic path shapes are usually adopted. In the present experiment, participants had to rely exclusively on continuous, non-speech, and ecological auditory and vibrotactile cues to follow a path on a flat surface. Two different, asymmetric path shapes were tested. Participants navigated by moving their index finger over a surface that sensed position and force. Results show that the different non-visual feedback modes did not affect the task's accuracy, yet they affected its speed, with vibrotactile feedback causin…
Identifying musical pieces from fMRI data using encoding and decoding models.
2018
Abstract: Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a poin…
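The two-stage approach described here (an encoding model mapping musical features to voxel responses, then identification of a novel piece by matching predicted against measured responses) can be sketched with synthetic data. Everything below, including the use of ridge regression and Pearson correlation as the matching score, is an illustrative assumption rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_pieces, n_feat, n_vox = 300, 5, 6, 40

# Stage 1 (encoding): learn a map from musical features to voxel responses.
F_train = rng.normal(size=(n_train, n_feat))
W = rng.normal(size=(n_feat, n_vox))               # hidden "true" mapping
V_train = F_train @ W + 0.1 * rng.normal(size=(n_train, n_vox))
encoder = Ridge(alpha=1.0).fit(F_train, V_train)

# Stage 2 (decoding/identification): given a measured response to a novel
# piece, pick the candidate whose predicted response correlates best.
F_candidates = rng.normal(size=(n_pieces, n_feat))  # features of candidate pieces
true_idx = 3
v_measured = F_candidates[true_idx] @ W + 0.1 * rng.normal(size=n_vox)

predicted = encoder.predict(F_candidates)
corrs = [np.corrcoef(p, v_measured)[0, 1] for p in predicted]
identified = int(np.argmax(corrs))
print(identified)
```

With enough training data relative to the noise, the identified index should match the true piece; in the paper's framing, accuracy would also grow with the number of time points and voxels available.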
Where is the beat in that note? Effects of attack, duration, and frequency on the perceived timing of musical and quasi-musical sounds
2019
The perceptual center (P-center) of a sound is typically understood as the specific moment at which it is perceived to occur. Using matched sets of real and artificial musical sounds as stimuli, we probed the influence of attack (rise time), duration, and frequency (center frequency) on perceived P-center location and P-center variability. Two different methods were used to determine the P-centers: clicks aligned in phase with the target sounds via the method of adjustment, and tapping in synchrony with the target sounds. Attack and duration were the primary cues for P-center location and P-center variability; P-center variability was found to be a useful measure of P-center shape. Consiste…