
RESEARCH PRODUCT

Multi-Scale Modelling of Segmentation: Effect of Music Training and Experimental Task

Martin Hartmann, Olivier Lartillot, Petri Toiviainen

subject

segmentation modelling; music segmentation; segmentation task; music training; musical features

description

While listening to music, people often unwittingly break musical pieces down into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effect of experimental task (i.e., real-time vs. annotated segmentation) nor that of musicianship on boundary perception is clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations of 9 musical pieces by musicians and non-musicians; in a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for 6 examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary indication density, although this might be contingent on the stimuli and other factors. In line with other studies, no musicianship effects were found: our results showed high agreement between groups and similar inter-subject correlations. Also consistent with previous work, time scales between 1 and 2 seconds were optimal for combining boundary indications. In addition, we found effects of task on the number of indications, and a time lag between tasks that depended on beat length. Moreover, the optimal time scale for combining responses increased as pulse clarity or event density decreased. Implications for future segmentation studies are raised concerning the selection of time scales for modelling boundary density and time alignment between models.

peerReviewed
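The abstract describes combining listeners' boundary indications via kernel density estimation, with bandwidths in the 1-2 second range working best. The sketch below illustrates the general idea only, under assumptions of our own: the boundary times, the Gaussian kernel, and the 1.5 s bandwidth are all hypothetical, not the study's actual data or parameters.

```python
import numpy as np

def boundary_density(indications, grid, bandwidth):
    """Gaussian-kernel density of boundary indications evaluated on a time grid.

    Each indication (in seconds) contributes a Gaussian bump of the given
    bandwidth (kernel standard deviation, in seconds); the sum is normalised
    so the density integrates to 1 over the real line.
    """
    z = (grid[:, None] - indications[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels.sum(axis=1) / len(indications)

# Hypothetical boundary indications (seconds) pooled across participants:
# four clustered near 10.5 s, two near 30 s, two near 55 s.
indications = np.array([10.1, 10.4, 10.6, 10.8, 30.2, 30.5, 55.0, 55.3])

grid = np.linspace(0.0, 60.0, 601)          # 0.1 s resolution
density = boundary_density(indications, grid, bandwidth=1.5)
peak_time = grid[np.argmax(density)]        # densest cluster wins
```

Varying the bandwidth gives the multi-scale aspect: small bandwidths keep nearby indications as separate peaks, while larger ones merge them, which is consistent with the reported finding that coarser time scales suit excerpts with low pulse clarity or event density.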

http://urn.fi/URN:NBN:fi:jyu-201612195166