Search results for "SPEECH PERCEPTION"

Showing 10 of 136 documents

ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces

2009

Abstract: In this paper, a new method for ERP denoising in multichannel EEG data is discussed. Denoising is performed by separating the ERP and noise subspaces of the multidimensional EEG data with a linear transformation, followed by dimension reduction in which the noise components are discarded during the inverse transformation. The separation matrix is found under the assumption that ERP sources are deterministic across all repetitions of the same stimulus type within the experiment, while the other noise sources do not obey this determinacy property. A detailed derivation of the technique is given, together with an analysis of the results of its application to a real high-density EEG data set. The inter…
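The subspace contrast described in this abstract can be sketched in a denoising-source-separation style: contrast the covariance of the raw data with the covariance of the trial average (the "deterministic" part), keep the components where the average dominates, and project back. This is a generic illustration under simplifying assumptions (isotropic noise, a toy data set), not the paper's exact algorithm; all names and the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def erp_denoise(trials, n_keep=1):
    """trials: (n_trials, n_channels, n_samples) -> denoised copy."""
    n_tr, n_ch, n_smp = trials.shape
    X = trials.transpose(1, 0, 2).reshape(n_ch, -1)   # channels x (trials*samples)
    C_raw = X @ X.T / X.shape[1]                      # covariance of the raw data
    avg = trials.mean(axis=0)                         # trial average = ERP estimate
    C_avg = avg @ avg.T / n_smp                       # covariance of the average
    # Whiten with the raw covariance, then diagonalize the averaged covariance;
    # the leading components carry the reproducible (deterministic) activity.
    d, V = np.linalg.eigh(C_raw)
    W = V @ np.diag(d ** -0.5) @ V.T                  # symmetric whitening matrix
    _, U = np.linalg.eigh(W @ C_avg @ W)
    A = (W @ U[:, ::-1][:, :n_keep]).T                # channels -> kept components
    P = np.linalg.pinv(A) @ A                         # projector onto ERP subspace
    return np.einsum('cd,tds->tcs', P, trials)        # apply to every trial

# Toy data: one fixed ERP waveform with a fixed spatial pattern, plus noise.
n_tr, n_ch, n_smp = 50, 8, 200
pattern = rng.normal(size=(n_ch, 1))
erp = np.sin(np.linspace(0, 4 * np.pi, n_smp))[None, :]
trials = pattern * erp + rng.normal(scale=1.0, size=(n_tr, n_ch, n_smp))
clean = erp_denoise(trials, n_keep=1)
```

Because the kept subspace is low-dimensional, isotropic single-trial noise is attenuated roughly in proportion to the number of discarded components, while the reproducible ERP is largely preserved.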

Underdetermined system; Noise reduction; Inverse; Electroencephalography; Dyslexia; Event-related potential; Humans; Child; Evoked Potentials; Mathematics; Language Tests; General Neuroscience; Dimensionality reduction; Brain; Signal Processing, Computer-Assisted; Pattern recognition; Linear subspace; Linear map; Acoustic Stimulation; Data Interpretation, Statistical; Linear Models; Speech Perception; Artificial intelligence; Artifacts; Algorithms; Software
Journal of Neuroscience Methods
researchProduct

The effect of harmonic context on phoneme monitoring in vocal music

2001

The processing of a target chord depends on the previous musical context in which it has appeared. This harmonic priming effect occurs for fine syntactic-like changes in context and is observed irrespective of the extent of participants' musical expertise (Bigand & Pineau, Perception and Psychophysics, 59 (1997) 1098). The present study investigates how the harmonic context influences the processing of phonemes in vocal music. Eight-chord sequences were presented to participants. The four notes of each chord were played with synthetic phonemes and participants were required to quickly decide whether the last chord (the target) was sung on a syllable containing the phoneme /i/ or /u/. The mu…

Vocal music; Linguistics and Language; Subdominant; Speech perception; Music psychology; Cognitive Neuroscience; Speech recognition; Musical syntax; Experimental and Cognitive Psychology; Language and Linguistics; Linguistics; Cognition; Perception; Auditory Perception; Developmental and Educational Psychology; Humans; Chord (music); Psychology; Priming (psychology); Music
researchProduct

How do native speakers of Russian evaluate yes/no questions produced by Finnish L2 learners?

2010

This study analyzes native Russian speakers’ evaluation of seven Russian yes/no-questions, each produced by Finnish speakers in two sets of recordings (during a stay in Russia and after it). The Finnish speakers were six female university students of Russian. The research question is interesting because the two typologically unrelated languages differ in the prosody of yes/no-questions. In Russian, a yes/no-question is created from a lexically and syntactically corresponding statement by means of intonation, whereas in Finnish the cue for questioning is an interrogative particle (-ko/-kö) instead of prosody. Hence, native Finnish speakers are likely to have difficulties in pronouncing Russian …

intonation; second language; prosody; phonetics; speech perception
researchProduct

Discriminatory Brain Processes of Native and Foreign Language in Children with and without Reading Difficulties

2022

The association between impaired speech perception and reading difficulty has been well established in native-language processing, as can be observed from brain activity. However, there has been scarce investigation of whether this association extends to brain activity during foreign-language processing. The relationship between reading skills and the neuronal representation of foreign-language speech remains unclear. In the present study, we used event-related potentials (ERPs) with high-density EEG to investigate this question. Eleven- to thirteen-year-old typically developing children (CTR) and children with reading difficulties (RD) were tested via a passive auditory oddball paradigm containing native (Finn…

language and languages; 515 Psychology; language disorders; LDN; native language; speech perception; learning difficulties; language development; perception; dyslexia; reading difficulties; EEG; foreign language; MMR; speech (speaking); literacy; 516 Educational sciences; brain; cognitive development; reading disorders; foreign languages
researchProduct

Cortical correlates of language perception : neuromagnetic studies in adults and children

2007

The goal of language perception is to understand the content of a heard or read message. Perception itself seems effortless to us. Speech recognition and reading are, however, the outcome of complex computation in the cerebral cortex, starting from the physical signal received by the eye and the ear. In her doctoral research, Tiina Parviainen examined the brain mechanisms of language processing in normally reading adults, adults with dyslexia, and children learning to read. She investigated the chain of activation in the brain involved in processing written and spoken language, and the differences in this chain between adults and children. Central to mapping the brain mechanisms of language is the activat…

reading comprehension; neuroimaging; MEG; neuromagnetic research; Magnetoencephalography; speech comprehension; reading; Speech Perception; listening comprehension; language; brain research; psychology of language; brain; adults; children
researchProduct

Dynamics of brain activation during learning of syllable-symbol paired associations.

2019

Funding: EC/H2020/641652/EU//ChildBrain (OpenAIRE). Initial stages of reading acquisition require the learning of letter and speech-sound combinations. While the long-term effects of audio-visual learning are rather well studied, relatively little is known about short-term learning effects at the brain level. Here we examined the cortical dynamics of short-term learning using magnetoencephalography (MEG) and electroencephalography (EEG) in two experiments that respectively addressed active and passive learning of the association between shown symbols and heard syllables. In Experiment 1, learning was based on feedback provided after each trial. The learning of the audio-visual associations was c…

magnetoencephalography; Male; Audiology; Electroencephalography; reading; Learning effect; Behavioral Neuroscience; Cortex (anatomy); EEG; Evoked Potentials; Cerebral Cortex; learning; MEG; Pattern Recognition, Visual; Evoked Potentials, Auditory; Speech Perception; Female; Syllable; Psychology; Adult; Cognitive Neuroscience; education; Experimental and Cognitive Psychology; Young Adult; Learning; Humans; Association (psychology); audiovisual material; Association Learning; Passive learning; audio-visual
Neuropsychologia
researchProduct

Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level.

2019

During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information and the familiarity of a syllable, that is, whether it appears in the listener’s native language or not, also modulates brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult Finnish native speakers and 12 adult Chinese native speakers. The…

magnetoencephalography; familiarity; audio-visual integration; speech perception; audio-visual stimuli; Neuroscience; Original Research
Frontiers in Human Neuroscience
researchProduct

Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level

2019

During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information and the familiarity of a syllable, that is, whether it appears in the listener’s native language or not, also modulates brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult Finnish native speakers and 12 adult Chinese native speakers. The…

magnetoencephalography; familiarity; audio-visual integration; speech perception; audio-visual stimuli; Neurosciences. Biological psychiatry. Neuropsychiatry (LCSH RC321-571)
Frontiers in Human Neuroscience
researchProduct

Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level

2019

During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information and the familiarity of a syllable, that is, whether it appears in the listener's native language or not, also modulates brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult Finnish native speakers and 12 adult Chinese native speakers. The…

magnetoencephalography; familiarity; speech (speaking); MEG; perception; audio-visual integration; speech perception; audio-visual stimuli; stimuli
researchProduct

A role for backward transitional probabilities in word segmentation?

2008

A number of studies have shown that people exploit transitional probabilities between successive syllables to segment a stream of continuous artificial speech into words. It is often assumed that what is actually exploited are the forward transitional probabilities (given a pair XY, the probability that X will be followed by Y), even though the backward transitional probabilities (the probability that Y has been preceded by X) were equally informative about word structure in the languages involved in those studies. In two experiments, we showed that participants were able to learn the words from an artificial speech stream when the only available cues were the backward transitional probabilities.…
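The forward/backward distinction in this abstract is easy to make concrete: over adjacent syllable pairs, the forward TP normalizes a pair count by its first element, the backward TP by its second. The toy "language" below (two words, "pa-bi" and "ti-bu") is illustrative, not the stimuli used in the study.

```python
from collections import Counter

def transition_probs(stream):
    """Return (forward, backward) transitional probabilities per syllable pair.

    forward  P(Y | X) = count(XY) / count(X as first element of a pair)
    backward P(X | Y) = count(XY) / count(Y as second element of a pair)
    """
    pairs = Counter(zip(stream, stream[1:]))
    first = Counter(x for x, _ in pairs.elements())   # how often X starts a pair
    second = Counter(y for _, y in pairs.elements())  # how often Y ends a pair
    fwd = {(x, y): c / first[x] for (x, y), c in pairs.items()}
    bwd = {(x, y): c / second[y] for (x, y), c in pairs.items()}
    return fwd, bwd

# Words "pa-bi" and "ti-bu" concatenated: TPs are high within words
# and lower across word boundaries, in both directions.
stream = "pa bi ti bu pa bi pa bi ti bu ti bu pa bi".split()
fwd, bwd = transition_probs(stream)
```

In this stream, "pa" is always followed by "bi" and "bi" is always preceded by "pa" (both TPs are 1), while a boundary pair like ("bi", "ti") has a lower forward TP, which is the cue learners are hypothesized to track.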

Speech recognition; Experimental and Cognitive Psychology; Arts and Humanities (miscellaneous); Phonetics; Perception; Humans; Segmentation; Attention; Communication; Parsing; Text segmentation; Linguistics; Mutual information; Semantics; Constructed language; Neuropsychology and Physiological Psychology; Speech Perception; Cues; Probability Learning; Psychology; Memory; Cognition
researchProduct