Search results for "speech"

Showing 10 of 1281 documents

When nominal features are marked on verbs: A transcranial magnetic stimulation study

2006

It has been claimed that verb processing (as opposed to noun processing) is subserved by specific neural circuits in the left prefrontal cortex. In this study, we took advantage of an unusual grammatical characteristic of clitic pronouns in Italian (e.g., lo and la in portalo and portala, 'bring it [masculine]/[feminine]', respectively), namely that clitics have both nominal and verbal characteristics, to explore the neural correlates of verb and clitic processing. We used repetitive transcranial magnetic stimulation (rTMS) to suppress the excitability of the left prefrontal cortex and to assess its role in producing verb+det+noun and verb+clitic phrases. Results showed an interference ef…

Adult; Linguistics and Language; Cognitive Neuroscience; Prefrontal Cortex; Experimental and Cognitive Psychology; Verb; Functional Laterality; Language and Linguistics; Magnetics; Speech and Hearing; Clitic; Noun; Reaction Time; Medicine; Humans; Psycholinguistics; Language; Syntax; Electric Stimulation; Linguistics; Noun phrase; Transcranial magnetic stimulation; Italy; TMS; Laterality; Clitics; Psychology; Cognitive psychology

Measures of native and non-native rhythm in a quantity language.

2005

The traditional phonetic classification of language rhythm as stress-timed or syllable-timed is attributed to Pike. Recently, two different proposals have been offered for describing the rhythmic structure of languages from acoustic-phonetic measurements. Ramus has suggested a metric based on the proportion of vocalic intervals and the variability (SD) of consonantal intervals. Grabe has proposed Pairwise Variability Indices (nPVI, rPVI), calculated from the differences in vocalic and consonantal durations between successive syllables. We have calculated both the Ramus and Grabe metrics for Latvian, traditionally considered a syllable rhythm language, and for Latvian as spoken by Russian l…
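
The two families of metrics described here are simple enough to state directly. As an illustrative sketch (the formulas as commonly given in the Ramus and Grabe literature; function and variable names are our own, durations assumed in milliseconds):

```python
def npvi(durations):
    """Grabe's normalised PVI, usually applied to vocalic interval durations."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

def rpvi(durations):
    """Grabe's raw PVI, usually applied to consonantal interval durations."""
    pairs = list(zip(durations, durations[1:]))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def ramus_metrics(vocalic, consonantal):
    """Ramus's %V (proportion of vocalic duration) and the SD of consonantal intervals."""
    percent_v = 100 * sum(vocalic) / (sum(vocalic) + sum(consonantal))
    mean_c = sum(consonantal) / len(consonantal)
    sd_c = (sum((d - mean_c) ** 2 for d in consonantal) / len(consonantal)) ** 0.5
    return percent_v, sd_c
```

Perfectly isochronous syllables give an nPVI of 0; alternating long and short intervals drive it up, which is why nPVI separates "stress-timed" from "syllable-timed" languages in this line of work.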

Adult; Linguistics and Language; Sociology and Political Science; Adolescent; First language; Multilingualism; Experimental psychology; Language and Linguistics; Speech Acoustics; Speech-language pathology & audiology; Speech and Hearing; Rhythm; Phonetics; Humans; Learning; Acoustic phonetics; Mathematics; Pike; Aged; Language; Psycholinguistics; Social sciences; Indo-European languages; Latvian; General Medicine; Middle Aged; Linguistics; Speech Perception; Syllable; Psychoacoustics; Language and speech

The German hearing in noise test

2020

The aims of this study were to develop a German Hearing In Noise Test (HINT) using the same methodology as previous HINT tests; to develop sentence lists for measuring speech reception thresholds (SRTs); and to determine test-retest reliability and norms for measures obtained under headphones. The following steps were followed: develop and record sentences; synthesise masking noise; determine the performance-intensity (PI) function; equalise sentence difficulty in the masking noise; form sentence lists of equal difficulty; and measure SRTs for normal-hearing individuals to determine practice/learning effects, test-retest reliability, and norms. Three groups of adults (median age = 25 years) …
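
An SRT is the signal-to-noise ratio at which a listener repeats 50% of sentences correctly, and it is typically measured with an adaptive track. The following is a simplified sketch of a one-up/one-down procedure, not the HINT protocol itself; `present_sentence` is a hypothetical callable returning True when the listener repeats the sentence correctly, and the step size and discard rule are illustrative assumptions:

```python
def track_srt(present_sentence, start_snr=0.0, step=2.0, n_sentences=20):
    """One-up/one-down adaptive track; returns an SRT estimate in dB SNR."""
    snr, history = start_snr, []
    for _ in range(n_sentences):
        correct = present_sentence(snr)  # True if the sentence was repeated correctly
        history.append(snr)
        snr += -step if correct else step  # harder after a hit, easier after a miss
    return sum(history[4:]) / len(history[4:])  # mean SNR, first trials discarded
```

With a simulated listener who is correct whenever the SNR is above some threshold, the track oscillates around that threshold and the averaged tail converges near it.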

Adult; Linguistics and Language; Speech perception; Computer science; Audiology; Language and Linguistics; German; Speech and Hearing; Hearing; Medicine; Humans; Speech reception; Speech reception threshold; Otorhinolaryngology; Language; Speech Reception Threshold Test; Reproducibility of Results; Test (assessment); Noise; Speech Perception; Sentence; International Journal of Audiology

Human voice pitch measures are robust across a variety of speech recordings: methodological and theoretical implications

2021

Fundamental frequency (fo), perceived as voice pitch, is the most sexually dimorphic, perceptually salient and intensively studied voice parameter in human nonverbal communication. Thousands of studies have linked human fo to biological and social speaker traits and life outcomes, from reproductive to economic. Critically, researchers have used myriad speech stimuli to measure fo and infer its functional relevance, from individual vowels to longer bouts of spontaneous speech. Here, we acoustically analysed fo in nearly 1000 affectively neutral speech utterances (vowels, words, counting, greetings, read paragraphs and free spontaneous speech) produced by the same 154 men and women, ag…
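
For a sense of how fo is extracted from a recording, here is a rough sketch of the textbook autocorrelation approach for a single voiced frame. This is our own illustration, not the study's pipeline: production pitch trackers (e.g. Praat's algorithm) add windowing, voicing detection and octave-error correction on top of this idea.

```python
import numpy as np

def estimate_fo(frame, sr, fmin=75.0, fmax=500.0):
    """Autocorrelation pitch estimate for one voiced frame; returns Hz."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # candidate period range in samples
    lag = lo + int(np.argmax(ac[lo:hi]))     # lag of the strongest periodicity
    return sr / lag

# smoke test: a 40 ms synthetic 200 Hz tone
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
tone = np.sin(2 * np.pi * 200.0 * t)
```

The autocorrelation of a periodic frame peaks at multiples of the period; restricting the search to lags between sr/fmax and sr/fmin avoids the trivial peak at lag zero.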

Adult; Male; Voice pitch; Speech; Biological anthropology; Biology; Evolutionary biology; Speech Acoustics; Biomechanics; Nonverbal communication; Source-filter theory; Humans; Sexual selection; Human voice; Developmental biology; Communication; Methods and statistics; Evolution; Fundamental frequency; Variety (linguistics); Agricultural and Biological Sciences (miscellaneous); Behaviour; Salient; Cognitive science/Psychology; Voice; Female; Animal Behaviour; General Agricultural and Biological Sciences; Biology Letters

Chromosome 15q BP3 to BP5 deletion is a likely locus for speech delay and language impairment: Report on a four‐member family and an unrelated boy

2020

Background: Deletions in chromosome 15q13 have been reported both in healthy people and in individuals with a wide range of behavioral and neuropsychiatric disturbances. Six main breakpoint (BP) subregions (BP1-BP6) are mapped to the 15q13 region, with three further embedded BP regions (BP3-BP5). The BP4-BP5 deletion is the most frequently observed rearrangement, compared with the other known deletions in the BP3-BP5 and BP3-BP4 regions. Deletions of each of these three regions have previously been implicated in a variable range of clinical phenotypes, including minor dysmorphism, developmental delay/intellectual disability, epilepsy, autism spectrum disorders, behavioral disturbances, and speec…

Adult; Male; Speech delay; Adolescent; BP3-BP5 deletion; Chromosome Disorders; Locus (genetics); Genetics & heredity; Epilepsy; Settore MED/38 - Pediatria Generale e Specialistica; Seizures; Intellectual Disability; Chromosome 15q13; Genetics; Medicine; Humans; Language Development Disorders; Child; Molecular Biology; Genetics (clinical); Chromosomes, Human, Pair 15; Breakpoint; Language impairment; Phenotype; Pedigree; Developmental delay; Language development; Autism; Female; Chromosome Deletion; Molecular Genetics & Genomic Medicine

Evidence for a spatial bias in the perception of sequences of brief tones

2013

Listeners are unable to report the physical order of particular sequences of brief tones. This phenomenon of temporal dislocation depends on tone durations and frequencies. The current study empirically shows that it also depends on the spatial location of the tones. Dichotically testing a three-tone sequence showed that the central tone tends to be reported as the first or the last element when it is perceived as part of a left-to-right motion. Since the central-tone dislocation does not occur for right-to-left sequences of the same tones, this indicates that there is a spatial bias in the perception of sequences. © 2013 Acoustical Society of America.

Adult; Male; Acoustic Stimulation; Audiometry, Pure-Tone; Dichotic Listening Tests; Female; Humans; Pattern Recognition, Physiological; Psychoacoustics; Time Factors; Young Adult; Pitch Perception; Time Perception; Acoustics and Ultrasonics; Arts and Humanities (miscellaneous); Medicine (all); Speech recognition; Acoustics; Spatial bias; Motion (physics); Tone (musical instrument); Dislocation (syntax); Perception; Temporal dislocation; Mathematics; Sequence; Settore INF/01 - Informatica; Dichotic listening

Temporal weighting of loudness: Comparison between two different psychophysical tasks

2016

Psychophysical studies on loudness have so far examined the temporal weighting of loudness solely in level-discrimination tasks. Typically, listeners were asked to discriminate hundreds of level-fluctuating sounds with regard to their global loudness. Temporal weights, i.e., the importance of each temporal portion of the stimuli for the loudness judgment, were then estimated from listeners' responses. Consistent non-uniform "u-shaped" temporal weighting patterns were observed, with greater weights assigned to the first and the last temporal portions of the stimuli, revealing significant primacy and recency effects, respectively. In this study, the question was addressed…

Adult; Male; Acoustics and Ultrasonics; Loudness Perception; Acoustics; Speech recognition; Decision Making; Loudness; Task (project management); Judgment; Young Adult; Arts and Humanities (miscellaneous); Humans; Psychology and cognitive sciences; Mathematics; Engineering Sciences/Acoustics; Analysis of Variance; Psychological Tests; Weighting; Acoustic Stimulation; Cognitive science/Psychology; Female; Noise; Perceptual Masking; The Journal of the Acoustical Society of America

Measuring and modeling real-time responses to music: the dynamics of tonality induction.

2003

We examined a variety of real-time responses evoked by a single piece of music, the organ Duetto BWV 805 by J S Bach. The primary data came from a concurrent probe-tone method in which the probe tone is sounded continuously with the music. Listeners judged how well the probe tone fit with the music at each point in time. The process was repeated for all probe tones of the chromatic scale. A self-organizing map (SOM) [Kohonen 1997 Self-organizing Maps (Berlin: Springer)] was used to represent the developing and changing sense of key reflected in these judgments. The SOM was trained on the probe-tone profiles for 24 major and minor keys (Krumhansl and Kessler 1982 Psychological Review 89 334–…

Adult; Male; Acoustics; Speech recognition; Experimental and Cognitive Psychology; Music; Pitch class; Tone (musical instrument); Artificial Intelligence; Psychophysics; Humans; Chromatic scale; Tonality; Pitch Perception; Major and minor; Supertonic; Humanities and the arts; Scale (music); Sensory Systems; Ophthalmology; Dynamics (music); Auditory Perception; Female; Psychology; Perception

Interval between two sequential arrays determines their storage state in visual working memory.

2020

Visual information can be stored as either "active" representations in the active state or "activity-silent" representations in the passive state during the retention period in visual working memory (VWM). To cater to the dynamic nature of the visual world, we explored how temporally dynamic visual input is stored in VWM. In the current study, the memory arrays were presented sequentially, and the contralateral delay activity (CDA), an electrophysiological measure, was used to identify whether the memory representations were transferred into the passive state. Participants were instructed to encode two sequential arrays and retrieve them respectively, with two conditions of int…

Adult; Male; Adolescent; Computer science; Speech recognition; Medicine; Visual memory; Article; Young Adult; Mode (computer interface); Humans; Psychology; Attention; Science; Multidisciplinary; Working memory; Brain; Electroencephalography; Task (computing); Interval (music); Memory, Short-Term; Visual Perception; Female; State (computer science); Photic Stimulation; Scientific Reports

Ability for Voice Recognition Is a Marker for Dyslexia in Children

2014

A recent voice recognition experiment conducted by Perrachione, Del Tufo, and Gabrieli (2011) revealed that, in normal adult readers, the accuracy at identifying human voices was better in the participants’ mother tongue than in an unfamiliar language, while this difference was absent in a group of adults with dyslexia. This pattern favored a view of dyslexia as due to “fundamentally impoverished native-language phonological representations.” To further examine this issue, we conducted two voice recognition experiments, one with children with/without dyslexia, and the other with adults with/without dyslexia. Results revealed that children/adults with dyslexia were less accurate at identify…

Adult; Male; Adolescent; Speech recognition; First language; Experimental and Cognitive Psychology; Phonological deficit; Biological theories of dyslexia; Developmental psychology; Dyslexia; Young Adult; Arts and Humanities (miscellaneous); Phonetics; Medicine; Humans; Child; General Psychology; Language; Multisensory integration; Recognition, Psychology; General Medicine; Voice; Female; Psychology; Surface dyslexia; Cognitive psychology; Experimental Psychology