WorldWideScience

Sample records for human motor speech

  1. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594
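
    As a toy illustration of the adaptation paradigm described above (a sketch, not the authors' model), trial-by-trial compensation to a constant auditory perturbation can be written as a simple error-correction update:

```python
# Toy model of adaptation to altered auditory feedback (illustrative only).
# A speaker aims at a target formant; feedback is shifted by a constant
# perturbation, and the motor command is updated against the perceived error.

def simulate_adaptation(target, perturbation, learning_rate=0.2, n_trials=50):
    """Return the motor command after each trial."""
    command = target                        # start by producing the target
    history = []
    for _ in range(n_trials):
        feedback = command + perturbation   # altered auditory feedback
        error = feedback - target           # perceived acoustic error
        command -= learning_rate * error    # compensate opposite the shift
        history.append(command)
    return history

history = simulate_adaptation(target=500.0, perturbation=100.0)
# The command converges toward target - perturbation (here 400), i.e.
# production drifts into a different phonetic region -- the kind of shift
# the perceptual tests in this study probe.
```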

  2. Auditory-motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development.

    Science.gov (United States)

    Terband, H; Maassen, B; Guenther, F H; Brumberg, J

    2014-01-01

    Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. Learning outcomes: The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between speech sound disorders (SSD) and the subtype childhood apraxia of speech (CAS); (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
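
    The interaction the simulations explore can be caricatured with a deterministic update rule in which a motor deficit attenuates command updates and an auditory deficit attenuates the self-monitoring error signal (a sketch under those assumptions, not the DIVA model itself):

```python
# Deterministic toy of feedback-driven learning with a motor processing
# deficit (motor_gain < 1) and an auditory self-monitoring deficit
# (auditory_gain < 1). NOT the DIVA model -- just the interaction, in caricature.

def learn(target, motor_gain=1.0, auditory_gain=1.0, rate=0.3, n_trials=40):
    """Return final absolute production error after feedback learning."""
    command = 0.0
    for _ in range(n_trials):
        perceived_error = auditory_gain * (target - command)  # self-monitoring
        command += motor_gain * rate * perceived_error        # motor update
    return abs(target - command)

intact  = learn(1.0)                                     # no deficit
mpd     = learn(1.0, motor_gain=0.5)                     # motor deficit only
mpd_apd = learn(1.0, motor_gain=0.5, auditory_gain=0.3)  # combined deficit
# Residual error orders as intact < MPD < MPD+APD: impaired self-monitoring
# compounds the motor deficit.
```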

  3. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    Science.gov (United States)

    Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630

  4. Subcortical Contributions to Motor Speech: Phylogenetic, Developmental, Clinical.

    Science.gov (United States)

    Ziegler, W; Ackermann, H

    2017-08-01

    Vocal learning is an exclusively human trait among primates. However, songbirds demonstrate behavioral features resembling human speech learning. Two circuits have a preeminent role in this human behavior; namely, the corticostriatal and the cerebrocerebellar motor loops. While the striatal contribution can be traced back to the avian anterior forebrain pathway (AFP), the sensorimotor adaptation functions of the cerebellum appear to be human specific in acoustic communication. This review contributes to an ongoing discussion on how birdsong translates into human speech. While earlier approaches were focused on higher linguistic functions, we place the motor aspects of speaking at center stage. Genetic data are brought together with clinical and developmental evidence to outline the role of cerebrocerebellar and corticostriatal interactions in human speech. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and add to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation: Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures.

  6. Modeling speech imitation and ecological learning of auditory-motor maps

    Directory of Open Access Journals (Sweden)

    Claudia eCanevari

    2013-06-01

    Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The main limitations of current ASR are evident in the realistic use of such systems. These limitations can be partly reduced by using normalization strategies that minimize inter-speaker variability by either explicitly removing speakers' peculiarities or adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compared it with strategies that ignore the motor domain. Specifically, we first trained a regressor through state-of-the-art machine learning techniques to build an auditory-motor mapping, in a sense mimicking a human learner that tries to reproduce utterances produced by other speakers. This auditory-motor mapping maps the speech acoustics of a speaker into the motor plans of a reference speaker. Since, during recognition, only speech acoustics are available, the mapping is necessary to recover motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers that was used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy almost always outperforms all other strategies where only acoustics is taken into account.
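
    The normalization idea can be sketched in a few lines: learn a least-squares auditory-to-motor map from paired examples, then classify in the recovered motor space. Synthetic features and a linear "speaker distortion" are assumed here; the paper's regressors and corpora are not reproduced.

```python
import numpy as np

# Sketch of motor-based speaker normalization on synthetic data.
rng = np.random.default_rng(0)

# Hypothetical reference-speaker motor plans for two phones (2-D parameters).
motor_centroids = {"a": np.array([1.0, 0.0]), "i": np.array([0.0, 1.0])}

# A new speaker's acoustics: a fixed linear distortion of the reference
# motor space (the speaker peculiarity to be normalized away) plus noise.
distortion = np.array([[1.5, 0.3], [-0.2, 0.8]])

def acoustics(phone):
    return distortion @ motor_centroids[phone] + 0.05 * rng.standard_normal(2)

# Training: paired (acoustic, reference-motor) examples -> least-squares map,
# mimicking a learner reproducing another speaker's utterances.
phones = ["a", "i"] * 50
X = np.stack([acoustics(p) for p in phones])         # acoustic features
Y = np.stack([motor_centroids[p] for p in phones])   # reference motor plans
W, *_ = np.linalg.lstsq(X, Y, rcond=None)            # auditory-motor mapping

def classify(acoustic_obs):
    """Recover motor parameters, then pick the nearest motor centroid."""
    motor_est = acoustic_obs @ W
    return min(motor_centroids,
               key=lambda p: np.linalg.norm(motor_est - motor_centroids[p]))
```

At recognition time only acoustics are available, so `classify` first applies the learned map, mirroring the paper's point that the mapping is needed to recover motor information.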

  7. The motor theory of speech perception revisited.

    Science.gov (United States)

    Massaro, Dominic W; Chen, Trevor H

    2008-04-01

    Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.

  8. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    Science.gov (United States)

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
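
    The core tenet (word hypotheses updating sound-to-phoneme probabilities) can be sketched with a toy two-word lexicon and add-one smoothing; the lexicon, inventory and update rule here are assumptions for illustration, not the paper's implementation:

```python
# Toy accent adaptation: each word the listener settles on updates
# P(phoneme | sound), so later words in the same accent are recognized better.

LEXICON = {"beet": ("b", "i", "t"), "bit": ("b", "I", "t")}
PHONEMES = ("b", "i", "I", "t")      # toy native phoneme inventory

counts = {}  # counts[sound][phoneme]: accumulated accent evidence

def p(sound, phoneme):
    """P(phoneme | sound) with add-one smoothing over the inventory."""
    row = counts.get(sound, {})
    return (row.get(phoneme, 0.0) + 1.0) / (sum(row.values()) + len(PHONEMES))

def score(word, sounds):
    """Unnormalized likelihood of the heard sounds under a word hypothesis."""
    result = 1.0
    for sound, phoneme in zip(sounds, LEXICON[word]):
        result *= p(sound, phoneme)
    return result

def recognize(sounds, context_word=None):
    """Pick the best word; learn from context when it identifies the word."""
    best = max(LEXICON, key=lambda w: score(w, sounds))
    target = context_word or best
    for sound, phoneme in zip(sounds, LEXICON[target]):
        row = counts.setdefault(sound, {})
        row[phoneme] = row.get(phoneme, 0.0) + 1.0
    return best

# An accent realizes native /i/ as an unfamiliar sound "e". Context (e.g.
# sentence meaning) identifies the first tokens as "beet"; afterwards the
# accented vowel resolves correctly with no context at all.
for _ in range(5):
    recognize(("b", "e", "t"), context_word="beet")
winner = recognize(("b", "e", "t"))
```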

  9. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  10. Parent-child interaction in motor speech therapy.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

    This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed with direct motor speech intervention for children with speech sound disorders. Eighty-four preschool age children with speech sound disorders were provided either high- (2×/week/10 weeks) or low-intensity (1×/week/10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item and then followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items, where a greater number of sessions positively influenced parent learning of treatment skills and child behaviors. The adapted PCIOs is reliable and sensitive to monitor the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation Parent-centered therapy is considered a cost effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments such as developmental motor speech interventions that require a high degree of skill and training. For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of the parent-child interactions during
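
    The inter-rater statistic reported above is Cohen's kappa; a plain implementation for two raters (the item scores below are made up, and the PCIOs scoring scheme itself is not reproduced):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical item scores from the treating clinician and the independent
# speech-language pathologist:
clinician   = [1, 2, 2, 3, 1, 2, 3, 3, 2, 1]
independent = [1, 2, 2, 3, 2, 2, 3, 1, 2, 1]
kappa = cohens_kappa(clinician, independent)   # ~0.69 here
```

Values in the study's reported 0.33-0.64 range fall in the conventional "fair to good" band for kappa.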

  11. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy, two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades, evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use, taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  12. Motor laterality as an indicator of speech laterality.

    Science.gov (United States)

    Flowers, Kenneth A; Hudson, John M

    2013-03-01

    The determination of speech laterality, especially where it is anomalous, is both a theoretical issue and a practical problem for brain surgery. Handedness is commonly thought to be related to speech representation, but exactly how is not clearly understood. This investigation analyzed handedness by preference rating and performance on a reliable task of motor laterality in 34 patients undergoing a Wada test, to see if they could provide an indicator of speech laterality. Hand usage preference ratings divided patients into left, right, and mixed in preference. Between-hand differences in movement time on a pegboard task determined motor laterality. Results were correlated (χ2) with speech representation as determined by a standard Wada test. It was found that patients whose between-hand difference in speed on the motor task was small or inconsistent were the ones whose Wada test speech representation was likely to be ambiguous or anomalous, whereas all those with a consistently large between-hand difference showed clear unilateral speech representation in the hemisphere controlling the better hand (χ2 = 10.45, df = 1, p < .01). This suggests that motor and speech laterality are related where they both involve a central control of motor output sequencing, and that a measure of that aspect of the former will indicate the likely representation of the latter. A between-hand measure of motor laterality based on such a measure may indicate the possibility of anomalous speech representation. PsycINFO Database Record (c) 2013 APA, all rights reserved.
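
    The χ² analysis reported above is a standard test of independence on a contingency table; a pure-Python version for the 2x2 case (the cell counts below are hypothetical, not the study's data):

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table (df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: consistent vs. inconsistent between-hand speed
# difference against clear vs. anomalous Wada speech lateralization.
stat = chi_square_2x2([[18, 2], [4, 10]])
significant = stat > 3.841   # critical value for df = 1, alpha = .05
```

With df = 1 the 5% critical value is 3.841, so a statistic like the reported 10.45 lies well past it.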

  13. Oral motor deficits in speech-impaired children with autism

    Science.gov (United States)

    Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha

    2013-01-01

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480

  14. Oral Motor Deficits in Speech-Impaired Children with Autism

    Directory of Open Access Journals (Sweden)

    Matthew K Belmonte

    2013-07-01

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive versus expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual.

  15. Crosslinguistic Application of English-Centric Rhythm Descriptors in Motor Speech Disorders

    Science.gov (United States)

    Liss, Julie M.; Utianski, Rene; Lansford, Kaitlin

    2014-01-01

    Background Rhythmic disturbances are a hallmark of motor speech disorders, in which the motor control deficits interfere with the outward flow of speech and, by extension, speech understanding. As the functions of rhythm are language-specific, breakdowns in rhythm should have language-specific consequences for communication. Objective The goals of this paper are to (i) provide a review of the cognitive-linguistic role of rhythm in speech perception in a general sense and crosslinguistically; (ii) present new results of lexical segmentation challenges posed by different types of dysarthria in American English; and (iii) offer a framework for crosslinguistic considerations for speech rhythm disturbances in the diagnosis and treatment of communication disorders associated with motor speech disorders. Summary This review presents theoretical and empirical reasons for considering speech rhythm as a critical component of communication deficits in motor speech disorders, and addresses the need for crosslinguistic research to explore language-universal versus language-specific aspects of motor speech disorders. PMID:24157596
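
    One family of the English-centric descriptors in question is duration-based rhythm metrics such as the normalized Pairwise Variability Index (nPVI); a minimal implementation on made-up vocalic-interval durations:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over successive intervals."""
    pairs = list(zip(durations, durations[1:]))
    total = sum(abs(a - b) / ((a + b) / 2.0) for a, b in pairs)
    return 100.0 * total / len(pairs)

# Alternating long/short vocalic intervals (stress-timed-like) vs. nearly
# even intervals (syllable-timed-like); durations in ms are hypothetical.
uneven = [120, 60, 130, 55, 125, 65]
even = [90, 95, 92, 94, 91, 93]
uneven_npvi = npvi(uneven)   # high pairwise variability
even_npvi = npvi(even)       # low pairwise variability
```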

  16. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that form a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.
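
    The "phonemic categorical boundary" in such tasks is typically located as the 50% crossover of the identification curve; a minimal interpolation sketch (the response proportions are invented, not the patient's data):

```python
def category_boundary(steps, proportions):
    """Interpolate where an identification curve crosses 0.5.

    steps: stimulus positions along the continuum;
    proportions: proportion of one response (e.g., 'ADA') at each step."""
    points = list(zip(steps, proportions))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0 and y0 != y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("no 0.5 crossing in the identification curve")

steps = [1, 2, 3, 4, 5, 6, 7]                       # ADA-AGA continuum steps
p_ada = [0.98, 0.95, 0.90, 0.60, 0.15, 0.05, 0.02]  # hypothetical responses
boundary = category_boundary(steps, p_ada)          # falls between 4 and 5
```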

  17. Representational Similarity Analysis Reveals Heterogeneous Networks Supporting Speech Motor Control

    DEFF Research Database (Denmark)

    Zheng, Zane; Cusack, Rhodri; Johnsrude, Ingrid

    The everyday act of speaking involves the complex processes of speech motor control. One important feature of such control is regulation of articulation when auditory concomitants of speech do not correspond to the intended motor gesture. While theoretical accounts of speech monitoring posit multiple functional components required for detection of errors in speech planning (e.g., Levelt, 1983), neuroimaging studies generally indicate either single brain regions sensitive to speech production errors, or small, discrete networks. Here we demonstrate that the complex system controlling speech is supported by a complex neural network that is involved in linguistic, motoric and sensory processing. With the aid of novel real-time acoustic analyses and representational similarity analyses of fMRI signals, our data show functionally differentiated networks underlying auditory feedback control of speech.

  18. Motor Programming in Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Robin, Donald A.; Wright, David L.; Ballard, Kirrie J.

    2008-01-01

    Apraxia of Speech (AOS) is an impairment of motor programming. However, the exact nature of this deficit remains unclear. The present study examined motor programming in AOS in the context of a recent two-stage model [Klapp, S. T. (1995). Motor response programming during simple and choice reaction time: The role of practice. "Journal of…

  19. Sensorimotor oscillations prior to speech onset reflect altered motor networks in adults who stutter

    Directory of Open Access Journals (Sweden)

    Anna-Maria Mersov

    2016-09-01

    Adults who stutter (AWS) have demonstrated atypical coordination of motor and sensory regions during speech production. Yet little is known of the speech-motor network in AWS in the brief time window preceding audible speech onset. The purpose of the current study was to characterize neural oscillations in the speech-motor network during preparation for and execution of overt speech production in AWS using magnetoencephalography (MEG). Twelve AWS and twelve age-matched controls were presented with 220 words, each word embedded in a carrier phrase. Controls were presented with the same word list as their matched AWS participant. Neural oscillatory activity was localized using minimum-variance beamforming during two time periods of interest: speech preparation (prior to speech onset) and speech execution (following speech onset). Compared to controls, AWS showed stronger beta (15-25 Hz) suppression in the speech preparation stage, followed by stronger beta synchronization in the bilateral mouth motor cortex. AWS also recruited the right mouth motor cortex significantly earlier in the speech preparation stage compared to controls. Exaggerated motor preparation is discussed in the context of reduced coordination in the speech-motor network of AWS. It is further proposed that exaggerated beta synchronization may reflect a more strongly inhibited motor system that requires a stronger beta suppression to disengage prior to speech initiation. These novel findings highlight critical differences in the speech-motor network of AWS that occur prior to speech onset and emphasize the need to investigate further the speech-motor assembly in the stuttering population.
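
    Beta-band (15-25 Hz) power of the kind tracked here can be illustrated with a simple FFT band-power estimate on a synthetic signal (a toy stand-in for the beamformer-based MEG analysis, not the study's pipeline):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean FFT power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
# Synthetic "motor cortex" trace: a 20 Hz beta rhythm plus broadband noise.
sig = np.sin(2 * np.pi * 20.0 * t) + 0.3 * rng.standard_normal(t.size)

beta = band_power(sig, fs, 15.0, 25.0)
alpha = band_power(sig, fs, 8.0, 12.0)
# Suppression/synchronization effects are then expressed relative to a
# baseline window, e.g. (beta_task - beta_baseline) / beta_baseline.
```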

  20. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose: Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between

  1. Auditory-motor interactions in pediatric motor speech disorders: Neurocomputational modeling of disordered development

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.; Guenther, F. H.; Brumberg, J.

    2014-01-01

    BACKGROUND/PURPOSE: Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between

  2. Communication Supports for People with Motor Speech Disorders

    Science.gov (United States)

    Hanson, Elizabeth K.; Fager, Susan K.

    2017-01-01

    Communication supports for people with motor speech disorders can include strategies and technologies to supplement natural speech efforts, resolve communication breakdowns, and replace natural speech when necessary to enhance participation in all communicative contexts. This article emphasizes communication supports that can enhance…

  3. Motor functions and adaptive behaviour in children with childhood apraxia of speech.

    Science.gov (United States)

    Tükel, Şermin; Björelius, Helena; Henningsson, Gunilla; McAllister, Anita; Eliasson, Ann Christin

    2015-01-01

    Undiagnosed motor and behavioural problems have been reported for children with childhood apraxia of speech (CAS). This study aims to understand the extent of these problems by determining the profile of and relationships between speech/non-speech oral, manual and overall body motor functions and adaptive behaviours in CAS. Eighteen children (five girls and 13 boys) with CAS, 4 years 4 months to 10 years 6 months old, participated in this study. The assessments used were the Verbal Motor Production Assessment for Children (VMPAC), Bruininks-Oseretsky Test of Motor Proficiency (BOT-2) and Adaptive Behaviour Assessment System (ABAS-II). Median result of speech/non-speech oral motor function was between -1 and -2 SD of the mean VMPAC norms. For BOT-2 and ABAS-II, the median result was between the mean and -1 SD of test norms. However, on an individual level, many children had co-occurring difficulties (below -1 SD of the mean) in overall and manual motor functions and in adaptive behaviour, despite few correlations between sub-tests. In addition to the impaired speech motor output, children displayed heterogeneous motor problems suggesting the presence of a global motor deficit. The complex relationship between motor functions and behaviour may partly explain the undiagnosed developmental difficulties in CAS.

  4. Philosophy of Research in Motor Speech Disorders

    Science.gov (United States)

    Weismer, Gary

    2006-01-01

The primary objective of this position paper is to assess the theoretical and empirical support that exists for the Mayo Clinic view of motor speech disorders in general, and for oromotor, nonverbal tasks as a window to speech production processes in particular. Literature both in support of and against the Mayo Clinic view and the associated use…

  5. Neuropharmacology of Poststroke Motor and Speech Recovery.

    Science.gov (United States)

    Keser, Zafer; Francisco, Gerard E

    2015-11-01

    Almost 7 million adult Americans have had a stroke. There is a growing need for more effective treatment options as add-ons to conventional therapies. This article summarizes the published literature for pharmacologic agents used for the enhancement of motor and speech recovery after stroke. Amphetamine, levodopa, selective serotonin reuptake inhibitors, and piracetam were the most commonly used drugs. Pharmacologic augmentation of stroke motor and speech recovery seems promising but systematic, adequately powered, randomized, and double-blind clinical trials are needed. At this point, the use of these pharmacologic agents is not supported by class I evidence. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Dopamine Regulation of Human Speech and Bird Song: A Critical Review

    Science.gov (United States)

    Simonyan, Kristina; Horwitz, Barry; Jarvis, Erich D.

    2012-01-01

    To understand the neural basis of human speech control, extensive research has been done using a variety of methodologies in a range of experimental models. Nevertheless, several critical questions about learned vocal motor control still remain open. One of them is the mechanism(s) by which neurotransmitters, such as dopamine, modulate speech and…

  7. Characteristics of motor speech phenotypes in multiple sclerosis.

    Science.gov (United States)

    Rusz, Jan; Benova, Barbora; Ruzickova, Hana; Novotny, Michal; Tykalova, Tereza; Hlavnicka, Jan; Uher, Tomas; Vaneckova, Manuela; Andelova, Michaela; Novotna, Klara; Kadrnozkova, Lucie; Horakova, Dana

    2018-01-01

    Motor speech disorders in multiple sclerosis (MS) are poorly understood and their quantitative, objective acoustic characterization remains limited. Additionally, little data regarding relationships between the severity of speech disorders and neurological involvement in MS, as well as the contribution of pyramidal and cerebellar functional systems on speech phenotypes, is available. Speech data were acquired from 141 MS patients with Expanded Disability Status Scale (EDSS) ranging from 1 to 6.5 and 70 matched healthy controls. Objective acoustic speech assessment including subtests on phonation, oral diadochokinesis, articulation and prosody was performed. The prevalence of dysarthria in our MS cohort was 56% while the severity was generally mild and primarily consisted of a combination of spastic and ataxic components. Prosodic-articulatory disorder presenting with monopitch, articulatory decay, excess loudness variations and slow rate was the most salient. Speech disorders reflected subclinical motor impairment with 78% accuracy in discriminating between a subgroup of asymptomatic MS (EDSS oral diadochokinesis and the 9-Hole Peg Test (r = - 0.65, p oral diadochokinesis and excess loudness variations significantly separated pure pyramidal and mixed pyramidal-cerebellar MS subgroups. Automated speech analyses may provide valuable biomarkers of disease progression in MS as dysarthria represents common and early manifestation that reflects disease disability and underlying pyramidal-cerebellar pathophysiology. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Psycholinguistic and motor theories of apraxia of speech.

    Science.gov (United States)

    Ziegler, Wolfram

    2002-11-01

    This article sketches the relationships between modern conceptions of apraxia of speech (AOS) and current models of neuromotor and neurolinguistic disorders. The first section is devoted to neurophysiological perspectives of AOS, and its relation to dysarthrias and to limb apraxia is discussed. The second section introduces the logogen model and considers AOS in relation to supramodal aspects of aphasia. In the third section, AOS with the background of psycholinguistic models of spoken language production, including the Levelt model and connectionist models, is discussed. In the fourth section, the view of AOS as a disorder of speech motor programming is discussed against the background of theories from experimental psychology. The final section considers two models of speech motor control and their relation to AOS. The article discusses the strengths and weaknesses of these approaches.

  9. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.

    Science.gov (United States)

    Hickok, Gregory; Buchsbaum, Bradley; Humphries, Colin; Muftuler, Tugan

    2003-07-01

    The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.

  10. Age differences in the motor control of speech: An fMRI study of healthy aging.

    Science.gov (United States)

    Tremblay, Pascale; Sato, Marc; Deschamps, Isabelle

    2017-05-01

Healthy aging is associated with a decline in cognitive, executive, and motor processes that are concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At very high complexity levels (high motor complexity and high sequence complexity), age differences were found in both MT data and BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging. Hum Brain Mapp 38:2751-2771, 2017. © 2017 Wiley Periodicals, Inc.

  11. Motor Speech Sequence Learning in Adults Who Stutter

    Directory of Open Access Journals (Sweden)

    Mahsa Aghazamani

    2018-04-01

Conclusion The results of this study showed that PWS improved in accuracy, reaction time, and sequence duration from day 1 to day 3. PWS also produced a greater number of errors than PNS, although the difference between the two groups was not significant; similar results were obtained for reaction time. The results further demonstrated that PWS showed slower sequence durations than PNS. Some studies have suggested that this could be because people who stutter use a control strategy to reduce the number of errors, although many studies suggest that it may indicate motor learning. According to the speech motor skills hypothesis, it can be concluded that people who stutter have limitations in motor speech learning abilities. The findings of the present study could have clinical implications for the treatment of stuttering.

  12. Are mirror neurons the basis of speech perception? Evidence from five cases with damage to the purported human mirror system

    Science.gov (United States)

    Rogalsky, Corianne; Love, Tracy; Driscoll, David; Anderson, Steven W.; Hickok, Gregory

    2013-01-01

    The discovery of mirror neurons in macaque has led to a resurrection of motor theories of speech perception. Although the majority of lesion and functional imaging studies have associated perception with the temporal lobes, it has also been proposed that the ‘human mirror system’, which prominently includes Broca’s area, is the neurophysiological substrate of speech perception. Although numerous studies have demonstrated a tight link between sensory and motor speech processes, few have directly assessed the critical prediction of mirror neuron theories of speech perception, namely that damage to the human mirror system should cause severe deficits in speech perception. The present study measured speech perception abilities of patients with lesions involving motor regions in the left posterior frontal lobe and/or inferior parietal lobule (i.e., the proposed human ‘mirror system’). Performance was at or near ceiling in patients with fronto-parietal lesions. It is only when the lesion encroaches on auditory regions in the temporal lobe that perceptual deficits are evident. This suggests that ‘mirror system’ damage does not disrupt speech perception, but rather that auditory systems are the primary substrate for speech perception. PMID:21207313

  13. A Motor Speech Assessment for Children with Severe Speech Disorders: Reliability and Validity Evidence

    Science.gov (United States)

    Strand, Edythe A.; McCauley, Rebecca J.; Weigand, Stephen D.; Stoeckel, Ruth E.; Baas, Becky S.

    2013-01-01

    Purpose: In this article, the authors report reliability and validity evidence for the Dynamic Evaluation of Motor Speech Skill (DEMSS), a new test that uses dynamic assessment to aid in the differential diagnosis of childhood apraxia of speech (CAS). Method: Participants were 81 children between 36 and 79 months of age who were referred to the…

  14. Speech Motor Programming in Apraxia of Speech: Evidence from a Delayed Picture-Word Interference Task

    Science.gov (United States)

    Mailend, Marja-Liisa; Maas, Edwin

    2013-01-01

    Purpose: Apraxia of speech (AOS) is considered a speech motor programming impairment, but the specific nature of the impairment remains a matter of debate. This study investigated 2 hypotheses about the underlying impairment in AOS framed within the Directions Into Velocities of Articulators (DIVA; Guenther, Ghosh, & Tourville, 2006) model: The…

  15. The role of human parietal area 7A as a link between sequencing in hand actions and in overt speech production

    Directory of Open Access Journals (Sweden)

    Stefan eHeim

    2012-12-01

Research on the evolutionary basis of the human language faculty has proposed the mirror neuron system as a link between motor processing and speech development. Consequently, most work has focussed on the left inferior frontal cortex, in particular Broca's region, and the left inferior parietal cortex. However, the direct link between planning of hand motor and speech actions remains to be elucidated. Thus, the present study investigated whether sequencing of hand motor actions vs. speech motor actions has a common neural denominator. For the hand motor task, 25 subjects performed single, repeated, or sequenced button presses with either the left or right hand. The speech task was analogous: the same subjects produced the syllable "po" once or repeatedly, or a sequence of different syllables (po-pi-po). Speech motor vs. hand motor effectors resulted in increased perisylvian activation including Broca's region (left area 44) and areas medially adjacent to left area 45. In contrast, common activation for sequenced vs. repeated production of button presses and syllables revealed the effector-independent involvement of left area 7A in the superior parietal lobule (SPL) in sequencing. These data demonstrate that sequencing of vocal gestures, an important precondition for ordered utterances and ultimately human speech, shares area 7A, rather than inferior parietal regions, as a common cortical module with hand motor sequencing. Interestingly, area 7A has previously also been shown to be involved in the observation of hand and non-hand actions. In combination with the literature, the present data thus suggest a distinction between area 44, which is specifically recruited for (cognitive) aspects of speech, and SPL area 7A for general aspects of motor sequencing.
In sum, the study demonstrates a yet little considered role of the superior parietal lobule in the origins of speech, and may be discussed in the light of embodiment of speech and language in the

  16. Influence of Language Load on Speech Motor Skill in Children With Specific Language Impairment.

    Science.gov (United States)

    Saletta, Meredith; Goffman, Lisa; Ward, Caitlin; Oleson, Jacob

    2018-03-15

    Children with specific language impairment (SLI) show particular deficits in the generation of sequenced action: the quintessential procedural task. Practiced imitation of a sequence may become rote and require reduced procedural memory. This study explored whether speech motor deficits in children with SLI occur generally or only in conditions of high linguistic load, whether speech motor deficits diminish with practice, and whether it is beneficial to incorporate conditions of high load to understand speech production. Children with SLI and typical development participated in a syntactic priming task during which they generated sentences (high linguistic load) and, then, practiced repeating a sentence (low load) across 3 sessions. We assessed phonetic accuracy, speech movement variability, and duration. Children with SLI produced more variable articulatory movements than peers with typical development in the high load condition. The groups converged in the low load condition. Children with SLI continued to show increased articulatory stability over 3 practice sessions. Both groups produced generated sentences with increased duration and variability compared with repeated sentences. Linguistic demands influence speech motor production. Children with SLI show reduced speech motor performance in tasks that require language generation but not when task demands are reduced in rote practice.

  17. Language and motor speech skills in children with cerebral palsy

    NARCIS (Netherlands)

    Pirila, Sija; van der Meere, Jaap; Pentikainen, Taina; Ruusu-Niemi, Pirjo; Korpela, Raija; Kilpinen, Jenni; Nieminen, Pirkko

    2007-01-01

    The aim of the study was to investigate associations between the severity of motor limitations, cognitive difficulties, language and motor speech problems in children with cerebral palsy. Also, the predictive power of neonatal cranial ultrasound findings on later outcome was investigated. For this

  18. Bridging computational approaches to speech production: The semantic–lexical–auditory–motor model (SLAM)

    Science.gov (United States)

    Hickok, Gregory

    2017-01-01

    Speech production is studied from both psycholinguistic and motor-control perspectives, with little interaction between the approaches. We assessed the explanatory value of integrating psycholinguistic and motor-control concepts for theories of speech production. By augmenting a popular psycholinguistic model of lexical retrieval with a motor-control-inspired architecture, we created a new computational model to explain speech errors in the context of aphasia. Comparing the model fits to picture-naming data from 255 aphasic patients, we found that our new model improves fits for a theoretically predictable subtype of aphasia: conduction. We discovered that the improved fits for this group were a result of strong auditory-lexical feedback activation, combined with weaker auditory-motor feedforward activation, leading to increased competition from phonologically related neighbors during lexical selection. We discuss the implications of our findings with respect to other extant models of lexical retrieval. PMID:26223468

  19. Speech motor coordination in Dutch-speaking children with DAS studied with EMMA

    NARCIS (Netherlands)

    Nijland, L.; Maassen, B.A.M.; Hulstijn, W.; Peters, H.F.M.

    2004-01-01

    Developmental apraxia of speech (DAS) is generally classified as a 'speech motor' disorder. Direct measurement of articulatory movement is, however, virtually non-existent. In the present study we investigated the coordination between articulators in children with DAS using kinematic measurements.

  20. Childhood apraxia of speech and multiple phonological disorders in Cairo-Egyptian Arabic speaking children: language, speech, and oro-motor differences.

    Science.gov (United States)

    Aziz, Azza Adel; Shohdi, Sahar; Osman, Dalia Mostafa; Habib, Emad Iskander

    2010-06-01

Childhood apraxia of speech is a neurological childhood speech-sound disorder in which the precision and consistency of movements underlying speech are impaired in the absence of neuromuscular deficits. Children with childhood apraxia of speech and those with multiple phonological disorder share some common phonological errors that can be misleading in diagnosis. This study asked whether there are significant differences in language, speech, and non-speech oral performance among children with childhood apraxia of speech, children with multiple phonological disorder, and typically developing children that could be used for differential diagnosis. 30 pre-school children between the ages of 4 and 6 years served as participants. Each of these children represented one of 3 possible subject-groups: Group 1: multiple phonological disorder; Group 2: suspected cases of childhood apraxia of speech; Group 3: control group with no communication disorder. Assessment procedures included: parent interviews; testing of non-speech oral motor skills and testing of speech skills. Data showed that children with suspected childhood apraxia of speech had significantly lower language scores only in their expressive abilities. Non-speech tasks did not identify significant differences between the childhood apraxia of speech and multiple phonological disorder groups except for those which required two sequential motor performances. In speech tasks, both consonant and vowel accuracy were significantly lower and more inconsistent in the childhood apraxia of speech group than in the multiple phonological disorder group. Syllable number, shape, and sequence accuracy differed significantly between the childhood apraxia of speech group and the other two groups. In addition, children with childhood apraxia of speech showed greater difficulty in processing prosodic features, indicating a clear need to address these variables for differential diagnosis and treatment of children with childhood apraxia of speech.

  1. Early Speech Motor Development: Cognitive and Linguistic Considerations

    Science.gov (United States)

    Nip, Ignatius S. B.; Green, Jordan R.; Marx, David B.

    2009-01-01

    This longitudinal investigation examines developmental changes in orofacial movements occurring during the early stages of communication development. The goals were to identify developmental trends in early speech motor performance and to determine how these trends differ across orofacial behaviors thought to vary in cognitive and linguistic…

  2. A novel method for assessing the development of speech motor function in toddlers with autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Katherine eSullivan

    2013-03-01

There is increasing evidence to show that indicators other than socio-cognitive abilities might predict communicative function in Autism Spectrum Disorders (ASD). A potential area of research is the development of speech motor function in toddlers. Utilizing a novel measure called ‘articulatory features’, we assess the abilities of toddlers to produce sounds at different timescales as a metric of their speech motor skills. In the current study, we examined (1) whether speech motor function differed between toddlers with ASD, developmental delay, and typical development; and (2) whether differences in speech motor function are correlated with standard measures of language in toddlers with ASD. Our results revealed significant differences between a subgroup of the ASD population with poor verbal skills and the other groups for the articulatory features associated with the shortest time scale, namely place of articulation (p < 0.05). We also found significant correlations between articulatory features and language and motor ability as assessed by the Mullen and the Vineland scales for the ASD group. Our findings suggest that articulatory features may be an additional measure of speech motor function that could potentially be useful as an early risk indicator of ASD.

  3. fMRI of the motor speech center using EPI

    International Nuclear Information System (INIS)

    Yu, In Kyu; Chang, Kee Hyun; Song, In Chan; Kim, Hong Dae; Seong, Su Ok; Jang, Hyun Jung; Han, Moon Hee; Lee, Sang Kun

    1998-01-01

The purpose of this study is to evaluate the feasibility of functional MR imaging (fMRI) using the echo planar imaging (EPI) technique for mapping the motor speech center, and to provide basic data for motor speech fMRI during internal word generation. This study involved ten young, healthy, right-handed volunteers (M:F=8:2; age: 21-27); a 1.5T whole body scanner with multislice EPI was used. Brain activation was mapped using gradient echo single shot EPI (TR/TE 3000/40, 6 slices, slice thickness mm, no interslice gap, matrix 128 x 128, and FOV 30 x 30). The paradigm consisted of a series of alternating rest and activation tasks, repeated eight times. During the rest task, each of ten Korean nouns composed of two to four syllables was shown continuously every 3 seconds. The subjects were required to see the words but not to generate speech, whereas during the activation task, they were asked to internally generate as many words as possible from each of ten non-concrete one-syllable Korean letters shown on the screen every 3 seconds. During an eight-minute period, a total of 960 axial images were acquired in each subject. Data were analyzed using the Z-score (p<0.05), and following color processing, the activated signals were overlaid on T1-weighted images. The location of the activated areas and the mean activated signal intensity were evaluated. The results of this study indicate that in most subjects, fMRI using EPI can effectively map the motor speech center. The data obtained may be useful for the clinical application of fMRI. (author). 34 refs., 3 tabs., 5 figs
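The block-design analysis in this record (rest vs. activation epochs compared with a Z-score at p < 0.05) can be sketched on synthetic data. The time series, block lengths, and signal amplitude below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic single-voxel BOLD time series: 8 alternating rest/activation
# blocks, echoing the paradigm above (amplitudes are invented for illustration).
n = 10                                              # scans per block
rest = rng.normal(100.0, 2.0, size=(8, n)).ravel()  # baseline signal
act  = rng.normal(103.0, 2.0, size=(8, n)).ravel()  # task blocks, ~3% higher

# Two-sample Z statistic for the activation-minus-rest difference.
se = np.sqrt(act.var(ddof=1) / act.size + rest.var(ddof=1) / rest.size)
z = (act.mean() - rest.mean()) / se

active = z > 1.645    # one-tailed p < 0.05 criterion on the Z-score
print(round(z, 1), active)
```

In a real analysis this statistic is computed per voxel, and the surviving voxels are color-coded and overlaid on anatomical images, as the abstract describes.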

  4. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
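The contrast the abstract draws, between a single deterministic optimum and a posterior distribution over motor commands that yields token-to-token variability in a principled way, can be illustrated with a toy one-dimensional sketch. The forward model, cost weights, and temperature below are invented for illustration and are unrelated to GEPPETO's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def acoustics(u):
    """Toy forward model: maps a scalar motor command to an acoustic value."""
    return 2.0 * u + 0.1 * u ** 2

target = 1.0
grid = np.linspace(-2.0, 2.0, 2001)                        # candidate commands
cost = (acoustics(grid) - target) ** 2 + 0.01 * grid ** 2  # accuracy + effort

# Classical optimal control: the minimum-cost command, identical every time.
u_opt = grid[np.argmin(cost)]

# Bayesian reformulation: treat exp(-cost / T) as an unnormalised posterior
# over commands and sample from it, so repeated "tokens" vary while
# concentrating near the optimum.
T = 0.05
posterior = np.exp(-cost / T)
posterior /= posterior.sum()
tokens = rng.choice(grid, size=20, p=posterior)

print(round(float(u_opt), 2), round(float(tokens.std()), 3))
```

Lowering T sharpens the posterior and collapses the sampled tokens onto the deterministic optimum; raising it increases token-to-token variability without abandoning the underlying cost function.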

  5. Quantitative assessment of motor speech abnormalities in idiopathic rapid eye movement sleep behaviour disorder.

    Science.gov (United States)

    Rusz, Jan; Hlavnička, Jan; Tykalová, Tereza; Bušková, Jitka; Ulmanová, Olga; Růžička, Evžen; Šonka, Karel

    2016-03-01

    Patients with idiopathic rapid eye movement sleep behaviour disorder (RBD) are at substantial risk for developing Parkinson's disease (PD) or related neurodegenerative disorders. Speech is an important indicator of motor function and movement coordination, and therefore may be an extremely sensitive early marker of changes due to prodromal neurodegeneration. Speech data were acquired from 16 RBD subjects and 16 age- and sex-matched healthy control subjects. Objective acoustic assessment of 15 speech dimensions representing various phonatory, articulatory, and prosodic deviations was performed. Statistical models were applied to characterise speech disorders in RBD and to estimate sensitivity and specificity in differentiating between RBD and control subjects. Some form of speech impairment was revealed in 88% of RBD subjects. Articulatory deficits were the most prominent findings in RBD. In comparison to controls, the RBD group showed significant alterations in irregular alternating motion rates (p = 0.009) and articulatory decay (p = 0.01). The combination of four distinctive speech dimensions, including aperiodicity, irregular alternating motion rates, articulatory decay, and dysfluency, led to 96% sensitivity and 79% specificity in discriminating between RBD and control subjects. Speech impairment was significantly more pronounced in RBD subjects with the motor score of the Unified Parkinson's Disease Rating Scale greater than 4 points when compared to other RBD individuals. Simple quantitative speech motor measures may be suitable for the reliable detection of prodromal neurodegeneration in subjects with RBD, and therefore may provide important outcomes for future therapy trials. Copyright © 2015 Elsevier B.V. All rights reserved.
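Sensitivity and specificity figures like the 96%/79% above come from a confusion matrix over patients and controls. A minimal sketch with a synthetic composite "speech deviation" score follows; the real study combined four acoustic dimensions, and all numbers here are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical composite impairment score per subject (higher = worse).
rbd      = rng.normal(1.0, 0.5, size=16)   # 16 RBD patients
controls = rng.normal(0.0, 0.5, size=16)   # 16 matched controls

threshold = 0.5                       # classify score > threshold as RBD
tp = int(np.sum(rbd > threshold))     # patients correctly flagged
fn = int(np.sum(rbd <= threshold))    # patients missed
tn = int(np.sum(controls <= threshold))
fp = int(np.sum(controls > threshold))

sensitivity = tp / (tp + fn)          # true-positive rate
specificity = tn / (tn + fp)          # true-negative rate
print(sensitivity, specificity)
```

Sweeping the threshold trades sensitivity against specificity; a reported sensitivity/specificity pair corresponds to one operating point on that curve.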

  6. "The Caterpillar": A Novel Reading Passage for Assessment of Motor Speech Disorders

    Science.gov (United States)

    Patel, Rupal; Connaghan, Kathryn; Franco, Diana; Edsall, Erika; Forgit, Dory; Olsen, Laura; Ramage, Lianna; Tyler, Emily; Russell, Scott

    2013-01-01

    Purpose: A review of the salient characteristics of motor speech disorders and common assessment protocols revealed the need for a novel reading passage tailored specifically to differentiate between and among the dysarthrias (DYSs) and apraxia of speech (AOS). Method: "The Caterpillar" passage was designed to provide a contemporary, easily read,…

  7. A comparison of sensory-motor activity during speech in first and second languages.

    Science.gov (United States)

    Simmonds, Anna J; Wise, Richard J S; Dhanjal, Novraj S; Leech, Robert

    2011-07-01

    A foreign language (L2) learned after childhood results in an accent. This functional neuroimaging study investigated speech in L2 as a sensory-motor skill. The hypothesis was that there would be an altered response in auditory and somatosensory association cortex, specifically the planum temporale and parietal operculum, respectively, when speaking in L2 relative to L1, independent of rate of speaking. These regions were selected for three reasons. First, an influential computational model proposes that these cortices integrate predictive feedforward and postarticulatory sensory feedback signals during articulation. Second, these adjacent regions (known as Spt) have been identified as a "sensory-motor interface" for speech production. Third, probabilistic anatomical atlases exist for these regions, to ensure the analyses are confined to sensory-motor differences between L2 and L1. The study used functional magnetic resonance imaging (fMRI), and participants produced connected overt speech. The first hypothesis was that there would be greater activity in the planum temporale and the parietal operculum when subjects spoke in L2 compared with L1, one interpretation being that there is less efficient postarticulatory sensory monitoring when speaking in the less familiar L2. The second hypothesis was that this effect would be observed in both cerebral hemispheres. Although Spt is considered to be left-lateralized, this is based on studies of covert speech, whereas overt speech is accompanied by sensory feedback to bilateral auditory and somatosensory cortices. Both hypotheses were confirmed by the results. These findings provide the basis for future investigations of sensory-motor aspects of language learning using serial fMRI studies.

  8. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  9. Speech therapy in adolescents with Down syndrome: In pursuit of communication as a fundamental human right.

    Science.gov (United States)

    Rvachew, Susan; Folden, Marla

    2018-02-01

    The achievement of speech intelligibility by persons with Down syndrome facilitates their participation in society. Denial of speech therapy services by virtue of low cognitive skills is a violation of their fundamental human rights as proclaimed in the Universal Declaration of Human Rights in general and in Article 19 in particular. Here, we describe the differential response of an adolescent with Down syndrome to three speech therapy interventions and demonstrate the use of a single subject randomisation design to identify effective treatments for children with complex communication disorders. Over six weeks, 18 speech therapy sessions were provided with treatment conditions randomly assigned to targets and sessions within weeks, specifically comparing auditory-motor integration prepractice and phonological planning prepractice to a control condition that included no prepractice. All treatments involved high intensity practice of nonsense word targets paired with tangible referents. A measure of generalisation from taught words to untaught real words in phrases revealed superior learning in the auditory-motor integration condition. The intervention outcomes may serve to justify the provision of appropriate supports to persons with Down syndrome so that they may achieve their full potential to receive information and express themselves.

  10. Functional MRI of motor speech area combined with motor stimulation during resting period

    International Nuclear Information System (INIS)

    Lim, Yeong Su; Park, Hark Hoon; Chung, Gyung Ho; Lee, Sang Yong; Chon, Su Bin; Kang, Shin Hwa

    1999-01-01

To evaluate functional MR imaging of the motor speech area with and without motor stimulation during the rest period. Nine healthy, right-handed volunteers (M:F = 7:2; age, 21-40 years) were included in this study. Brain activity was mapped using a multislice, gradient echo single shot EPI on a 1.5T MR scanner. The paradigm consisted of a series of alternating rest and activation tasks, performed six times. Each volunteer in the first study (group A) was given examples of motor stimulation to perform during the rest period, while those in the second study (group B) were given no such examples. Motor stimulation in group A was achieved by continuously flexing the five fingers of the right hand. In both groups, maximum internal word generation was achieved during the activation period. Using fMRI analysis software (Stimulate 5.0) and a cross-correlation method (background threshold, 200; correlation threshold, 0.3; ceiling, 1.0; floor, 0.3; minimal count, 3), functional images were analysed. After correlating the activated foci and a time-signal intensity curve, the activated brain cortex and number of pixels were analysed and compared between the two tasks. The t-test was used for statistical analysis. In all nine subjects in groups A and B, activation was observed in and adjacent to the left Broca's area. The mean number of activated pixels was 31.6 in group A and 27.8 in group B, a difference which was not statistically significant (P > 0.1). Activity in and adjacent to the right Broca's area was seen in seven subjects of group A and four of group B. The mean number of activated pixels was 14.9 in group A and 18 in group B. Eight of the nine volunteers in group A showed activity in the left primary motor area with negative correlation to the time-signal intensity curve. The mean number of activated pixels for this group was 17.5. In three volunteers, activation in the right primary motor area was also observed; the mean number of activated pixels in these cases was 10.0. During the rest
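The cross-correlation mapping described in this abstract (each voxel's time course correlated with the task paradigm, thresholded at 0.3, with a background signal threshold of 200) can be sketched in a few lines of numpy. This is an illustrative reconstruction on synthetic data, not the Stimulate 5.0 implementation; the boxcar regressor and signal values are assumptions.

```python
import numpy as np

def correlation_map(data, reference, corr_threshold=0.3, background_threshold=200.0):
    """Flag voxels whose time course tracks the task reference waveform.

    data: (n_voxels, n_timepoints) signal array
    reference: (n_timepoints,) boxcar regressor (0 = rest, 1 = activation)
    """
    # Exclude low-mean-signal (background) voxels, mirroring the threshold of 200
    in_brain = data.mean(axis=1) > background_threshold
    # Pearson correlation of each voxel's time series with the reference
    d = data - data.mean(axis=1, keepdims=True)
    r = reference - reference.mean()
    corr = (d @ r) / (np.linalg.norm(d, axis=1) * np.linalg.norm(r) + 1e-12)
    return in_brain & (corr > corr_threshold)

# Toy demonstration: six rest/activation cycles, as in the paradigm above
reference  = np.tile(np.repeat([0.0, 1.0], 10), 6)   # boxcar regressor
active     = 1000 + 50 * reference                   # voxel following the task
inactive   = 1000 + np.zeros_like(reference)         # flat in-brain voxel
background = 30 + 50 * reference                     # low-signal voxel, excluded
mask = correlation_map(np.vstack([active, inactive, background]), reference)
```

Only the `active` voxel survives both thresholds, so `mask` is `[True, False, False]`.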

  11. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. Compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  12. Sensory-motor relationships in speech production in post-lingually deaf cochlear-implanted adults and normal-hearing seniors: Evidence from phonetic convergence and speech imitation.

    Science.gov (United States)

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2017-07-01

Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The Influence of Psycholinguistic Variables on Articulatory Errors in Naming in Progressive Motor Speech Degeneration

    Science.gov (United States)

    Code, Chris; Tree, Jeremy; Ball, Martin

    2011-01-01

We describe an analysis of speech errors on a confrontation naming task in a man with progressive speech degeneration of 10 years' duration from Pick's disease. C.S. had a progressive non-fluent aphasia together with a motor speech impairment, and early assessment indicated some naming impairments. There was also an absence of significant…

  14. [Surgical treatment of eloquent brain area tumors using neurophysiological mapping of the speech and motor areas and conduction tracts].

    Science.gov (United States)

    Zuev, A A; Korotchenko, E N; Ivanova, D S; Pedyash, N V; Teplykh, B A

To evaluate the efficacy of intraoperative neurophysiological mapping in removing eloquent brain area tumors (EBATs). Sixty-five EBAT patients underwent surgical treatment using intraoperative neurophysiological mapping at the Pirogov National Medical and Surgical Center in the period from 2014 to 2015. On primary neurological examination, 46 (71%) patients were found to have motor deficits of varying severity. Speech disorders were diagnosed in 17 (26%) patients. Sixteen patients with concomitant or isolated lesions of the speech centers underwent awake surgery using the asleep-awake-asleep protocol. Standard neurophysiological monitoring included transcranial stimulation as well as motor and, if necessary, speech mapping. The motor and speech areas were mapped with allowance for the preoperative planning data (obtained with a navigation station) synchronized with functional MRI. In 12 (19%) patients, this revealed a broader representation of the motor and speech centers. During speech mapping, no speech disorders were detected in 7 patients; in 9 patients, stimulation of the cerebral cortex in the intended surgical area induced motor (3 patients), sensory (4), and amnesic (2) aphasia. In the total group, we identified 11 patients in whom the tumor was located near the internal capsule. Upon mapping of the conduction tracts in the internal capsule area, the stimulus strength during tumor resection was gradually decreased from 10 mA to 5 mA. Tumor resection was stopped when responses were still elicited at a stimulus strength of 5 mA, which, when compared with the navigation data, corresponded to a distance of about 5 mm from the internal capsule. Completeness of tumor resection was evaluated (contrast-enhanced MRI) in all patients on the first postoperative day. According to the control MRI data, the tumor was resected totally in 60% of patients, subtotally in 24%, and partially in 16%. In the early postoperative period, the development or

  15. Selective left, right and bilateral stimulation of subthalamic nuclei in Parkinson's disease: differential effects on motor, speech and language function.

    Science.gov (United States)

    Schulz, Geralyn M; Hosey, Lara A; Bradberry, Trent J; Stager, Sheila V; Lee, Li-Ching; Pawha, Rajesh; Lyons, Kelly E; Metman, Leo Verhagen; Braun, Allen R

    2012-01-01

Deep brain stimulation (DBS) of the subthalamic nucleus improves the motor symptoms of Parkinson's disease, but may produce a worsening of speech and language performance at the stimulation rates and amplitudes typically selected in clinical practice. The possibility that these dissociated effects might be modulated by selective stimulation of the left and right STN has never been systematically investigated. To address this issue, we analyzed motor, speech and language functions of 12 patients implanted with bilateral stimulators configured for optimal motor responses. Behavioral responses were quantified under four stimulator conditions: bilateral DBS, right-only DBS, left-only DBS and no DBS. Under bilateral and left-only DBS conditions, we observed a significant improvement in motor symptoms but a worsening of speech and language. These findings contribute to the growing body of literature demonstrating that bilateral STN DBS compromises speech and language function, and suggest that these negative effects may be principally due to left-sided stimulation. They may also have practical clinical consequences, suggesting that clinicians might optimize motor, speech and language functions by carefully adjusting left- and right-sided stimulation parameters.

  16. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    Science.gov (United States)

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on the superior temporal gyrus (STG), and motor cortex on the precentral gyrus (PreC), were responsive to visual/gestural information prior to the onset of sound, and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.

  17. Knowing beans: Human mirror mechanisms revealed through motor adaptation

    Directory of Open Access Journals (Sweden)

    Arthur M Glenberg

    2010-11-01

Human mirror mechanisms (MMs) respond during both performed and observed action and appear to underlie action goal recognition. We introduce a behavioral procedure for discovering and clarifying functional MM properties: blindfolded participants repeatedly move beans either toward or away from themselves to induce motor adaptation. Then, the bias for perceiving the direction of ambiguous visual movement in depth is measured. Bias is affected by (a) the number of beans moved, (b) movement direction, and (c) similarity of the visual stimulus to the hand used to move beans. This cross-modal adaptation pattern supports both the validity of human MMs and the functionality of our testing instrument. We also discuss related work that extends the motor adaptation paradigm to investigate contributions of MMs to speech perception and language comprehension.

  18. Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex

    Directory of Open Access Journals (Sweden)

    Kenji Ibayashi

    2018-04-01

Restoration of speech communication for locked-in patients by means of brain-computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal, or which combination of the three signal modalities, is best suited for decoding speech production remains open. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated a 7 × 13 mm electrode containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested whether the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, reaching 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs.
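The feature-combination step this abstract describes can be illustrated with a toy decoder. The feature matrices below are simulated stand-ins for the study's spike-rate and spectral-perturbation features, and the leave-one-out nearest-centroid classifier is a simple substitute for whatever decoder the authors trained; only the idea of concatenating modalities into one hybrid feature vector per trial is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
vowels = np.array([0, 1, 2, 3, 4])           # five vowel classes
labels = np.repeat(vowels, 40)               # 40 simulated trials per vowel

def simulate(n_feat, separation):
    """Hypothetical per-trial features: class-dependent mean plus unit noise."""
    return rng.normal(size=(labels.size, n_feat)) + separation * labels[:, None]

# Stand-ins for SUA spike rates and LFP/ECoG spectral perturbations
sua, lfp, ecog = simulate(8, 1.0), simulate(16, 0.3), simulate(16, 0.3)

def nearest_centroid_accuracy(features, labels):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(labels.size):
        mask = np.arange(labels.size) != i
        cents = np.array([features[mask & (labels == v)].mean(axis=0)
                          for v in vowels])
        pred = vowels[np.argmin(np.linalg.norm(cents - features[i], axis=1))]
        correct += pred == labels[i]
    return correct / labels.size

acc_sua = nearest_centroid_accuracy(sua, labels)
# Concatenating modalities into one hybrid feature vector adds information
acc_all = nearest_centroid_accuracy(np.hstack([sua, lfp, ecog]), labels)
```

With these synthetic separations the hybrid vector decodes well above the 20% chance level, mirroring the abstract's point that multi-scale signals carry complementary information.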

  19. When will a stuttering moment occur? The determining role of speech motor preparation.

    Science.gov (United States)

    Vanhoutte, Sarah; Cosyns, Marjan; van Mierlo, Pieter; Batens, Katja; Corthals, Paul; De Letter, Miet; Van Borsel, John; Santens, Patrick

    2016-06-01

The present study aimed to evaluate whether increased activity related to speech motor preparation preceding fluently produced words reflects a successful compensation strategy in stuttering. For this purpose, a contingent negative variation (CNV) was evoked during a picture naming task and measured using electro-encephalography. A CNV is a slow, negative event-related potential known to reflect motor preparation generated by the basal ganglia-thalamo-cortical (BGTC) loop. In a previous analysis, the CNV of 25 adults with developmental stuttering (AWS) was significantly increased, especially over the right hemisphere, compared to the CNV of 35 fluent speakers (FS) when both groups were speaking fluently (Vanhoutte et al., 2015; doi: 10.1016/j.neuropsychologia.2015.05.013). To elucidate whether this increase is a compensation strategy enabling fluent speech in AWS, the present analysis evaluated the CNV of 7 AWS who stuttered during this picture naming task. The CNV preceding AWS stuttered words was statistically compared to the CNV preceding AWS fluent words and FS fluent words. Though no difference emerged between the CNV of the AWS stuttered words and the FS fluent words, a significant reduction was observed when comparing the CNV preceding AWS stuttered words to the CNV preceding AWS fluent words. The latter seems to confirm the compensation hypothesis: the increased CNV prior to AWS fluent words is a successful compensation strategy, especially when it occurs over the right hemisphere. The words are produced fluently because of enlarged activity during speech motor preparation. The left CNV preceding AWS stuttered words correlated negatively with stuttering frequency and severity, suggestive of a link between the left BGTC network and the stuttering pathology. Overall, speech motor preparatory activity generated by the BGTC loop seems to have a determining role in stuttering. An important divergence between left and right hemisphere is

  20. Infant and Toddler Oral- and Manual-Motor Skills Predict Later Speech Fluency in Autism

    Science.gov (United States)

    Gernsbacher, Morton Ann; Sauer, Eve A.; Geye, Heather M.; Schweigert, Emily K.; Goldsmith, H. Hill

    2008-01-01

    Background: Spoken and gestural communication proficiency varies greatly among autistic individuals. Three studies examined the role of oral- and manual-motor skill in predicting autistic children's speech development. Methods: Study 1 investigated whether infant and toddler oral- and manual-motor skills predict middle childhood and teenage speech…

  1. Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.

    Science.gov (United States)

    Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy

    2017-08-22

    To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days of the week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits with a further two-thirds rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.

  2. Learning trajectories for speech motor performance in children with specific language impairment.

    Science.gov (United States)

    Richtsmeier, Peter T; Goffman, Lisa

    2015-01-01

    Children with specific language impairment (SLI) often perform below expected levels, including on tests of motor skill and in learning tasks, particularly procedural learning. In this experiment we examined the possibility that children with SLI might also have a motor learning deficit. Twelve children with SLI and thirteen children with typical development (TD) produced complex nonwords in an imitation task. Productions were collected across three blocks, with the first and second blocks on the same day and the third block one week later. Children's lip movements while producing the nonwords were recorded using an Optotrak camera system. Movements were then analyzed for production duration and stability. Movement analyses indicated that both groups of children produced shorter productions in later blocks (corroborated by an acoustic analysis), and the rate of change was comparable for the TD and SLI groups. A nonsignificant trend for more stable productions was also observed in both groups. SLI is regularly accompanied by a motor deficit, and this study does not dispute that. However, children with SLI learned to make more efficient productions at a rate similar to their peers with TD, revealing some modification of the motor deficit associated with SLI. The reader will learn about deficits commonly associated with specific language impairment (SLI) that often occur alongside the hallmark language deficit. The authors present an experiment showing that children with SLI improved speech motor performance at a similar rate compared to typically developing children. The implication is that speech motor learning is not impaired in children with SLI. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Speech-Based Human and Service Robot Interaction: An Application for Mexican Dysarthric People

    Directory of Open Access Journals (Sweden)

    Santiago Omar Caballero Morales

    2013-01-01

Dysarthria is a motor speech disorder due to weakness or poor coordination of the speech muscles. This condition can be caused by a stroke, traumatic brain injury, or by a degenerative neurological disease. Commonly, people with this disorder also have muscular dystrophy, which restricts their use of switches or keyboards for communication or control of assistive devices (i.e., an electric wheelchair or a service robot). In this case, speech recognition is an attractive alternative for interaction with and control of service robots, despite the difficulty of achieving robust recognition performance. In this paper we present a speech recognition system for human and service robot interaction for Mexican Spanish dysarthric speakers. The core of the system consisted of a Speaker Adaptive (SA) recognition system trained with normal speech. Features such as on-line control of the language model perplexity and the addition of vocabulary contribute to high recognition performance. Others, such as assessment and text-to-speech (TTS) synthesis, contribute to a more complete interaction with a service robot. Live tests were performed with two mild dysarthric speakers, achieving recognition accuracies of 90–95% for spontaneous speech and 95–100% of accomplished simulated service robot tasks.

  4. Research Paper: Investigation of Acoustic Characteristics of Speech Motor Control in Children Who Stutter and Children Who Do Not Stutter

    Directory of Open Access Journals (Sweden)

    Fatemeh Fakar Gharamaleki

    2016-11-01

Objective Stuttering is a developmental disorder of speech fluency with unknown causes. One of the proposed theories in this field is deficits in speech motor control, which are associated with impaired control, timing, and coordination of the speech muscles. Fundamental frequency, fundamental frequency range, intensity, intensity range, and voice onset time are the most important acoustic components that are often used for indirect evaluation of the physiological functions underlying the mechanisms of speech motor control. The purpose of this investigation was to compare some of the acoustic characteristics of speech motor control in children who stutter and children who do not stutter. Materials & Methods This research is a descriptive-analytic and cross-sectional comparative study. A total of 25 Azari-Persian bilingual boys who stutter (stutters group) and 23 Azari-Persian bilingual and 21 Persian monolingual boys who do not stutter (non-stutters group), in the age range of 6 to 10 years, participated in this study. Children participated in /a/ and /i/ vowel prolongation and carrier phrase repetition tasks for the analysis of some of their acoustic characteristics, including fundamental frequency, fundamental frequency range, intensity, intensity range, and voice onset time. The PRAAT software was used for acoustic analysis. SPSS software (version 17), one-way ANOVA, and the Kruskal-Wallis test were used for analyzing the data. Results The results indicated that there were no significant differences between the stutters and non-stutters groups (P > 0.05) with respect to the acoustic features of speech motor control. Conclusion No significant group differences were observed in any of the dependent variables reported in this study. Thus, the results of this research do not support the notion of aberrant speech motor control in children who stutter.
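The acoustic measures listed in this record (fundamental frequency and related values) are typically extracted with PRAAT; a minimal autocorrelation-based f0 estimator conveys the core idea. This is a toy stand-in for PRAAT's far more robust pitch algorithm, run on a synthetic vowel; the 75-500 Hz search range is an assumed default, not taken from the study.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) from the first autocorrelation
    peak inside the [fmin, fmax] search range."""
    signal = signal - signal.mean()
    # Autocorrelation for non-negative lags only
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    return sr / lag

sr = 16000
t = np.arange(0, 0.1, 1 / sr)
vowel = np.sin(2 * np.pi * 220 * t)   # synthetic sustained vowel at 220 Hz
f0 = estimate_f0(vowel, sr)           # close to 220 Hz
```

Given several such frames, f0 range is simply `max - min` of the frame estimates; intensity measures would come from frame RMS energy, and voice onset time requires segmenting the burst and voicing onsets, which is beyond this sketch.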

  5. Temporal predictive mechanisms modulate motor reaction time during initiation and inhibition of speech and hand movement.

    Science.gov (United States)

    Johari, Karim; Behroozmand, Roozbeh

    2017-08-01

    Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally-predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture temporal dynamics of sensory cues in order to produce faster movements in responses to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Progressive Apraxia of Speech as a Sign of Motor Neuron Disease

    Science.gov (United States)

    Duffy, Joseph R.; Peach, Richard K.; Strand, Edythe A.

    2007-01-01

    Purpose: To document and describe in detail the occurrence of apraxia of speech (AOS) in a group of individuals with a diagnosis of motor neuron disease (MND). Method: Seven individuals with MND and AOS were identified from among 80 patients with a variety of neurodegenerative diseases and AOS (J. R. Duffy, 2006). The history, presenting…

  7. A test of speech motor control on word level productions: The SPA Test (Dutch: Screening Pittige Articulatie)

    NARCIS (Netherlands)

    P. Dejonckere; F. Wijnen; Dr. Yvonne van Zaalen

    2009-01-01

    The primary objective of this article is to study whether an assessment instrument specifically designed to assess speech motor control on word level productions would be able to add differential diagnostic speech characteristics between people who clutter and people who stutter. It was hypothesized

  8. Speech networks at rest and in action: interactions between functional brain networks controlling speech production

    Science.gov (United States)

    Fuertinger, Stefan

    2015-01-01

Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between brain regions and neural networks remains limited. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitating the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although the networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlain a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of the speech production network. PMID:25673742

  9. A wireless brain-machine interface for real-time speech synthesis.

    Directory of Open Access Journals (Sweden)

    Frank H Guenther

    2009-12-01

Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech. Neural signals recorded by a Neurotrophic Electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome, characterized by near-total paralysis with spared cognition, were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. Accuracy of the volunteer's vowel productions with the synthesizer improved quickly with practice, with a 25% improvement in average hit rate (from 45% to 70%) and a 46% decrease in average endpoint error from the first to the last block of a three-vowel task. Our results support the feasibility of neural prostheses that may have the potential to provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also provide an initial glimpse into the functional properties of neurons in speech motor cortical areas.
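The Kalman filter-based decoding stage can be illustrated generically. Below is a textbook linear Kalman predict/update cycle on toy data; all matrices are placeholders, not the study's trained mapping between neural firing and synthesizer parameters.

```python
import numpy as np

def kalman_step(x, P, z, A, W, H, Q):
    """One predict/update cycle of a linear Kalman filter.

    x, P : current state estimate (e.g., formant-like synthesizer
           parameters) and its covariance
    z    : new observation (e.g., a vector of neural firing rates)
    A, W : state transition model and its process-noise covariance
    H, Q : observation model (state -> expected firing) and its
           measurement-noise covariance
    """
    # Predict the next state from the dynamics model
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Correct the prediction with the new observation via the Kalman gain
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy run: with a fixed observation, the estimate converges to it
x, P = np.zeros(2), np.eye(2)
A, W = np.eye(2), 0.01 * np.eye(2)
H, Q = np.eye(2), 0.1 * np.eye(2)
target = np.array([2.0, -1.0])
for _ in range(50):
    x, P = kalman_step(x, P, target, A, W, H, Q)
```

In a real decoder, `A`, `W`, `H`, and `Q` would be fit from training data pairing neural activity with intended speech parameters, and each update would run within the 50 ms feedback loop the abstract describes.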

  10. Motor cortex hand area and speech: implications for the development of language.

    Science.gov (United States)

    Meister, Ingo Gerrit; Boroojerdi, Babak; Foltys, Henrik; Sparing, Roland; Huber, Walter; Töpper, Rudolf

    2003-01-01

    Recently a growing body of evidence has suggested that a functional link exists between the hand motor area of the language dominant hemisphere and the regions subserving language processing. We examined the excitability of the hand motor area and the leg motor area during reading aloud and during non-verbal oral movements using transcranial magnetic stimulation (TMS). During reading aloud, but not before or afterwards, excitability was increased in the hand motor area of the dominant hemisphere. This reading effect was found to be independent of the duration of speech. No such effect could be found in the contralateral hemisphere. The excitability of the leg area of the motor cortex remained unchanged during reading aloud. The excitability during non-verbal oral movements was slightly increased in both hemispheres. Our results are consistent with previous findings and may indicate a specific functional connection between the hand motor area and the cortical language network.

  11. The Tuning of Human Neonates' Preference for Speech

    Science.gov (United States)

    Vouloumanos, Athena; Hauser, Marc D.; Werker, Janet F.; Martin, Alia

    2010-01-01

    Human neonates prefer listening to speech compared to many nonspeech sounds, suggesting that humans are born with a bias for speech. However, neonates' preference may derive from properties of speech that are not unique but instead are shared with the vocalizations of other species. To test this, thirty neonates and sixteen 3-month-olds were…

  12. Hemispheric speech lateralisation in the developing brain is related to motor praxis ability

    Directory of Open Access Journals (Sweden)

    Jessica C. Hodgson

    2016-12-01

    Full Text Available Commonly displayed functional asymmetries such as hand dominance and hemispheric speech lateralisation are well researched in adults. However, there is debate about when such functions become lateralised in the typically developing brain. This study examined whether patterns of speech laterality and hand dominance were related and whether they varied with age in typically developing children. 148 children aged 3–10 years performed an electronic pegboard task to determine hand dominance; a subset of 38 of these children also underwent functional transcranial Doppler (fTCD) imaging to derive a lateralisation index (LI) for hemispheric activation during speech production using an animation description paradigm. There was no main effect of age on the speech laterality scores; however, younger children showed a greater difference in performance between their hands on the motor task. Furthermore, this between-hand performance difference significantly interacted with direction of speech laterality, with a smaller between-hand difference relating to increased left-hemisphere activation. These data show that both handedness and speech lateralisation appear relatively determined by age 3, but that atypical cerebral lateralisation is linked to greater performance differences in hand skill, irrespective of age. Results are discussed in terms of the common neural systems underpinning handedness and speech lateralisation.
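
A lateralisation index of the kind derived from fTCD is conventionally computed from the difference in percentage blood-flow-velocity change between the two middle cerebral arteries during the task window. The sketch below assumes that convention; the baseline and window handling are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

# Assumed convention: LI = mean(%change_left - %change_right) over the
# task window; positive LI indicates left-hemisphere dominance.

def lateralisation_index(v_left, v_right, baseline, window):
    """Return LI from two velocity time series (arbitrary units)."""
    b0, b1 = baseline
    w0, w1 = window
    # express each channel as % change relative to its own baseline mean
    dl = 100 * (v_left[w0:w1] / v_left[b0:b1].mean() - 1)
    dr = 100 * (v_right[w0:w1] / v_right[b0:b1].mean() - 1)
    return float((dl - dr).mean())
```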

  14. Telephone based speech interfaces in the developing world, from the perspective of human-human communication

    CSIR Research Space (South Africa)

    Naidoo, S

    2005-07-01

    Full Text Available Until recently, before computer systems were able to synthesize or recognize speech, speech was a capability unique to humans. The human brain has developed to differentiate between human speech and other audio occurrences. Therefore, the slowly-evolving... human brain reacts in certain ways to voice stimuli, and has certain expectations regarding communication by voice. Nass affirms that the human brain operates using the same mechanisms when interacting with speech interfaces as when conversing...

  15. Corollary discharge provides the sensory content of inner speech.

    Science.gov (United States)

    Scott, Mark

    2013-09-01

    Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.

  16. Monkey Lipsmacking Develops Like the Human Speech Rhythm

    Science.gov (United States)

    Morrill, Ryan J.; Paukner, Annika; Ferrari, Pier F.; Ghazanfar, Asif A.

    2012-01-01

    Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved "de novo" in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial…

  17. Nonspeech Oral Motor Treatment Issues Related to Children with Developmental Speech Sound Disorders

    Science.gov (United States)

    Ruscello, Dennis M.

    2008-01-01

    Purpose: This article examines nonspeech oral motor treatments (NSOMTs) in the population of clients with developmental speech sound disorders. NSOMTs are a collection of nonspeech methods and procedures that claim to influence tongue, lip, and jaw resting postures; increase strength; improve muscle tone; facilitate range of motion; and develop…

  18. The impact of threat and cognitive stress on speech motor control in people who stutter.

    Science.gov (United States)

    Lieshout, Pascal van; Ben-David, Boaz; Lipski, Melinda; Namasivayam, Aravind

    2014-06-01

    In the present study, an Emotional Stroop and a Classical Stroop task were used to separate the effect of threat content and cognitive stress from the phonetic features of words on motor preparation and execution processes. A group of 10 people who stutter (PWS) and 10 matched people who do not stutter (PNS) repeated colour names for threat content words and neutral words, as well as for traditional Stroop stimuli. Data collection included speech acoustics and movement data from upper lip and lower lip using 3D EMA. PWS in both tasks were slower to respond and showed smaller upper lip movement ranges than PNS. For the Emotional Stroop task only, PWS were found to show larger inter-lip phase differences compared to PNS. General threat words were executed with faster lower lip movements (larger range and shorter duration) in both groups, but only PWS showed a change in upper lip movements. For stutter-specific threat words, both groups showed a more variable lip coordination pattern, but only PWS showed a delay in reaction time compared to neutral words. Individual stuttered words showed no effects. Both groups showed a classical Stroop interference effect in reaction time but no changes in motor variables. This study shows differential motor responses in PWS compared to controls for specific threat words. Cognitive stress was not found to affect stuttering individuals differently from controls, nor was its impact found to spread to motor execution processes. After reading this article, the reader will be able to: (1) discuss the importance of understanding how threat content influences speech motor control in people who stutter and non-stuttering speakers; (2) discuss the need to use tasks like the Emotional Stroop and Regular Stroop to separate phonetic (word-bound) based impact on fluency from other factors in people who stutter; and (3) describe the role of anxiety and cognitive stress on speech motor processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Selective Attention Enhances Beta-Band Cortical Oscillation to Speech under "Cocktail-Party" Listening Conditions.

    Science.gov (United States)

    Gao, Yayue; Wang, Qian; Ding, Yu; Wang, Changming; Li, Haifeng; Wu, Xihong; Qu, Tianshu; Li, Liang

    2017-01-01

    Human listeners are able to selectively attend to target speech in a noisy environment with multiple-people talking. Using recordings of scalp electroencephalogram (EEG), this study investigated how selective attention facilitates the cortical representation of target speech under a simulated "cocktail-party" listening condition with speech-on-speech masking. The result shows that the cortical representation of target-speech signals under the multiple-people talking condition was specifically improved by selective attention relative to the non-selective-attention listening condition, and the beta-band activity was most strongly modulated by selective attention. Moreover, measured with the Granger Causality value, selective attention to the single target speech in the mixed-speech complex enhanced the following four causal connectivities for the beta-band oscillation: the ones (1) from site FT7 to the right motor area, (2) from the left frontal area to the right motor area, (3) from the central frontal area to the right motor area, and (4) from the central frontal area to the right frontal area. However, the selective-attention-induced change in beta-band causal connectivity from the central frontal area to the right motor area, but not other beta-band causal connectivities, was significantly correlated with the selective-attention-induced change in the cortical beta-band representation of target speech. These findings suggest that under the "cocktail-party" listening condition, the beta-band oscillation in EEGs to target speech is specifically facilitated by selective attention to the target speech that is embedded in the mixed-speech complex. The selective attention-induced unmasking of target speech may be associated with the improved beta-band functional connectivity from the central frontal area to the right motor area, suggesting a top-down attentional modulation of the speech-motor process.
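
The Granger-causality measure invoked above rests on a simple idea: signal x "Granger-causes" signal y if x's past improves the prediction of y beyond what y's own past provides. A minimal bivariate version, assuming ordinary least squares, a fixed lag order, and a log variance-ratio statistic (none of these taken from the study), looks like this:

```python
import numpy as np

# Toy pairwise Granger causality: fit y on its own lags (restricted) and
# on its own lags plus x's lags (full), then compare residual variances.
# Lag order and the statistic are conventional illustrative choices.

def granger_stat(x, y, lags=2):
    """Return log(var_restricted / var_full) for the direction x -> y."""
    T = len(y)
    Y = y[lags:]
    Xr = np.column_stack([y[lags - k: T - k] for k in range(1, lags + 1)])
    Xf = np.column_stack([Xr] + [x[lags - k: T - k] for k in range(1, lags + 1)])

    def resid_var(X):
        X1 = np.column_stack([np.ones(len(Y)), X])   # add intercept
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        e = Y - X1 @ beta
        return float(e @ e) / len(e)

    return float(np.log(resid_var(Xr) / resid_var(Xf)))
```

A value near zero means x's past adds nothing; in the study's framing, a reliably positive value from a frontal channel to a motor channel would indicate a directed (top-down) influence.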

  20. Speech Motor Sequence Learning: Acquisition and Retention in Parkinson Disease and Normal Aging

    Science.gov (United States)

    Whitfield, Jason A.; Goberman, Alexander M.

    2017-01-01

    Purpose: The aim of the current investigation was to examine speech motor sequence learning in neurologically healthy younger adults, neurologically healthy older adults, and individuals with Parkinson disease (PD) over a 2-day period. Method: A sequential nonword repetition task was used to examine learning over 2 days. Participants practiced a…

  1. Using the Self-Select Paradigm to Delineate the Nature of Speech Motor Programming

    Science.gov (United States)

    Wright, David L.; Robin, Don A.; Rhee, Jooyhun; Vaculin, Amber; Jacks, Adam; Guenther, Frank H.; Fox, Peter T.

    2009-01-01

    Purpose: The authors examined the involvement of 2 speech motor programming processes identified by S. T. Klapp (1995, 2003) during the articulation of utterances differing in syllable and sequence complexity. According to S. T. Klapp, 1 process, INT, resolves the demands of the programmed unit, whereas a second process, SEQ, oversees the serial…

  3. Characterizing Intonation Deficit in Motor Speech Disorders: An Autosegmental-Metrical Analysis of Spontaneous Speech in Hypokinetic Dysarthria, Ataxic Dysarthria, and Foreign Accent Syndrome

    Science.gov (United States)

    Lowit, Anja; Kuschmann, Anja

    2012-01-01

    Purpose: The autosegmental-metrical (AM) framework represents an established methodology for intonational analysis in unimpaired speaker populations but has found little application in describing intonation in motor speech disorders (MSDs). This study compared the intonation patterns of unimpaired participants (CON) and those with Parkinson's…

  4. Influence of Language Load on Speech Motor Skill in Children with Specific Language Impairment

    Science.gov (United States)

    Saletta, Meredith; Goffman, Lisa; Ward, Caitlin; Oleson, Jacob

    2018-01-01

    Purpose: Children with specific language impairment (SLI) show particular deficits in the generation of sequenced action--the quintessential procedural task. Practiced imitation of a sequence may become rote and require reduced procedural memory. This study explored whether speech motor deficits in children with SLI occur generally or only in…

  5. A multigenerational family study of oral and hand motor sequencing ability provides evidence for a familial speech sound disorder subtype

    Science.gov (United States)

    Peter, Beate; Raskind, Wendy H.

    2011-01-01

    Purpose: To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method: Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results: Measures of repetitive and alternating motor speed were correlated within and between the two motor systems. Repetitive and alternating motor speeds increased in children and decreased in adults as a function of age. In two families with children who had severe speech deficits consistent with disrupted praxis, slowed alternating, but not repetitive, oral movements characterized most of the affected children and adults with a history of SSD, and slowed alternating hand movements were seen in some of the biologically related participants as well. Conclusion: Results are consistent with a familial motor-based SSD subtype with incomplete penetrance, motivating new clinical questions about motor-based intervention not only in the oral but also the limb system. PMID:21909176

  6. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  7. Vocal effort modulates the motor planning of short speech structures

    Science.gov (United States)

    Taitz, Alan; Shalom, Diego E.; Trevisan, Marcos A.

    2018-05-01

    Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.

  8. Five Decades of Research in Speech Motor Control: What Have We Learned, and Where Should We Go from Here?

    Science.gov (United States)

    Perkell, Joseph S.

    2013-01-01

    Purpose: The author presents a view of research in speech motor control over the past 5 decades, as observed from within Ken Stevens's Speech Communication Group (SCG) in the Research Laboratory of Electronics at MIT. Method: The author presents a limited overview of some important developments and discoveries. The perspective is based…

  9. The Effects of Divided Attention on Speech Motor, Verbal Fluency, and Manual Task Performance

    Science.gov (United States)

    Dromey, Christopher; Shim, Erin

    2008-01-01

    Purpose: The goal of this study was to evaluate aspects of the "functional distance hypothesis," which predicts that tasks regulated by brain networks in closer anatomic proximity will interfere more with each other than tasks controlled by spatially distant regions. Speech, verbal fluency, and manual motor tasks were examined to ascertain whether…

  10. A Review of Standardized Tests of Nonverbal Oral and Speech Motor Performance in Children

    Science.gov (United States)

    McCauley, Rebecca J.; Strand, Edythe A.

    2008-01-01

    Purpose: To review the content and psychometric characteristics of 6 published tests currently available to aid in the study, diagnosis, and treatment of motor speech disorders in children. Method: We compared the content of the 6 tests and critically evaluated the degree to which important psychometric characteristics support the tests' use for…

  11. What happens to the motor theory of perception when the motor system is damaged?

    Science.gov (United States)

    Stasenko, Alena; Garcea, Frank E; Mahon, Bradford Z

    2013-09-01

    Motor theories of perception posit that motor information is necessary for successful recognition of actions. Perhaps the most well known of this class of proposals is the motor theory of speech perception, which argues that speech recognition is fundamentally a process of identifying the articulatory gestures (i.e. motor representations) that were used to produce the speech signal. Here we review neuropsychological evidence from patients with damage to the motor system, in the context of motor theories of perception applied to both manual actions and speech. Motor theories of perception predict that patients with motor impairments will have impairments for action recognition. Contrary to that prediction, the available neuropsychological evidence indicates that recognition can be spared despite profound impairments to production. These data falsify strong forms of the motor theory of perception, and frame new questions about the dynamical interactions that govern how information is exchanged between input and output systems.

  12. Human phoneme recognition depending on speech-intrinsic variability.

    Science.gov (United States)

    Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger

    2010-11-01

    The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
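
The "spectral level distance" predictor mentioned above can be approximated as the average dB difference between the band levels of a speech segment and those of the masker's long-term spectrum. The sketch below assumes plain FFT periodograms and equal-width bands; the study's exact metric may differ.

```python
import numpy as np

# Illustrative band-level comparison: positive output means the segment
# sits above the noise spectrum on average (in dB). Band count and the
# use of raw periodograms are assumptions for this sketch.

def spectral_level_distance(segment, noise, n_bands=16):
    """Mean dB level difference between two equal-length signals."""
    ps = np.abs(np.fft.rfft(segment)) ** 2
    pn = np.abs(np.fft.rfft(noise)) ** 2
    edges = np.linspace(0, len(ps), n_bands + 1, dtype=int)

    def band_db(p):
        return np.array([10 * np.log10(p[a:b].mean() + 1e-12)
                         for a, b in zip(edges[:-1], edges[1:])])

    return float((band_db(ps) - band_db(pn)).mean())
```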

  13. Neuronal basis of speech comprehension.

    Science.gov (United States)

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, the structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration and a ventral stream for extracting meaning as well as for processing sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Developmental apraxia of speech in children. Quantitative assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  15. Using the Electrocorticographic Speech Network to Control a Brain-Computer Interface in Humans

    Science.gov (United States)

    Leuthardt, Eric C.; Gaona, Charles; Sharma, Mohit; Szrama, Nicholas; Roland, Jarod; Freudenberg, Zac; Solis, Jamie; Breshears, Jonathan; Schalk, Gerwin

    2013-01-01

    Electrocorticography (ECoG) has emerged as a new signal platform for brain-computer interface (BCI) systems. Classically, the cortical physiology that has been commonly investigated and utilized for device control in humans has been brain signals from sensorimotor cortex. Hence, it was unknown whether other neurophysiological substrates, such as the speech network, could be used to further improve on or complement existing motor-based control paradigms. We demonstrate here for the first time that ECoG signals associated with different overt and imagined phoneme articulation can enable invasively monitored human patients to control a one-dimensional computer cursor rapidly and accurately. This phonetic content was distinguishable within higher gamma frequency oscillations and enabled users to achieve final target accuracies between 68 and 91% within 15 minutes. Additionally, one of the patients achieved robust control using recordings from a microarray consisting of 1 mm spaced microwires. These findings suggest that the cortical network associated with speech could provide an additional cognitive and physiologic substrate for BCI operation and that these signals can be acquired from a cortical array that is small and minimally invasive. PMID:21471638
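
The control signal in such an ECoG BCI is, at its core, band power in the high-gamma range mapped onto cursor movement. A minimal sketch under assumed band edges, window length, and a linear mapping (not the authors' decoder):

```python
import numpy as np

# Assumed scheme: high-gamma power from one ECoG channel, compared to a
# baseline level, drives 1-D cursor velocity. Band edges and the linear
# gain are illustrative, not taken from the study.

def bandpower(sig, fs, lo=70.0, hi=110.0):
    """Mean power of `sig` in the [lo, hi] Hz band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].mean())

def cursor_velocity(sig, fs, baseline_power, gain=1.0):
    """Positive power deviation from baseline moves the cursor forward."""
    return gain * (bandpower(sig, fs) - baseline_power)
```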

  16. Motor Proficiency of 6- to 9-Year-Old Children with Speech and Language Problems

    Science.gov (United States)

    Visscher, Chris; Houwen, Suzanne; Moolenaar, Ben; Lyons, Jim; Scherder, Erik J. A.; Hartman, Esther

    2010-01-01

    Aim: This study compared the gross motor skills of school-age children (mean age 7y 8mo, range 6-9y) with developmental speech and language disorders (DSLDs; n = 105; 76 males, 29 females) and typically developing children (n = 105; 76 males, 29 females). The relationship between the performance parameters and the children's age was investigated…

  17. The equilibrium point hypothesis and its application to speech motor control.

    Science.gov (United States)

    Perrier, P; Ostry, D J; Laboissière, R

    1996-04-01

    In this paper, we address a number of issues in speech research in the context of the equilibrium point hypothesis of motor control. The hypothesis suggests that movements arise from shifts in the equilibrium position of the limb or the speech articulator. The equilibrium is a consequence of the interaction of central neural commands, reflex mechanisms, muscle properties, and external loads, but it is under the control of central neural commands. These commands act to shift the equilibrium via centrally specified signals acting at the level of the motoneurone (MN) pool. In the context of a model of sagittal plane jaw and hyoid motion based on the lambda version of the equilibrium point hypothesis, we consider the implications of this hypothesis for the notion of articulatory targets. We suggest that simple linear control signals may underlie smooth articulatory trajectories. We explore as well the phenomenon of intraarticulator coarticulation in jaw movement. We suggest that, even when no account is taken of upcoming context, apparent anticipatory changes in movement amplitude and duration may arise due to dynamics. We also present a number of simulations that show in different ways how variability in measured kinematics can arise in spite of constant magnitude speech control signals.
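
The lambda-model idea described above is easy to illustrate: model the articulator as a damped mass-spring whose equilibrium lambda(t) is shifted by a simple linear ramp; the resulting movement is smooth even though the control signal is piecewise linear. All parameter values below are invented for illustration and are not the authors' model.

```python
import numpy as np

# Toy equilibrium-point simulation: a jaw-like articulator obeying
# m*x'' = -k*(x - lambda(t)) - b*x', integrated with explicit Euler.
# k, b, m, and the ramp are illustrative assumptions.

def simulate_ep(lam, k=200.0, b=20.0, m=1.0, dt=0.001):
    """Return the articulator trajectory for an equilibrium series lam."""
    x, v = lam[0], 0.0
    traj = []
    for target in lam:
        a = (-k * (x - target) - b * v) / m   # spring + damping
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Linear (ramp) control signal: shift equilibrium from 0 to 1 over 100 ms.
t = np.arange(0.0, 0.6, 0.001)
lam = np.clip(t / 0.1, 0.0, 1.0)
traj = simulate_ep(lam)
```

The trajectory lags the ramp and settles smoothly onto the new equilibrium, which is the paper's point that simple linear control signals can yield smooth articulatory movement.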

  18. Speech and nonspeech: What are we talking about?

    Science.gov (United States)

    Maas, Edwin

    2017-08-01

    Understanding of the behavioural, cognitive and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system and (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control. The purpose of this paper is to stimulate thinking about speech production, its disorders and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims - including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.

  19. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  20. Utility of TMS to understand the neurobiology of speech

    Directory of Open Access Journals (Sweden)

    Takenobu eMurakami

    2013-07-01

    Full Text Available According to a traditional view, speech perception and production are processed largely separately in sensory and motor brain areas. Recent psycholinguistic and neuroimaging studies provide novel evidence that the sensory and motor systems dynamically interact in speech processing, by demonstrating that speech perception and imitation share regional brain activations. However, the exact nature and mechanisms of these sensorimotor interactions are not completely understood yet. Transcranial magnetic stimulation (TMS) has often been used in the cognitive neurosciences, including speech research, as a complementary technique to behavioral and neuroimaging studies. Here we provide an up-to-date review focusing on TMS studies that explored speech perception and imitation. Single-pulse TMS of the primary motor cortex (M1) demonstrated a speech-specific and somatotopically specific increase of excitability of the M1 lip area during speech perception (listening to speech or lip reading). A paired-coil TMS approach showed increases in effective connectivity from brain regions that are involved in speech processing to the M1 lip area when listening to speech. TMS in virtual lesion mode applied to speech processing areas modulated performance of phonological recognition and imitation of perceived speech. In summary, TMS is an innovative tool to investigate processing of speech perception and imitation. TMS studies have provided strong evidence that the sensory system is critically involved in mapping sensory input onto motor output and that the motor system plays an important role in speech perception.

  1. Vowel production, speech-motor control, and phonological encoding in people who are lesbian, bisexual, or gay, and people who are not

    Science.gov (United States)

    Munson, Benjamin; Deboe, Nancy

    2003-10-01

    A recent study (Pierrehumbert, Bent, Munson, and Bailey, submitted) found differences in vowel production between people who are lesbian, bisexual, or gay (LBG) and people who are not. The specific differences (more fronted /u/ and /a/ in the non-LB women; an overall more-contracted vowel space in the non-gay men) were not amenable to an interpretation based on simple group differences in vocal-tract geometry. Rather, they suggested that differences were either due to group differences in some other skill, such as motor control or phonological encoding, or learned. This paper expands on this research by examining vowel production, speech-motor control (measured by diadochokinetic rates), and phonological encoding (measured by error rates in a tongue-twister task) in people who are LBG and people who are not. Analyses focus on whether the findings of Pierrehumbert et al. (submitted) are replicable, and whether group differences in vowel production are related to group differences in speech-motor control or phonological encoding. To date, 20 LB women, 20 non-LB women, 7 gay men, and 7 non-gay men have participated. Preliminary analyses suggest that there are no group differences in speech motor control or phonological encoding, suggesting that the earlier findings of Pierrehumbert et al. reflected learned behaviors.

  2. Human Superior Temporal Gyrus Organization of Spectrotemporal Modulation Tuning Derived from Speech Stimuli.

    Science.gov (United States)

    Hullett, Patrick W; Hamilton, Liberty S; Mesgarani, Nima; Schreiner, Christoph E; Chang, Edward F

    2016-02-10

    The human superior temporal gyrus (STG) is critical for speech perception, yet the organization of spectrotemporal processing of speech within the STG is not well understood. Here, to characterize the spatial organization of spectrotemporal processing of speech across human STG, we use high-density cortical surface field potential recordings while participants listened to natural continuous speech. While synthetic broad-band stimuli did not yield sustained activation of the STG, spectrotemporal receptive fields could be reconstructed from vigorous responses to speech stimuli. We find that the human STG displays a robust anterior-posterior spatial distribution of spectrotemporal tuning in which the posterior STG is tuned for temporally fast varying speech sounds that have relatively constant energy across the frequency axis (low spectral modulation) while the anterior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variation across the frequency axis (high spectral modulation). This work illustrates organization of spectrotemporal processing in the human STG, and illuminates processing of ethologically relevant speech signals in a region of the brain specialized for speech perception. Considerable evidence has implicated the human superior temporal gyrus (STG) in speech processing. However, the gross organization of spectrotemporal processing of speech within the STG is not well characterized. Here we use natural speech stimuli and advanced receptive field characterization methods to show that spectrotemporal features within speech are well organized along the posterior-to-anterior axis of the human STG. These findings demonstrate robust functional organization based on spectrotemporal modulation content, and illustrate that much of the encoded information in the STG represents the physical acoustic properties of speech stimuli. Copyright © 2016 the authors 0270-6474/16/362014-13$15.00/0.

  3. Effects of human fatigue on speech signals

    Science.gov (United States)

    Stamoulis, Catherine

    2004-05-01

    Cognitive performance may be significantly affected by fatigue. In the case of critical personnel, such as pilots, monitoring human fatigue is essential to ensure safety and success of a given operation. One of the modalities that may be used for this purpose is speech, which is sensitive to respiratory changes and increased muscle tension of vocal cords, induced by fatigue. Age, gender, vocal tract length, physical and emotional state may significantly alter speech intensity, duration, rhythm, and spectral characteristics. In addition to changes in speech rhythm, fatigue may also affect the quality of speech, such as articulation. In a noisy environment, detecting fatigue-related changes in speech signals, particularly subtle changes at the onset of fatigue, may be difficult. Therefore, in a performance-monitoring system, speech parameters which are significantly affected by fatigue need to be identified and extracted from input signals. For this purpose, a series of experiments was performed under slowly varying cognitive load conditions and at different times of the day. The results of the data analysis are presented here.
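As a hedged illustration of the kind of parameter extraction such a monitoring system would require (not the author's actual analysis pipeline), the sketch below computes three coarse global features of a waveform: duration, RMS intensity, and zero-crossing rate, the last being a rough correlate of spectral content. The function name and feature set are illustrative assumptions.

```python
import math

def speech_features(samples, sr):
    """Toy global features of the kind a fatigue monitor might track:
    duration (s), RMS intensity, and zero-crossing rate (a coarse
    correlate of spectral content). Illustrative only."""
    n = len(samples)
    duration = n / sr
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zc = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)  # sign flips
    zcr = zc / duration  # crossings per second
    return {"duration_s": duration, "rms": rms, "zcr_hz": zcr}

# Sanity check on a synthetic signal: a 200 Hz sinusoid at 8 kHz
# should show roughly 400 zero crossings per second.
sr = 8000
sig = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]  # 1 second
feats = speech_features(sig, sr)
```

A real system would of course track such features over time and per utterance, and test which of them shift reliably with fatigue rather than with speaker or noise conditions.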

  4. FREEDOM OF SPEECH IN INDONESIAN PRESS: INTERNATIONAL HUMAN RIGHTS PERSPECTIVE

    OpenAIRE

    Clara Staples

    2016-01-01

    This paper will firstly examine the International framework of human rights law and its guidelines for safeguarding the right to freedom of speech in the press. Secondly, it will describe the constitutional and other legal rights protecting freedom of speech in Indonesia and assess their compatibility with the right to freedom of speech under the International human rights law framework. Thirdly it will consider the impact of Indonesia's constitutional law and criminal and civil law, includin...

  5. Speech freedom and press freedom in human security in Rwanda

    OpenAIRE

    Niyonzima, Oswald

    2014-01-01

    Master's thesis, Màster Universitari Internacional en Estudis de Pau, Conflictes i Desenvolupament (code SAA074, academic year 2013/2014). Freedom of speech and press freedom are key foundations of all human rights, as stipulated in the human rights declaration of 1948. Denying people the right to free speech is keeping them away from what is happening in this world, thus hindering them from participating in decision making. While speech freedom and press freedom are key tools to measure if a country ...

  6. Reflections on mirror neurons and speech perception

    Science.gov (United States)

    Lotto, Andrew J.; Hickok, Gregory S.; Holt, Lori L.

    2010-01-01

    The discovery of mirror neurons, a class of neurons that respond when a monkey performs an action and also when the monkey observes others producing the same action, has promoted a renaissance for the Motor Theory (MT) of speech perception. This is because mirror neurons seem to accomplish the same kind of one-to-one mapping between perception and action that MT theorizes to be the basis of human speech communication. However, this seeming correspondence is superficial, and there are theoretical and empirical reasons to temper enthusiasm about the explanatory role mirror neurons might have for speech perception. In fact, rather than providing support for MT, mirror neurons are actually inconsistent with the central tenets of MT. PMID:19223222

  7. Human spinal motor control

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo

    2016-01-01

    Human studies in the past three decades have provided us with an emerging understanding of how cortical and spinal networks collaborate to ensure the vast repertoire of human behaviors. We differ from other animals in having direct cortical connections to spinal motoneurons, which bypass spinal...... the central motor command by opening or closing sensory feedback pathways. In the future, human studies of spinal motor control, in close collaboration with animal studies on the molecular biology of the spinal cord, will continue to document the neural basis for human behavior. Expected final online...

  8. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and to compare speech characteristics and symptoms to those of earlier survey findings in mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They rated their own assessment skills and estimated the clinical occurrence of CAS. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  9. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  10. Physiological Indices of Bilingualism: Oral–Motor Coordination and Speech Rate in Bengali–English Speakers

    Science.gov (United States)

    Chakraborty, Rahul; Goffman, Lisa; Smith, Anne

    2009-01-01

    Purpose To examine how age of immersion and proficiency in a 2nd language influence speech movement variability and speaking rate in both a 1st language and a 2nd language. Method A group of 21 Bengali–English bilingual speakers participated. Lip and jaw movements were recorded. For all 21 speakers, lip movement variability was assessed based on productions of Bengali (L1; 1st language) and English (L2; 2nd language) sentences. For analyses related to the influence of L2 proficiency on speech production processes, participants were sorted into low- (n = 7) and high-proficiency (n = 7) groups. Lip movement variability and speech rate were evaluated for both of these groups across L1 and L2 sentences. Results Surprisingly, adult bilingual speakers produced equally consistent speech movement patterns in their production of L1 and L2. When groups were sorted according to proficiency, highly proficient speakers were marginally more variable in their L1. In addition, there were some phoneme-specific effects, most markedly that segments not shared by both languages were treated differently in production. Consistent with previous studies, movement durations were longer for less proficient speakers in both L1 and L2. Interpretation In contrast to those of child learners, the speech motor systems of adult L2 speakers show a high degree of consistency. Such lack of variability presumably contributes to protracted difficulties with acquiring nativelike pronunciation in L2. The proficiency results suggest bidirectional interactions across L1 and L2, which is consistent with hypotheses regarding interference and the sharing of phonological space. A slower speech rate in less proficient speakers implies that there are increased task demands on speech production processes. PMID:18367680
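Movement variability in studies of this kind is commonly summarized with a spatiotemporal index (STI) in the tradition of Smith and colleagues: each repetition of a sentence is time-normalized and amplitude-normalized, and the standard deviations across repetitions are summed over the normalized time base. A minimal sketch follows, assuming trajectories are given as plain Python lists of displacement samples; the function names and the 50-point normalization are illustrative choices, not the study's exact procedure.

```python
def _resample(traj, n=50):
    """Linearly interpolate a trajectory onto n evenly spaced points."""
    m = len(traj)
    out = []
    for i in range(n):
        pos = i * (m - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(traj[lo] * (1 - frac) + traj[hi] * frac)
    return out

def _zscore(traj):
    """Amplitude-normalize a trajectory to mean 0, SD 1."""
    mu = sum(traj) / len(traj)
    sd = (sum((x - mu) ** 2 for x in traj) / len(traj)) ** 0.5
    return [(x - mu) / sd for x in traj]

def spatiotemporal_index(repetitions, n=50):
    """STI: time- and amplitude-normalize each repetition, then sum the
    across-repetition standard deviations at the n normalized time points."""
    norm = [_zscore(_resample(r, n)) for r in repetitions]
    k = len(norm)
    sti = 0.0
    for i in range(n):
        vals = [norm[j][i] for j in range(k)]
        mu = sum(vals) / k
        sti += (sum((v - mu) ** 2 for v in vals) / k) ** 0.5
    return sti
```

Because the normalization removes differences in overall duration and amplitude, a low STI indicates that repetitions share the same underlying movement pattern, which is the sense in which adult L2 speakers above are described as "consistent".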

  11. Convergent transcriptional specializations in the brains of humans and song-learning birds

    DEFF Research Database (Denmark)

    Pfenning, Andreas R.; Hara, Erina; Whitney, Osceola

    2014-01-01

    Song-learning birds and humans share independently evolved similarities in brain pathways for vocal learning that are essential for song and speech and are not found in most other species. Comparisons of brain transcriptomes of song-learning birds and humans relative to vocal nonlearners identified...... convergent gene expression specializations in specific song and speech brain regions of avian vocal learners and humans. The strongest shared profiles relate bird motor and striatal song-learning nuclei, respectively, with human laryngeal motor cortex and parts of the striatum that control speech production...... and learning. Most of the associated genes function in motor control and brain connectivity. Thus, convergent behavior and neural connectivity for a complex trait are associated with convergent specialized expression of multiple genes....

  12. Partially overlapping sensorimotor networks underlie speech praxis and verbal short-term memory: evidence from apraxia of speech following acute stroke.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Chen, Rong; Herskovits, Edward H; Townsley, Sarah; Hillis, Argye E

    2014-01-01

    We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech, AOS) and vSTM in the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis; premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex); while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

  13. A Network Model of Observation and Imitation of Speech

    Science.gov (United States)

    Mashal, Nira; Solodkin, Ana; Dick, Anthony Steven; Chen, E. Elinor; Small, Steven L.

    2012-01-01

    Much evidence has now accumulated demonstrating and quantifying the extent of shared regional brain activation for observation and execution of speech. However, the nature of the actual networks that implement these functions, i.e., both the brain regions and the connections among them, and the similarities and differences across these networks has not been elucidated. The current study aims to characterize formally a network for observation and imitation of syllables in the healthy adult brain and to compare their structure and effective connectivity. Eleven healthy participants observed or imitated audiovisual syllables spoken by a human actor. We constructed four structural equation models to characterize the networks for observation and imitation in each of the two hemispheres. Our results show that the network models for observation and imitation comprise the same essential structure but differ in important ways from each other (in both hemispheres) based on connectivity. In particular, our results show that the connections from posterior superior temporal gyrus and sulcus to ventral premotor, ventral premotor to dorsal premotor, and dorsal premotor to primary motor cortex in the left hemisphere are stronger during imitation than during observation. The first two connections are implicated in a putative dorsal stream of speech perception, thought to involve translating auditory speech signals into motor representations. Thus, the current results suggest that flow of information during imitation, starting at the posterior superior temporal cortex and ending in the motor cortex, enhances input to the motor cortex in the service of speech execution. PMID:22470360

  14. FREEDOM OF SPEECH IN INDONESIAN PRESS: INTERNATIONAL HUMAN RIGHTS PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Clara Staples

    2016-06-01

    Full Text Available This paper will firstly examine the international framework of human rights law and its guidelines for safeguarding the right to freedom of speech in the press. Secondly, it will describe the constitutional and other legal rights protecting freedom of speech in Indonesia and assess their compatibility with the right to freedom of speech under the international human rights law framework. Thirdly it will consider the impact of Indonesia’s constitutional law and criminal and civil law, including sedition and defamation laws, and finally media ownership, on the interpretation and scope of the right to freedom of speech in the press. Consideration of these laws will be integrated with a discussion of judicial processes. This discussion will be used to determine how and in what circumstances the constitutional right to freedom of speech in the press may be facilitated or enabled, or on the other hand, limited, overridden or curtailed in Indonesia. Conclusions will then be drawn regarding the strengths and weaknesses of Indonesian laws in safeguarding the right to freedom of speech in the press and the democratic implications from an international human rights perspective. This inquiry will be restricted to Indonesian laws in existence during the post-New Order period of 1998 to the present, and to the information and analysis provided by English-language sources.

  15. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  16. Motor and sensory alalia: diagnostic difficulties

    Directory of Open Access Journals (Sweden)

    M. Yu. Bobylova

    2017-01-01

    Full Text Available Alalia is a speech disorder that develops due to organic brain damage during the first three years of life in children with normal hearing and intelligence. Systemic speech underdevelopment in alalia is characterized by violations in the phonetic, phonemic, lexical, and grammatical structure. Patients with alalia can also have non-speech impairments, including motor (impaired movement and coordination), sensory (impaired sensitivity and perception), and psychopathological disorders. There are three types of alalia: motor, sensory, and mixed. Children with motor alalia have expressive language disorders, impaired speech praxis, poor speech fluency, impaired articulation, and other focal neurological symptoms; however, they understand speech directed at them. Patients with motor alalia are often left-handed. Regional slowing and epileptiform activity are often detected on their electroencephalogram. Children with sensory alalia are characterized by poor speech understanding (despite normal hearing), resulting in secondary underdevelopment of their own speech. These patients have problems with the analysis of sounds, including speech sounds (impaired speech gnosis), which prevents the development of an association between the sound image and the object. Therefore, the child hears but does not understand the speech directed at him/her (auditory agnosia). Differential diagnosis of alalia is challenging and may require several months of observation. It also implies the exclusion of hearing loss and mental disorders.

  17. Speed-Accuracy Tradeoffs in Speech Production

    Science.gov (United States)

    2017-06-01

    … capacity of discrete motor responses under different cognitive sets. Journal of Experimental Psychology, 71(4), 475. … space defined by vocal tract constriction degree and location, as in Articulatory Phonology (Browman & Goldstein, 1992). These high-level spaces are … relationship between speech gestures varies as a function of their positions within the syllable (Browman & Goldstein, 1995; Krakow, 1999; Byrd et al. …

  18. Facilitation of speech repetition accuracy by theta burst stimulation of the left posterior inferior frontal gyrus.

    Science.gov (United States)

    Restle, Julia; Murakami, Takenobu; Ziemann, Ulf

    2012-07-01

    The posterior part of the inferior frontal gyrus (pIFG) in the left hemisphere is thought to form part of the putative human mirror neuron system and is assigned a key role in mapping sensory perception onto motor action. Accordingly, the pIFG is involved in motor imitation of the observed actions of others but it is not known to what extent speech repetition of auditory-presented sentences is also a function of the pIFG. Here we applied fMRI-guided facilitating intermittent theta burst transcranial magnetic stimulation (iTBS), or depressant continuous TBS (cTBS), or intermediate TBS (imTBS) over the left pIFG of healthy subjects and compared speech repetition accuracy of foreign Japanese sentences before and after TBS. We found that repetition accuracy improved after iTBS and, to a lesser extent, after imTBS, but remained unchanged after cTBS. In a control experiment, iTBS was applied over the left middle occipital gyrus (MOG), a region not involved in sensorimotor processing of auditory-presented speech. Repetition accuracy remained unchanged after iTBS of MOG. We argue that the stimulation type and stimulation site specific facilitating effect of iTBS over left pIFG on speech repetition accuracy indicates a causal role of the human left-hemispheric pIFG in the translation of phonological perception to motor articulatory output for repetition of speech. This effect may prove useful in rehabilitation strategies that combine repetitive speech training with iTBS of the left pIFG in speech disorders, such as aphasia after cerebral stroke. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  20. The effect of deep brain stimulation on the speech motor system.

    Science.gov (United States)

    Mücke, Doris; Becker, Johannes; Barbe, Michael T; Meister, Ingo; Liebhart, Lena; Roettger, Timo B; Dembek, Till; Timmermann, Lars; Grice, Martine

    2014-08-01

    Chronic deep brain stimulation of the nucleus ventralis intermedius is an effective treatment for individuals with medication-resistant essential tremor. However, these individuals report that stimulation has a deleterious effect on their speech. The present study investigates one important factor leading to these effects: the coordination of oral and glottal articulation. Sixteen native-speaking German adults with essential tremor, between 26 and 86 years old, with and without chronic deep brain stimulation of the nucleus ventralis intermedius and 12 healthy, age-matched subjects were recorded performing a fast syllable repetition task (/papapa/, /tatata/, /kakaka/). Syllable duration and voicing-to-syllable ratio as well as parameters related directly to consonant production, voicing during constriction, and frication during constriction were measured. Voicing during constriction was greater in subjects with essential tremor than in controls, indicating a perseveration of voicing into the voiceless consonant. Stimulation led to fewer voiceless intervals (voicing-to-syllable ratio), indicating a reduced degree of glottal abduction during the entire syllable cycle. Stimulation also induced incomplete oral closures (frication during constriction), indicating imprecise oral articulation. The detrimental effect of stimulation on the speech motor system can be quantified using acoustic measures at the subsyllabic level.

  1. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  2. Partially Overlapping Sensorimotor Networks Underlie Speech Praxis and Verbal Short-Term Memory: Evidence from Apraxia of Speech Following Acute Stroke

    Directory of Open Access Journals (Sweden)

    Gregory eHickok

    2014-08-01

    Full Text Available We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech; AOS) and vSTM in the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis; premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex), while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

  3. A Lag in Speech Motor Coordination during Sentence Production Is Associated with Stuttering Persistence in Young Children

    Science.gov (United States)

    Usler, Evan; Smith, Anne; Weber, Christine

    2017-01-01

    Purpose: The purpose of this study was to determine if indices of speech motor coordination during the production of sentences varying in sentence length and syntactic complexity were associated with stuttering persistence versus recovery in 5- to 7-year-old children. Methods: We compared children with persistent stuttering (CWS-Per) with children…

  4. Motor skills, haptic perception and social abilities in children with mild speech disorders.

    Science.gov (United States)

    Müürsepp, Iti; Aibast, Herje; Gapeyeva, Helena; Pääsuke, Mati

    2012-02-01

The aim of the study was to evaluate motor skills, haptic object recognition and social interaction in 5-year-old children with mild specific expressive language impairment (expressive-SLI) and articulation disorder (AD) in comparison with age- and gender-matched healthy children. Twenty-nine children (23 boys and 6 girls) with expressive-SLI, 27 children (20 boys and 7 girls) with AD and 30 children (23 boys and 7 girls) with typically developing language as controls participated in the study. The children were examined for manual dexterity, ball skills, and static and dynamic balance with the M-ABC test, for haptic object recognition, and for social interaction by a questionnaire completed by teachers. Children with mild expressive-SLI demonstrated significantly poorer results than controls in all subtests of motor skills (p < 0.05) and in social interaction (p < 0.05). No significant differences (p > 0.05) were found in the measured parameters between children with AD and controls. Children with expressive-SLI also performed considerably poorer than the AD group in the balance subtest (p < 0.05). Thus, in children with expressive-SLI, motor skills, haptic perception and social interaction are considerably more affected than in children with AD. Although motor difficulty in speech production is prevalent in AD, it is localised and does not involve children's general motor skills, haptic perception or social interaction. Copyright © 2011 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  5. Non-invasive mapping of bilateral motor speech areas using navigated transcranial magnetic stimulation and functional magnetic resonance imaging.

    Science.gov (United States)

    Könönen, Mervi; Tamsi, Niko; Säisänen, Laura; Kemppainen, Samuli; Määttä, Sara; Julkunen, Petro; Jutila, Leena; Äikiä, Marja; Kälviäinen, Reetta; Niskanen, Eini; Vanninen, Ritva; Karjalainen, Pasi; Mervaala, Esa

    2015-06-15

Navigated transcranial magnetic stimulation (nTMS) is a modern, precise method for activating and studying cortical functions noninvasively. We hypothesized that a combination of nTMS and functional magnetic resonance imaging (fMRI) could clarify the localization of functional areas involved in motor control and production of speech. Navigated repetitive TMS (rTMS) with short bursts was used to map speech areas on both hemispheres by inducing speech disruption during number recitation tasks in healthy volunteers. Two experienced video reviewers, blinded to the stimulated area, graded each trial offline according to possible speech disruption. The locations of speech-disrupting nTMS trials were overlaid with fMRI activations from a word generation task. Speech disruptions were produced on both hemispheres by nTMS, though there were more disruptive stimulation sites on the left hemisphere. The grade of disruption varied from subjective sensation to mild, objectively recognizable disruption up to total speech arrest. The distribution of locations in which speech disruptions could be elicited varied among individuals. On the left hemisphere, the locations of reviewer-verified disruptive rTMS bursts followed the areas of fMRI activation; a similar pattern was not observed on the right hemisphere. The reviewer-verified speech disruptions induced by nTMS provided clinically relevant information, and fMRI might further explain the function of the cortical area. nTMS and fMRI complement each other, and their combination should be advocated when assessing individual localization of the speech network. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.
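The multiple-regression result above (64% of speech imitation variance explained by three predictors) can be illustrated with a minimal sketch. The data below are synthetic stand-ins, not the study's scores; the sketch only shows how the proportion of explained variance (R²) is obtained from an ordinary least-squares fit of several predictors:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n = 41  # matches the study's sample size; the data themselves are synthetic

# Hypothetical predictors: working memory, educational background, singing performance
X = rng.normal(size=(n, 3))
# Synthetic outcome constructed so the predictors explain much of the variance
y = X @ np.array([0.6, 0.3, 0.5]) + rng.normal(scale=0.5, size=n)

# Add an intercept column and fit ordinary least squares
Xd = np.column_stack([np.ones(n), X])
beta, *_ = lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.2f}")  # proportion of variance explained, cf. the 64% reported
```

With real predictor scores in `X` and imitation scores in `y`, the same computation would yield a figure comparable to the study's 64%.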

  7. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus eChristiner

    2013-11-01

In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory, with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood, both perceptually and productively. (3) The ability to sing improves the memory span of the auditory short-term memory.

  8. Aerosol emission during human speech

    Science.gov (United States)

    Asadi, Sima; Wexler, Anthony S.; Cappa, Christopher D.; Bouvier, Nicole M.; Barreda-Castanon, Santiago; Ristenpart, William D.

    2017-11-01

We show that the rate of aerosol particle emission during healthy human speech is strongly correlated with the loudness (amplitude) of vocalization. Emission rates range from approximately 1 to 50 particles per second for quiet to loud amplitudes, regardless of language spoken (English, Spanish, Mandarin, or Arabic). Intriguingly, a small fraction of individuals behave as "super emitters," consistently emitting an order of magnitude more aerosol particles than their peers. We interpret the results in terms of the egressive flowrate during vocalization, which is known to vary significantly for different types of vocalization and for different individuals. The results suggest that individual speech patterns could affect the probability of airborne disease transmission. The results also provide a possible explanation for the existence of "super spreaders" who transmit pathogens much more readily than average and who play a key role in the spread of epidemics.

  9. Mechanisms underlying speech sound discrimination and categorization in humans and zebra finches

    NARCIS (Netherlands)

    Burgering, Merel A.; ten Cate, Carel; Vroomen, Jean

    Speech sound categorization in birds seems in many ways comparable to that by humans, but it is unclear what mechanisms underlie such categorization. To examine this, we trained zebra finches and humans to discriminate two pairs of edited speech sounds that varied either along one dimension (vowel

  10. Motor contagion during human-human and human-robot interaction.

    Directory of Open Access Journals (Sweden)

    Ambra Bisio

Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  11. Motor contagion during human-human and human-robot interaction.

    Science.gov (United States)

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  12. High gamma oscillations in medial temporal lobe during overt production of speech and gestures.

    Science.gov (United States)

    Marstaller, Lars; Burianová, Hana; Sowman, Paul F

    2014-01-01

    The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15-25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50-90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.
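The beta-band desynchronization reported above can be sketched numerically. The signal below is synthetic (a 20 Hz rhythm that weakens at an assumed movement onset), and the sampling rate and band limits are illustrative assumptions; the sketch shows how a drop in 15-25 Hz spectral power is quantified:

```python
import numpy as np

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic signal: a strong 20 Hz (beta) rhythm that attenuates in the
# second half, mimicking movement-related desynchronization
amp = np.where(t < 1.0, 1.0, 0.3)
sig = amp * np.sin(2 * np.pi * 20 * t) + 0.2 * rng.normal(size=t.size)

def beta_power(x, fs, lo=15, hi=25):
    """Mean spectral power in the beta band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

baseline = beta_power(sig[t < 1.0], fs)
active = beta_power(sig[t >= 1.0], fs)
erd = 100 * (active - baseline) / baseline  # event-related desynchronization, %
print(f"beta power change: {erd:.0f}%")
```

A negative `erd` value marks the power decrease that, in the study, was localized to motor and somatosensory regions before speech onset.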

  13. High gamma oscillations in medial temporal lobe during overt production of speech and gestures.

    Directory of Open Access Journals (Sweden)

    Lars Marstaller

The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15-25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50-90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.

  14. Associations among measures of sequential processing in motor and linguistics tasks in adults with and without a family history of childhood apraxia of speech: a replication study.

    Science.gov (United States)

    Button, Le; Peter, Beate; Stoel-Gammon, Carol; Raskind, Wendy H

    2013-03-01

    The purpose of this study was to address the hypothesis that childhood apraxia of speech (CAS) is influenced by an underlying deficit in sequential processing that is also expressed in other modalities. In a sample of 21 adults from five multigenerational families, 11 with histories of various familial speech sound disorders, 3 biologically related adults from a family with familial CAS showed motor sequencing deficits in an alternating motor speech task. Compared with the other adults, these three participants showed deficits in tasks requiring high loads of sequential processing, including nonword imitation, nonword reading and spelling. Qualitative error analyses in real word and nonword imitations revealed group differences in phoneme sequencing errors. Motor sequencing ability was correlated with phoneme sequencing errors during real word and nonword imitation, reading and spelling. Correlations were characterized by extremely high scores in one family and extremely low scores in another. Results are consistent with a central deficit in sequential processing in CAS of familial origin.

  15. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  16. Progressive apraxia of speech as a window into the study of speech planning processes.

    Science.gov (United States)

    Laganaro, Marina; Croisier, Michèle; Bagou, Odile; Assal, Frédéric

    2012-09-01

We present a 3-year follow-up study of a patient with progressive apraxia of speech (PAoS), aimed at investigating whether the theoretical organization of phonetic encoding is reflected in the progressive disruption of speech. As decreased speech rate was the most striking pattern of disruption during the first 2 years, durational analyses were carried out longitudinally on syllables excised from spontaneous, repetition and reading speech samples. The crucial result of the present study is the demonstration of an effect of syllable frequency on duration: the progressive disruption of articulation rate did not affect all syllables in the same way, but followed a gradient that was a function of the frequency of use of syllable-sized motor programs. The combination of data from this case of PAoS with previous psycholinguistic and neurolinguistic data points to a frequency organization of syllable-sized speech-motor plans. In this study we also illustrate how studying PAoS can be exploited in theoretical and clinical investigations of phonetic encoding, as it represents a unique opportunity to investigate speech while it progressively deteriorates. Copyright © 2011 Elsevier Srl. All rights reserved.

  17. Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes.

    Science.gov (United States)

    Meyer, Bernd T; Brand, Thomas; Kollmeier, Birger

    2011-01-01

The aim of this study is to quantify the gap between the recognition performance of human listeners and an automatic speech recognition (ASR) system, with special focus on intrinsic variations of speech, such as speaking rate and effort, altered pitch, and the presence of dialect and accent. Second, we investigated whether the most common ASR features contain all the information required to recognize speech in noisy environments, by using resynthesized ASR features in listening experiments. For the phoneme recognition task, the ASR system achieved the human performance level only when the signal-to-noise ratio (SNR) was increased by 15 dB, which is an estimate for the human-machine gap in terms of the SNR. The major part of this gap is attributed to the feature extraction stage, since human listeners achieve comparable recognition scores when the SNR difference between unaltered and resynthesized utterances is 10 dB. Intrinsic variabilities result in strong increases of error rates, both in human speech recognition (HSR) and ASR (with a relative increase of up to 120%). An analysis of phoneme duration and recognition rates indicates that human listeners are better able to identify temporal cues than the machine at low SNRs, which suggests incorporating information about the temporal dynamics of speech into ASR systems.
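The SNR manipulation central to this human-machine comparison can be sketched as follows. The `mix_at_snr` helper is a hypothetical name (not from the study); it scales a noise signal so the speech-to-noise power ratio hits a target value in dB, which is how one would, for example, raise the SNR by 15 dB for the ASR condition:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then mix."""
    ps = np.mean(speech ** 2)   # speech power
    pn = np.mean(noise ** 2)    # noise power before scaling
    scale = np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
# A 200 Hz tone stands in for a speech signal at 16 kHz sampling
speech = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)
noise = rng.normal(size=16000)

mixed = mix_at_snr(speech, noise, snr_db=5)
# Verify the realized SNR of the mixture
realized = 10 * np.log10(np.mean(speech ** 2) / np.mean((mixed - speech) ** 2))
print(f"realized SNR: {realized:.1f} dB")
```

By construction the realized SNR matches the requested 5 dB; sweeping `snr_db` produces the test conditions under which HSR and ASR error rates are compared.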

  18. The Hierarchical Cortical Organization of Human Speech Processing.

    Science.gov (United States)

    de Heer, Wendy A; Huth, Alexander G; Griffiths, Thomas L; Gallant, Jack L; Theunissen, Frédéric E

    2017-07-05

    Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to
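The variance partitioning analysis described above can be sketched with ordinary least squares on synthetic data. The feature spaces `A` and `B` and the construction of `y` are illustrative assumptions, not the study's fMRI features; the point is how unique and shared explained variance fall out of the R² values of the individual and joint models:

```python
import numpy as np
from numpy.linalg import lstsq

def r2(X, y):
    """Variance in y explained by a linear model on feature space X."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = lstsq(Xd, y, rcond=None)
    return 1 - np.var(y - Xd @ beta) / np.var(y)

rng = np.random.default_rng(0)
n = 500
shared = rng.normal(size=(n, 1))  # component represented in both feature spaces

# Each space holds a noisy copy of the shared component plus its own feature
A = np.column_stack([shared + 0.1 * rng.normal(size=(n, 1)), rng.normal(size=(n, 1))])
B = np.column_stack([shared + 0.1 * rng.normal(size=(n, 1)), rng.normal(size=(n, 1))])
y = shared[:, 0] + 0.5 * A[:, 1] + rng.normal(scale=0.5, size=n)

r2_a, r2_b = r2(A, y), r2(B, y)
r2_ab = r2(np.column_stack([A, B]), y)
unique_a = r2_ab - r2_b           # variance only A explains
unique_b = r2_ab - r2_a           # variance only B explains
shared_ab = r2_a + r2_b - r2_ab   # variance either space can explain
print(unique_a, unique_b, shared_ab)
```

With more than two feature spaces (spectral, articulatory, semantic) the same inclusion-exclusion logic applies, just with more model combinations to fit.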

  19. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems.

    Science.gov (United States)

    Yildiz, Izzet B; von Kriegstein, Katharina; Kiebel, Stefan J

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents-an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.

  20. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems.

    Directory of Open Access Journals (Sweden)

    Izzet B Yildiz

Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents-an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.

  1. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during…
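The neural-pattern similarity analysis mentioned above can be sketched as a correlation between multivoxel activity patterns. The patterns below are synthetic stand-ins for voxel responses, and `pattern_similarity` is a hypothetical name for a plain Pearson correlation; the sketch shows why conditions sharing neural structure score high while unrelated conditions score near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 100

# Hypothetical voxel patterns: two distorted-feedback conditions share
# structure, while the unaltered-feedback pattern is unrelated
base = rng.normal(size=n_vox)
distorted_1 = base + 0.5 * rng.normal(size=n_vox)
distorted_2 = base + 0.5 * rng.normal(size=n_vox)
unaltered = rng.normal(size=n_vox)

def pattern_similarity(a, b):
    """Pearson correlation between two multivoxel activity patterns."""
    return float(np.corrcoef(a, b)[0, 1])

sim_dd = pattern_similarity(distorted_1, distorted_2)
sim_du = pattern_similarity(distorted_1, unaltered)
print(f"distorted vs distorted: {sim_dd:.2f}, distorted vs unaltered: {sim_du:.2f}")
```

Computed searchlight-style over the whole brain, maps of such similarity values are what reveal the functionally differentiated networks the record describes.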

  2. Primate vocal communication: a useful tool for understanding human speech and language evolution?

    Science.gov (United States)

    Fedurek, Pawel; Slocombe, Katie E

    2011-04-01

    Language is a uniquely human trait, and questions of how and why it evolved have been intriguing scientists for years. Nonhuman primates (primates) are our closest living relatives, and their behavior can be used to estimate the capacities of our extinct ancestors. As humans and many primate species rely on vocalizations as their primary mode of communication, the vocal behavior of primates has been an obvious target for studies investigating the evolutionary roots of human speech and language. By studying the similarities and differences between human and primate vocalizations, comparative research has the potential to clarify the evolutionary processes that shaped human speech and language. This review examines some of the seminal and recent studies that contribute to our knowledge regarding the link between primate calls and human language and speech. We focus on three main aspects of primate vocal behavior: functional reference, call combinations, and vocal learning. Studies in these areas indicate that despite important differences, primate vocal communication exhibits some key features characterizing human language. They also indicate, however, that some critical aspects of speech, such as vocal plasticity, are not shared with our primate cousins. We conclude that comparative research on primate vocal behavior is a very promising tool for deepening our understanding of the evolution of human speech and language, but much is still to be done as many aspects of monkey and ape vocalizations remain largely unexplored.

  3. Human speech articulator measurements using low power, 2GHz Homodyne sensors

    International Nuclear Information System (INIS)

    Barnes, T; Burnett, G C; Holzrichter, J F

    1999-01-01

    Very low power, short-range microwave "radar-like" sensors can measure the motions and vibrations of internal human speech articulators as speech is produced. In these animate (and also in inanimate) acoustic systems, microwave sensors can measure vibration information associated with excitation sources and other interfaces. These data, together with the corresponding acoustic data, enable the calculation of system transfer functions. This information appears to be useful for a surprisingly wide range of applications, such as speech coding and recognition, speaker or object identification, speech and musical instrument synthesis, noise cancellation, and other applications.
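    The transfer-function calculation described above can be sketched as a cross-spectral (H1) estimate: divide the averaged cross-spectrum of excitation and output by the averaged auto-spectrum of the excitation. The signals and the four-tap "vocal tract" filter below are synthetic stand-ins for the sensor and microphone data, not the authors' measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs = 8000
    n = fs  # one second of samples

    # Synthetic stand-ins: 'excitation' plays the role of the sensor-measured
    # glottal signal, 'speech' the microphone signal after a vocal-tract filter.
    excitation = rng.normal(size=n)
    tract = np.array([0.5, 1.0, 0.6, 0.2])  # toy vocal-tract impulse response
    speech = np.convolve(excitation, tract)[:n]

    # H1 estimator: cross-spectrum / auto-spectrum, averaged over short
    # frames to reduce variance.
    frame = 256
    Sxy = np.zeros(frame, dtype=complex)
    Sxx = np.zeros(frame)
    for k in range(n // frame):
        x = np.fft.fft(excitation[k * frame:(k + 1) * frame])
        y = np.fft.fft(speech[k * frame:(k + 1) * frame])
        Sxy += np.conj(x) * y
        Sxx += np.abs(x) ** 2
    H = Sxy / Sxx

    H_true = np.fft.fft(tract, frame)  # ground truth for the toy filter
    print(np.max(np.abs(H - H_true)) < 0.5)
    ```

    Averaging over frames is what makes the estimate usable: a single-frame ratio FFT(y)/FFT(x) blows up wherever the excitation spectrum happens to be near zero.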

  4. Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language

    Science.gov (United States)

    Narayanan, Shrikanth; Georgiou, Panayiotis G.

    2013-01-01

    The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well-being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion. PMID:24039277

  5. Comparison of Forced-Alignment Speech Recognition and Humans for Generating Reference VAD

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua; Paola Bissiri, Maria

    2015-01-01

    The present paper aims to answer the question whether forced-alignment speech recognition can be used as an alternative to humans in generating reference Voice Activity Detection (VAD) transcriptions. An investigation of the level of agreement between automatic/manual VAD transcriptions and the reference ones produced by a human expert was carried out. Thereafter, statistical analysis was employed on the automatically produced and the collected manual transcriptions. Experimental results confirmed that forced-alignment speech recognition can provide accurate and consistent VAD labels.
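    Agreement between automatic and manual frame-level VAD labels is typically quantified with a chance-corrected statistic such as Cohen's kappa. A minimal sketch on hypothetical frame labels (invented for illustration, not the paper's data):

    ```python
    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two binary label sequences."""
        a = np.asarray(labels_a)
        b = np.asarray(labels_b)
        p_observed = np.mean(a == b)
        # Chance agreement from the marginal label frequencies.
        p_chance = (np.mean(a == 1) * np.mean(b == 1)
                    + np.mean(a == 0) * np.mean(b == 0))
        return (p_observed - p_chance) / (1.0 - p_chance)

    # Frame-level speech/non-speech labels (1 = speech), e.g. at 10 ms hops:
    auto   = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0]  # forced-alignment output
    manual = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]  # human reference
    print(round(cohens_kappa(auto, manual), 3))  # prints 0.6
    ```

    Here observed agreement is 0.8 and chance agreement 0.5, giving kappa = (0.8 − 0.5) / (1 − 0.5) = 0.6.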

  6. Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots

    Directory of Open Access Journals (Sweden)

    Harvey Martin Sussman

    2015-12-01

    Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically-organized, neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

  7. Detection of cardiac activity changes from human speech

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav; Mikulec, Martin; Mehic, Miralem

    2015-05-01

    The impact of changes in blood pressure and pulse on human speech is examined in this article. Increased pulse and systolic and diastolic pressure are symptoms of increased physical activity. There are many methods of measuring and indicating these parameters, but the measurements must be carried out with devices that are not used in everyday life. In most cases, blood pressure and pulse are measured only after health problems or other adverse feelings have appeared. Nowadays, research teams are trying to integrate modern measurement methods into ordinary human activities. The main objective of this proposal is to reduce the delay between an adverse change in blood pressure and its detection, before the warning signs and ill feelings mentioned above appear. Speaking is a common and frequent human activity, and it is known that the function of the vocal tract can be affected by changes in heart activity; speech can therefore be a useful signal for detecting physiological changes. A method for detecting human physiological changes by speech processing and artificial neural network classification is described in this article. The pulse and blood pressure changes were induced by physical exercise in this experiment. The set of measured subjects consisted of ten healthy volunteers of both sexes; none was a professional athlete. The experiment was divided into phases before, during, and after physical training. Pulse and systolic and diastolic pressure were measured, and voice activity was recorded, after each phase. The results of this experiment describe a method for detecting increased cardiac activity from human speech using an artificial neural network.
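    A classifier of the kind described, acoustic features in and physiological state out, can be sketched with a tiny feed-forward network. The features (hypothetical F0 mean, jitter, shimmer values) and the data are synthetic stand-ins, not the study's recordings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 'acoustic features' for recordings made at rest (label 0)
    # and after physical load (label 1).
    n = 200
    X = np.vstack([rng.normal([120.0, 0.5, 3.0], 0.5, size=(n, 3)),   # rest
                   rng.normal([135.0, 0.9, 3.6], 0.5, size=(n, 3))])  # load
    y = np.r_[np.zeros(n), np.ones(n)]
    X = (X - X.mean(0)) / X.std(0)  # standardize features

    # One hidden layer, trained with plain gradient descent on logistic loss.
    W1 = rng.normal(0, 0.5, (3, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 0.5, 4);      b2 = 0.0

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    for _ in range(500):
        h, p = forward(X)
        g = (p - y) / len(y)                 # d(loss)/d(logit)
        gh = np.outer(g, W2) * (1 - h ** 2)  # back-propagate to hidden layer
        W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
        W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(0)

    accuracy = np.mean((forward(X)[1] > 0.5) == y)
    print(accuracy > 0.9)
    ```

    A real pipeline would extract features per utterance (e.g. with an acoustic front end), hold out test speakers, and tune the network size; the sketch only shows the shape of the classification step.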

  8. Human speech articulator measurements using low power, 2GHz Homodyne sensors

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, T; Burnett, G C; Holzrichter, J F

    1999-06-29

    Very low power, short-range microwave ''radar-like'' sensors can measure the motions and vibrations of internal human speech articulators as speech is produced. In these animate (and also in inanimate acoustic systems) microwave sensors can measure vibration information associated with excitation sources and other interfaces. These data, together with the corresponding acoustic data, enable the calculation of system transfer functions. This information appears to be useful for a surprisingly wide range of applications such as speech coding and recognition, speaker or object identification, speech and musical instrument synthesis, noise cancellation, and other applications.

  9. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  10. Longitudinal decline in speech production in Parkinson's disease spectrum disorders.

    Science.gov (United States)

    Ash, Sharon; Jester, Charles; York, Collin; Kofman, Olga L; Langey, Rachel; Halpin, Amy; Firn, Kim; Dominguez Perez, Sophia; Chahine, Lama; Spindler, Meredith; Dahodwala, Nabila; Irwin, David J; McMillan, Corey; Weintraub, Daniel; Grossman, Murray

    2017-08-01

    We examined narrative speech production longitudinally in non-demented (n=15) and mildly demented (n=8) patients with Parkinson's disease spectrum disorder (PDSD), and we related increasing impairment to structural brain changes in specific language and motor regions. Patients provided semi-structured speech samples, describing a standardized picture at two time points (mean ± SD interval = 38 ± 24 months). The recorded speech samples were analyzed for fluency, grammar, and informativeness. PDSD patients with dementia exhibited significant decline in their speech, unrelated to changes in overall cognitive or motor functioning. Regression analysis in a subset of patients with MRI scans (n=11) revealed that impaired language performance at Time 2 was associated with reduced gray matter (GM) volume at Time 1 in regions of interest important for language functioning but not with reduced GM volume in motor brain areas. These results dissociate language and motor systems and highlight the importance of non-motor brain regions for declining language in PDSD. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Human motor unit recordings: origins and insight into the integrated motor system.

    Science.gov (United States)

    Duchateau, Jacques; Enoka, Roger M

    2011-08-29

    Soon after Edward Liddell [1895-1981] and Charles Sherrington [1857-1952] introduced the concept of a motor unit in 1925 and the necessary technology was developed, the recording of single motor unit activity became feasible in humans. It was quickly discovered by Edgar Adrian [1889-1977] and Detlev Bronk [1897-1975] that the force exerted by muscle during voluntary contractions was the result of the concurrent recruitment of motor units and modulation of the rate at which they discharged action potentials. Subsequent studies found that the relation between discharge frequency and motor unit force was characterized by a sigmoidal function. Based on observations on experimental animals, Elwood Henneman [1915-1996] proposed a "size principle" in 1957, and most studies in humans focused on validating this concept during various types of muscle contractions. By the end of the 20th century, the experimental evidence indicated that the recruitment order of human motor units was determined primarily by motoneuron size and that the occasional changes in recruitment order were not an intended strategy of the central nervous system. Fundamental knowledge on the function of Sherrington's "common final pathway" was expanded with observations on motor unit rotation, minimal and maximal discharge rates, discharge variability, and self-sustained firing. Despite the great amount of work on characterizing motor unit activity during the first century of inquiry, however, many basic questions remain unanswered and these limit the extent to which findings on humans and experimental animals can be integrated and generalized to all movements. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Behavioural, computational, and neuroimaging studies of acquired apraxia of speech

    Directory of Open Access Journals (Sweden)

    Kirrie J Ballard

    2014-11-01

    A critical examination of speech motor control depends on an in-depth understanding of network connectivity associated with Brodmann areas 44 and 45 and surrounding cortices. Damage to these areas has been associated with two conditions: the speech motor programming disorder apraxia of speech (AOS) and the linguistic/grammatical disorder of Broca’s aphasia. Here we focus on AOS, which is most commonly associated with damage to posterior Broca's area and adjacent cortex. We provide an overview of our own studies into the nature of AOS, including behavioral and neuroimaging methods, to explore components of the speech motor network that are associated with normal and disordered speech motor programming in AOS. Behavioral, neuroimaging, and computational modeling studies indicate that AOS is associated with impairment in learning feedforward models and/or implementing feedback mechanisms, and with the functional contribution of BA6. While functional connectivity methods are not yet routinely applied to the study of AOS, we highlight the need to focus on the functional impact of localised lesions throughout the speech network, as well as larger-scale comparative studies to distinguish the unique behavioral and neurological signature of AOS. By coupling these methods with neural network models, we have a powerful set of tools to improve our understanding of the neural mechanisms that underlie AOS, and speech production generally.

  13. Understanding the nature of apraxia of speech: Theory, analysis, and treatment

    Directory of Open Access Journals (Sweden)

    Kirrie J. Ballard

    2010-08-01

    Researchers have interpreted the behaviours of individuals with acquired apraxia of speech (AOS) as impairment of linguistic phonological processing, motor control, or both. Acoustic, kinematic, and perceptual studies of speech in more recent years have led to significant advances in our understanding of the disorder and wide acceptance that it affects phonetic-motoric planning of speech. However, newly developed methods for studying nonspeech motor control are providing new insights, indicating that the motor control impairment of AOS extends beyond speech and is manifest in nonspeech movements of the oral structures. We present the most recent developments in theory and methods to examine and define the nature of AOS. Theories of the disorder are then related to existing treatment approaches, and the efficacy of these approaches is examined. Directions for the development of new treatments are posited. It is proposed that treatment programmes driven by a principled account of how the motor system learns to produce skilled actions will provide the most efficient and effective framework for treating motor-based speech disorders. In turn, well-controlled and theoretically motivated studies of treatment efficacy promise to stimulate further development of theoretical accounts and contribute to our understanding of AOS.

  14. Memory for speech and speech for memory.

    Science.gov (United States)

    Locke, J L; Kutz, K J

    1975-03-01

    Thirty kindergarteners, 15 who substituted /w/ for /r/ and 15 with correct articulation, received two perception tests and a memory test that included /w/ and /r/ in minimally contrastive syllables. Although both groups had nearly perfect perception of the experimenter's productions of /w/ and /r/, misarticulating subjects perceived their own tape-recorded w/r productions as /w/. In the memory task these same misarticulating subjects committed significantly more /w/-/r/ confusions in unspoken recall. The discussion considers why people subvocally rehearse; a developmental period in which children do not rehearse; ways subvocalization may aid recall, including motor and acoustic encoding; an echoic store that provides additional recall support if subjects rehearse vocally; and perception of self- and other-produced phonemes by misarticulating children, including its relevance to a motor theory of perception. Evidence is presented that speech for memory can be sufficiently impaired to cause memory disorder. Conceptions that restrict speech disorder to an impairment of communication are challenged.

  15. Speech recovery and language plasticity can be facilitated by Sensori-Motor Fusion training in chronic non-fluent aphasia. A case report study.

    Science.gov (United States)

    Haldin, Célise; Acher, Audrey; Kauffmann, Louise; Hueber, Thomas; Cousin, Emilie; Badin, Pierre; Perrier, Pascal; Fabre, Diandra; Perennou, Dominic; Detante, Olivier; Jaillard, Assia; Lœvenbruck, Hélène; Baciu, Monica

    2017-11-17

    The rehabilitation of speech disorders benefits from providing visual information which may improve speech motor plans in patients. We tested the proof of concept of a rehabilitation method (Sensori-Motor Fusion, SMF; Ultraspeech player) in one post-stroke patient presenting chronic non-fluent aphasia. SMF allows visualisation by the patient of target tongue and lips movements using high-speed ultrasound and video imaging. This can improve the patient's awareness of his/her own lingual and labial movements, which can, in turn, improve the representation of articulatory movements and increase the ability to coordinate and combine articulatory gestures. The auditory and oro-sensory feedback received by the patient as a result of his/her own pronunciation can be integrated with the target articulatory movements they watch. Thus, this method is founded on sensorimotor integration during speech. The SMF effect on this patient was assessed through qualitative comparison of language scores and quantitative analysis of acoustic parameters measured in a speech production task, before and after rehabilitation. We also investigated cerebral patterns of language reorganisation for rhyme detection and syllable repetition, to evaluate the influence of SMF on phonological-phonetic processes. Our results showed that SMF had a beneficial effect on this patient who qualitatively improved in naming, reading, word repetition and rhyme judgment tasks. Quantitative measurements of acoustic parameters indicate that the patient's production of vowels and syllables also improved. Compared with pre-SMF, the fMRI data in the post-SMF session revealed the activation of cerebral regions related to articulatory, auditory and somatosensory processes, which were expected to be recruited by SMF. We discuss neurocognitive and linguistic mechanisms which may explain speech improvement after SMF, as well as the advantages of using this speech rehabilitation method.

  16. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem.

    Directory of Open Access Journals (Sweden)

    Cai Wingfield

    2017-09-01

    There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental 'machine states', generated as the ASR analysis progresses over time, to the incremental 'brain states', measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech-to-lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain.
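    The brain-state/machine-state comparison is second-order: each space yields a similarity structure over the same stimuli, and the two structures are then correlated (representational similarity analysis). A minimal sketch on synthetic patterns; the stimulus set, dimensionalities, and noise levels are invented for illustration:

    ```python
    import numpy as np

    def rdm(patterns):
        """Representational dissimilarity matrix: 1 - Pearson r between rows."""
        return 1.0 - np.corrcoef(patterns)

    def upper(m):
        """Off-diagonal upper triangle of a square matrix, as a vector."""
        return m[np.triu_indices_from(m, k=1)]

    def spearman(x, y):
        """Spearman rank correlation (no ties in this toy data)."""
        rank = lambda v: np.argsort(np.argsort(v)).astype(float)
        return np.corrcoef(rank(x), rank(y))[0, 1]

    rng = np.random.default_rng(1)
    # 8 hypothetical speech stimuli in 2 phonetic categories; each space
    # observes the same underlying structure through its own 'sensors'.
    base = np.repeat(rng.normal(size=(2, 20)) * 2.0, 4, axis=0) \
           + 0.5 * rng.normal(size=(8, 20))
    brain_states = base + 0.3 * rng.normal(size=(8, 20))  # EMEG-like, noisy
    machine_states = base @ rng.normal(size=(20, 12))     # ASR feature space

    # The two spaces have different dimensionalities, but their stimulus
    # geometries can still be compared via their RDMs.
    rho = spearman(upper(rdm(brain_states)), upper(rdm(machine_states)))
    print(rho > 0.4)
    ```

    Because only the similarity structures are compared, the method never needs a direct mapping between sensors and model units, which is what makes brain-to-machine comparison tractable.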

  17. Speech misperception: speaking and seeing interfere differently with hearing.

    Directory of Open Access Journals (Sweden)

    Takemi Mochida

    Speech perception is thought to be linked to speech motor production. This linkage is considered to mediate multimodal aspects of speech perception, such as audio-visual and audio-tactile integration. However, direct coupling between articulatory movement and auditory perception has been little studied. The present study reveals a clear dissociation between the effects of a listener's own speech action and the effects of viewing another's speech movements on the perception of auditory phonemes. We assessed the intelligibility of the syllables [pa], [ta], and [ka] when listeners silently and simultaneously articulated syllables that were congruent/incongruent with the syllables they heard. The intelligibility was compared with a condition where the listeners simultaneously watched another's mouth producing congruent/incongruent syllables, but did not articulate. The intelligibility of [ta] and [ka] was degraded by articulating [ka] and [ta] respectively, which are associated with the same primary articulator (tongue) as the heard syllables. But it was not affected by articulating [pa], which is associated with a different primary articulator (lips) from the heard syllables. In contrast, the intelligibility of [ta] and [ka] was degraded by watching the production of [pa]. These results indicate that the articulatory-induced distortion of speech perception occurs in an articulator-specific manner, while visually induced distortion does not. The articulator-specific nature of the auditory-motor interaction in speech perception suggests that speech motor processing directly contributes to our ability to hear speech.

  18. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. The collected speech modalities (tongue motion, lip gestures, and voice) are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  19. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate.

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-06-01

    Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.
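    Voxel-based lesion-impairment mapping of the kind used above relates, voxel by voxel, the presence of damage to a continuous behavioral score, typically via a two-sample test at each voxel. A toy sketch with synthetic lesions and scores; the patient count, voxel count, and effect location are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical data: 40 patients, 6 voxels. lesion[i, v] = 1 if voxel v
    # is damaged in patient i; score[i] = speech-error severity (higher = worse).
    n_patients, n_voxels = 40, 6
    lesion = rng.integers(0, 2, size=(n_patients, n_voxels))
    # Make damage to voxel 2 actually drive severity in this toy example.
    score = 1.0 * lesion[:, 2] + rng.normal(0, 0.3, n_patients)

    def voxel_t(lesion_col, score):
        """Welch t statistic: lesioned vs spared patients at one voxel."""
        a, b = score[lesion_col == 1], score[lesion_col == 0]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return (a.mean() - b.mean()) / se

    t_map = np.array([voxel_t(lesion[:, v], score) for v in range(n_voxels)])
    print(int(np.argmax(t_map)))  # voxel 2 should carry the largest effect
    ```

    A real analysis additionally corrects for multiple comparisons across tens of thousands of voxels and for lesion-volume confounds; the sketch only shows the per-voxel statistic.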

  20. [Clinical characteristics and speech therapy of lingua-apical articulation disorder].

    Science.gov (United States)

    Zhang, Feng-hua; Jin, Xing-ming; Zhang, Yi-wen; Wu, Hong; Jiang, Fan; Shen, Xiao-ming

    2006-03-01

    To explore the clinical characteristics and speech therapy of 62 children with lingua-apical articulation disorder. The Peabody Picture Vocabulary Test (PPVT), the Gesell development scales (Gesell), the Wechsler Intelligence Scale for Preschool Children (WPPSI), and a speech test were administered to 62 children aged 3 to 8 years with lingua-apical articulation disorder. The PPVT was used to measure receptive vocabulary skills; the Gesell and WPPSI were used to assess cognitive and non-verbal ability; and the speech test was used to assess speech development. The children received speech therapy and auxiliary oral-motor functional training once or twice a week. First, the target sound was identified according to the speech development milestones; then the method of speech localization was used to establish the correct articulation placement and manner. For children with oral motor dysfunction, it was also necessary to modify food texture and administer oral-motor functional training. The 62 cases with apical articulation disorder were classified into four groups. The combined pattern of articulation disorder was the most common (40 cases, 64.5%), followed by apico-dental disorder (15 cases, 24.2%), palatal disorder (4 cases, 6.5%), and linguo-alveolar disorder (3 cases, 4.8%). Substitution errors of velars were the most common (95.2%), followed by omission errors (30.6%) and absence of aspiration (12.9%). Oral motor dysfunction was found in some children, with problems such as disordered joint movement of the tongue and head, unstable jaw, weak tongue strength, and poor coordination of tongue movement. Some children had feeding problems, such as a preference for soft food, keeping food in the mouth, eating slowly, and poor chewing. After 5 to 18 sessions of therapy, the effective rate of speech therapy reached 82.3%. The lingua-apical articulation disorders can be classified into four groups. The combined pattern of the

  1. Patterns of Post-Stroke Brain Damage that Predict Speech Production Errors in Apraxia of Speech and Aphasia Dissociate

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-01-01

    Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457

  2. Excitability of the motor system: A transcranial magnetic stimulation study on singing and speaking.

    Science.gov (United States)

    Royal, Isabelle; Lidji, Pascale; Théoret, Hugo; Russo, Frank A; Peretz, Isabelle

    2015-08-01

    The perception of movements is associated with increased activity in the human motor cortex, which in turn may underlie our ability to understand actions, as it may be implicated in the recognition, understanding and imitation of actions. Here, we investigated the involvement and lateralization of the primary motor cortex (M1) in the perception of singing and speech. Transcranial magnetic stimulation (TMS) was applied independently for both hemispheres over the mouth representation of the motor cortex in healthy participants while they watched 4-s audiovisual excerpts of singers producing a 2-note ascending interval (singing condition) or 4-s audiovisual excerpts of a person explaining a proverb (speech condition). Subjects were instructed to determine whether a sung interval/written proverb matched a written interval/proverb. During both tasks, motor evoked potentials (MEPs) were recorded from the contralateral mouth muscle (orbicularis oris) of the stimulated motor cortex compared to a control task. Moreover, to investigate the time course of motor activation, TMS pulses were randomly delivered at 7 different time points (ranging from 500 to 3500 ms after stimulus onset). Results show that stimulation of the right hemisphere had a similar effect on the MEPs for both the singing and speech perception tasks, whereas stimulation of the left hemisphere significantly differed in the speech perception task compared to the singing perception task. Furthermore, analysis of the MEPs in the singing task revealed that they decreased for small musical intervals, but increased for large musical intervals, regardless of which hemisphere was stimulated. Overall, these results suggest a dissociation between the lateralization of M1 activity for speech perception and for singing perception, and that in the latter case its activity can be modulated by musical parameters such as the size of a musical interval. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Relative Contributions of the Dorsal vs. Ventral Speech Streams to Speech Perception are Context Dependent: a lesion study

    Directory of Open Access Journals (Sweden)

    Corianne Rogalsky

    2014-04-01

    Full Text Available The neural basis of speech perception has been debated for over a century. While it is generally agreed that the superior temporal lobes are critical for the perceptual analysis of speech, a major current topic is whether the motor system contributes to speech perception, with several conflicting findings attested. In a dorsal-ventral speech stream framework (Hickok & Poeppel 2007), this debate is essentially about the roles of the dorsal versus ventral speech processing streams. A major roadblock in characterizing the neuroanatomy of speech perception is task-specific effects. For example, much of the evidence for dorsal stream involvement comes from syllable discrimination type tasks, which have been found to behaviorally doubly dissociate from auditory comprehension tasks (Baker et al. 1981). Discrimination task deficits could be a result of difficulty perceiving the sounds themselves, which is the typical assumption, or a result of failures in temporary maintenance of the sensory traces, or in the comparison and/or decision process. Similar complications arise in perceiving sentences: the extent of inferior frontal (i.e. dorsal stream) activation during listening to sentences increases as a function of increased task demands (Love et al. 2006). Another complication is the stimulus: much evidence for dorsal stream involvement uses speech samples lacking semantic context (CVs, non-words). The present study addresses these issues in a large-scale lesion-symptom mapping study. 158 patients with focal cerebral lesions from the Multi-site Aphasia Research Consortium underwent a structural MRI or CT scan, as well as an extensive psycholinguistic battery. Voxel-based lesion symptom mapping was used to compare the neuroanatomy involved in the following speech perception tasks with varying phonological, semantic, and task loads: (i) two discrimination tasks of syllables (non-words and words, respectively), (ii) two auditory comprehension tasks

  4. Motor Speech Apraxia in a 70-Year-Old Man with Left Dorsolateral Frontal Arachnoid Cyst: A [18F]FDG PET-CT Study

    Directory of Open Access Journals (Sweden)

    Nicolaas I. Bohnen

    2016-01-01

    Full Text Available Motor speech apraxia is a speech disorder of impaired syllable sequencing which, when seen with advancing age, is suggestive of a neurodegenerative process affecting cortical structures in the left frontal lobe. Arachnoid cysts can be associated with neurologic symptoms due to compression of underlying brain structures, though indications for surgical intervention are unclear. We present the case of a 70-year-old man who presented with a two-year history of speech changes along with decreased initiation and talkativeness, shorter utterances, and dysnomia. [18F]Fluorodeoxyglucose (FDG) Positron Emission and Computed Tomography (PET-CT) and magnetic resonance imaging (MRI) showed very focal left frontal cortical hypometabolism immediately adjacent to an arachnoid cyst but no specific evidence of a neurodegenerative process.

  5. The Role of Broca's Area in Speech Perception: Evidence from Aphasia Revisited

    Science.gov (United States)

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-01-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that…

  6. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  7. Speech Motor Development in Childhood Apraxia of Speech : Generating Testable Hypotheses by Neurocomputational Modeling

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  9. Motor function domains in alternating hemiplegia of childhood.

    Science.gov (United States)

    Masoud, Melanie; Gordon, Kelly; Hall, Amanda; Jasien, Joan; Lardinois, Kara; Uchitel, Julie; Mclean, Melissa; Prange, Lyndsey; Wuchich, Jeffrey; Mikati, Mohamad A

    2017-08-01

    To characterize motor function profiles in alternating hemiplegia of childhood, and to investigate interrelationships between these domains and with age. We studied a cohort of 23 patients (9 males, 14 females; mean age 9y 4mo, range 4mo-43y) who underwent standardized tests to assess gross motor, upper extremity motor control, motor speech, and dysphagia functions. Gross Motor Function Classification System (GMFCS), Gross Motor Function Measure-88 (GMFM-88), Manual Ability Classification System (MACS), and Revised Melbourne Assessment (MA2) scales manifested predominantly mild impairments; motor speech, moderate to severe; Modified Dysphagia Outcome and Severity Scale (M-DOSS), mild to moderate deficits. GMFCS correlated with GMFM-88 scores (Pearson's correlation, p=0.002), MACS (p=0.038), and MA2 fluency (p=0.005) and accuracy (p=0.038) scores. GMFCS did not correlate with motor speech (p=0.399), MA2 dexterity (p=0.247), range of motion (p=0.063), or M-DOSS (p=0.856). Motor speech was more severely impaired than the GMFCS (p<0.013). There was no correlation between any of the assessment tools and age (p=0.210-0.798). Our data establish a detailed profile of motor function in alternating hemiplegia of childhood, argue against the presence of worse motor function in older patients, identify tools helpful in evaluating this population, and identify oropharyngeal function as the most severely affected domain, suggesting that brain areas controlling this function are more affected than others. © 2017 Mac Keith Press.
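The Pearson correlations reported above can be reproduced with a few lines of code. A minimal sketch, in pure Python; the `pearson_r` helper and the per-patient scores below are invented for illustration and are not the study's actual data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-patient scores: GMFCS level (higher = more impaired)
# against GMFM-88 total (higher = better function).
gmfcs = [1, 1, 2, 2, 3, 3, 4]
gmfm88 = [95, 92, 88, 85, 70, 72, 55]
r = pearson_r(gmfcs, gmfm88)  # strongly negative, as the scales run in opposite directions
```

In practice one would also compute a p-value for each r (e.g. via a t-test on r with n-2 degrees of freedom), which is what the p-values quoted in the abstract refer to.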

  10. Prosodic influences on speech production in children with specific language impairment and speech deficits: kinematic, acoustic, and transcription evidence.

    Science.gov (United States)

    Goffman, L

    1999-12-01

    It is often hypothesized that young children's difficulties with producing weak-strong (iambic) prosodic forms arise from perceptual or linguistically based production factors. A third possible contributor to errors in the iambic form may be biological constraints, or biases, of the motor system. In the present study, 7 children with specific language impairment (SLI) and speech deficits were matched to same-age peers. Multiple levels of analysis, including kinematic (modulation and stability of movement), acoustic, and transcription, were applied to children's productions of iambic (weak-strong) and trochaic (strong-weak) prosodic forms. Findings suggest that a motor bias toward producing unmodulated rhythmic articulatory movements, similar to that observed in canonical babbling, contributes to children's acquisition of metrical forms. Children with SLI and speech deficits show less mature segmental and speech motor systems, as well as decreased modulation of movement in later developing iambic forms. Further, components of prosodic and segmental acquisition develop independently and at different rates.

  11. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    Science.gov (United States)

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single-subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech movement parameters and perceptual accuracy. Analysis of the speech probe data showed that all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance level on the motor-speech movement patterns targeted during that phase of intervention. The preliminary data presented in this study contribute evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.

  13. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech

  14. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    NARCIS (Netherlands)

    Kriengwatana, B.; Escudero, P.; Kerkhoven, A.H.; ten Cate, C.

    2015-01-01

    Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique only to humans, is still

  15. Impaired Feedforward Control and Enhanced Feedback Control of Speech in Patients with Cerebellar Degeneration.

    Science.gov (United States)

    Parrell, Benjamin; Agnew, Zarinah; Nagarajan, Srikantan; Houde, John; Ivry, Richard B

    2017-09-20

    The cerebellum has been hypothesized to form a crucial part of the speech motor control network. Evidence for this comes from patients with cerebellar damage, who exhibit a variety of speech deficits, as well as imaging studies showing cerebellar activation during speech production in healthy individuals. To date, the precise role of the cerebellum in speech motor control remains unclear, as it has been implicated in both anticipatory (feedforward) and reactive (feedback) control. Here, we assess both anticipatory and reactive aspects of speech motor control, comparing the performance of patients with cerebellar degeneration and matched controls. Experiment 1 tested feedforward control by examining speech adaptation across trials in response to a consistent perturbation of auditory feedback. Experiment 2 tested feedback control, examining online corrections in response to inconsistent perturbations of auditory feedback. Both male and female patients and controls were tested. The patients were impaired in adapting their feedforward control system relative to controls, exhibiting an attenuated anticipatory response to the perturbation. In contrast, the patients produced even larger compensatory responses than controls, suggesting an increased reliance on sensory feedback to guide speech articulation in this population. Together, these results suggest that the cerebellum is crucial for maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thought to rely on both predictive, feedforward control as well as reactive, feedback control. While the cerebellum has been shown to be part of the speech motor control network, its functional contribution to feedback and feedforward control remains controversial. Here, we use real-time auditory perturbations of speech to show that patients with cerebellar degeneration are impaired in adapting feedforward control of

  16. Dynamic encoding of speech sequence probability in human temporal cortex.

    Science.gov (United States)

    Leonard, Matthew K; Bouchard, Kristofer E; Tang, Claire; Chang, Edward F

    2015-05-06

    Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. Copyright © 2015 the authors.
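The transition probabilities between sound segments at the heart of this design can be estimated from a corpus by simple bigram counting. A minimal sketch; the `transition_probs` helper and the toy phoneme "corpus" are invented for illustration:

```python
from collections import Counter, defaultdict

def transition_probs(sequences):
    """Estimate P(next segment | current segment) from segment sequences."""
    pair_counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            pair_counts[cur][nxt] += 1
    # Normalize each row of counts into a conditional probability distribution.
    return {
        cur: {nxt: c / sum(counts.values()) for nxt, c in counts.items()}
        for cur, counts in pair_counts.items()
    }

# Toy corpus of phoneme strings (invented for illustration).
corpus = [["k", "ae", "t"], ["k", "ae", "p"], ["k", "ow", "t"]]
probs = transition_probs(corpus)
# probs["k"]["ae"] == 2/3: /ae/ followed /k/ in 2 of the 3 transitions out of /k/.
```

Stimuli in a study like the one above would then be chosen so that these conditional probabilities vary systematically between words and nonwords.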

  17. The selective role of premotor cortex in speech perception: a contribution to phoneme judgements but not speech comprehension.

    Science.gov (United States)

    Krieger-Redwood, Katya; Gaskell, M Gareth; Lindsay, Shane; Jefferies, Elizabeth

    2013-12-01

    Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes, across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus, and occipital pole) and, for the first time within the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to two further sites confirmed that this pattern was site specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, crucially, PMC is not necessary for mapping speech onto meanings.

  18. Implications of diadochokinesia in children with speech sound disorder.

    Science.gov (United States)

    Wertzner, Haydée Fiszbein; Pagan-Neves, Luciana de Oliveira; Alves, Renata Ramos; Barrozo, Tatiane Faria

    2013-01-01

    To verify the performance of children with and without speech sound disorder in oral motor skills measured by oral diadochokinesia according to age and gender, and to compare the results of two different methods of analysis. Participants were 72 subjects aged from 5 years to 7 years and 11 months, divided into four subgroups according to the presence of speech sound disorder (Study Group and Control Group) and age (6 years and 5 months). Diadochokinesia skills were assessed by the repetition of the sequences 'pa', 'ta', 'ka' and 'pataka', measured both manually and by the software Motor Speech Profile®. Gender was statistically different for both groups but did not influence the number of sequences per second produced. Correlation between the number of sequences per second and age was observed for all sequences (except for 'ka') only for the control group children. Comparison between groups did not indicate differences between the number of sequences per second and age. Results presented strong agreement between the values of oral diadochokinesia measured manually and by MSP. This research demonstrated the importance of using different methods of analysis in the functional evaluation of oro-motor processing aspects of children with speech sound disorder, and evidenced the oro-motor difficulties in children younger than 8 years old.

  19. Small intragenic deletion in FOXP2 associated with childhood apraxia of speech and dysarthria.

    Science.gov (United States)

    Turner, Samantha J; Hildebrand, Michael S; Block, Susan; Damiano, John; Fahey, Michael; Reilly, Sheena; Bahlo, Melanie; Scheffer, Ingrid E; Morgan, Angela T

    2013-09-01

    Relatively little is known about the neurobiological basis of speech disorders although genetic determinants are increasingly recognized. The first gene for primary speech disorder was FOXP2, identified in a large, informative family with verbal and oral dyspraxia. Subsequently, many de novo and familial cases with a severe speech disorder associated with FOXP2 mutations have been reported. These mutations include sequencing alterations, translocations, uniparental disomy, and genomic copy number variants. We studied eight probands with speech disorder and their families. Family members were phenotyped using a comprehensive assessment of speech, oral motor function, language, literacy skills, and cognition. Coding regions of FOXP2 were screened to identify novel variants. Segregation of the variant was determined in the probands' families. Variants were identified in two probands. One child with severe motor speech disorder had a small de novo intragenic FOXP2 deletion. His phenotype included features of childhood apraxia of speech and dysarthria, oral motor dyspraxia, receptive and expressive language disorder, and literacy difficulties. The other variant was found in a family in two of three family members with stuttering, and also in the mother with oral motor impairment. This variant was considered a benign polymorphism as it was predicted to be non-pathogenic with in silico tools and found in database controls. This is the first report of a small intragenic deletion of FOXP2 that is likely to be the cause of severe motor speech disorder associated with language and literacy problems. Copyright © 2013 Wiley Periodicals, Inc.

  20. Central Timing Deficits in Subtypes of Primary Speech Disorders

    Science.gov (United States)

    Peter, Beate; Stoel-Gammon, Carol

    2008-01-01

    Childhood apraxia of speech (CAS) is a proposed speech disorder subtype that interferes with motor planning and/or programming, affecting prosody in many cases. Pilot data (Peter & Stoel-Gammon, 2005) were consistent with the notion that deficits in timing accuracy in speech and music-related tasks may be associated with CAS. This study…

  1. Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study.

    Directory of Open Access Journals (Sweden)

    Catherine Y Wan

    Full Text Available Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

  2. Effects of a Conversation-Based Intervention on the Linguistic Skills of Children with Motor Speech Disorders Who Use Augmentative and Alternative Communication

    Science.gov (United States)

    Soto, Gloria; Clarke, Michael T.

    2017-01-01

    Purpose: This study was conducted to evaluate the effects of a conversation-based intervention on the expressive vocabulary and grammatical skills of children with severe motor speech disorders and expressive language delay who use augmentative and alternative communication. Method: Eight children aged from 8 to 13 years participated in the study.…

  3. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    Full Text Available In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research established the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech, as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively
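The hub analysis described above rests on turning a functional-connectivity (correlation) matrix into a graph and ranking regions by how densely they are connected. A minimal degree-based sketch in pure Python; the `find_hubs` helper, the region list, and the correlation matrix are invented for illustration (real graph-theoretical analyses use weighted networks and richer hub criteria such as the participation coefficient):

```python
def find_hubs(regions, corr, threshold=0.5):
    """Threshold a symmetric correlation matrix into an undirected graph
    and return the regions whose degree exceeds the mean degree."""
    n = len(regions)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):  # upper triangle only; ignore self-correlations
            if corr[i][j] >= threshold:
                degree[i] += 1
                degree[j] += 1
    mean_deg = sum(degree) / n
    return [r for r, d in zip(regions, degree) if d > mean_deg]

# Invented example: area 4p is densely connected to the other regions.
regions = ["4p", "insula", "putamen", "thalamus"]
corr = [
    [1.0, 0.8, 0.7, 0.6],
    [0.8, 1.0, 0.4, 0.3],
    [0.7, 0.4, 1.0, 0.2],
    [0.6, 0.3, 0.2, 1.0],
]
hubs = find_hubs(regions, corr)  # ["4p"]
```

With the toy matrix above, only area 4p exceeds the mean degree and is flagged as a hub, mirroring in miniature the kind of core-hub result the study reports.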

  4. Zebra finches are sensitive to prosodic features of human speech.

    Science.gov (United States)

    Spierings, Michelle J; ten Cate, Carel

    2014-07-22

    Variation in pitch, amplitude and rhythm adds crucial paralinguistic information to human speech. Such prosodic cues can reveal information about the meaning or emphasis of a sentence or the emotional state of the speaker. To examine the hypothesis that sensitivity to prosodic cues is language independent and not human specific, we tested prosody perception in a controlled experiment with zebra finches. Using a go/no-go procedure, subjects were trained to discriminate between speech syllables arranged in XYXY patterns with prosodic stress on the first syllable and XXYY patterns with prosodic stress on the final syllable. To systematically determine the salience of the various prosodic cues (pitch, duration and amplitude) to the zebra finches, they were subjected to five tests with different combinations of these cues. The zebra finches generalized the prosodic pattern to sequences that consisted of new syllables and used prosodic features over structural ones to discriminate between stimuli. This strong sensitivity to the prosodic pattern was maintained when only a single prosodic cue was available. The change in pitch was treated as more salient than changes in the other prosodic features. These results show that zebra finches are sensitive to the same prosodic cues known to affect human speech perception. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Communication Deficits and the Motor System: Exploring Patterns of Associations in Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    Mody, M.; Shui, A. M.; Nowinski, L. A.; Golas, S. B.; Ferrone, C.; O'Rourke, J. A.; McDougle, C. J.

    2017-01-01

    Many children with autism spectrum disorder (ASD) have notable difficulties in motor, speech and language domains. The connection between motor skills (oral-motor, manual-motor) and speech and language deficits reported in other developmental disorders raises important questions about a potential relationship between motor skills and…

  6. Effects of Feedback Frequency and Timing on Acquisition, Retention, and Transfer of Speech Skills in Acquired Apraxia of Speech

    Science.gov (United States)

    Hula, Shannon N. Austermann; Robin, Donald A.; Maas, Edwin; Ballard, Kirrie J.; Schmidt, Richard A.

    2008-01-01

    Purpose: Two studies examined speech skill learning in persons with apraxia of speech (AOS). Motor-learning research shows that delaying or reducing the frequency of feedback promotes retention and transfer of skills. By contrast, immediate or frequent feedback promotes temporary performance enhancement but interferes with retention and transfer.…

  7. Speech-associated gestures, Broca’s area, and the human mirror system

    Science.gov (United States)

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001

  8. Vestibular stimulation after head injury: effect on reaction times and motor speech parameters

    DEFF Research Database (Denmark)

    Engberg, A

    1989-01-01

    Earlier studies by other authors indicate that vestibular stimulation may improve attention and dysarthria in head injured patients. In the present study of five severely head injured patients and five controls, the effect of vestibular stimulation on reaction times (reflecting attention) and some...... motor speech parameters (reflecting dysarthria) was investigated. After eight weeks with regular stimulation, it was concluded that reaction time changes were individual and consistent for a given subject. Only occasionally were they shortened after stimulation. However, reaction time was lengthened...... in three cases, prohibiting further stimulation in one case. Motion sickness was prohibitive in a second case. However, after-stimulation increase of phonation time and/or vital capacity was found in one patient and four controls. Oral diadochokinetic rates were slowed in several cases. Collectively, when...

  9. Tools for the assessment of childhood apraxia of speech.

    Science.gov (United States)

    Gubiani, Marileda Barichello; Pagliarin, Karina Carlesso; Keske-Soares, Marcia

    2015-01-01

    This study systematically reviews the literature on the main tools used to evaluate childhood apraxia of speech (CAS). The search strategy includes Scopus, PubMed, and Embase databases. Empirical studies that used tools for assessing CAS were selected. Articles were selected by two independent researchers. The search retrieved 695 articles, out of which 12 were included in the study. Five tools were identified: Verbal Motor Production Assessment for Children, Dynamic Evaluation of Motor Speech Skill, The Orofacial Praxis Test, Kaufman Speech Praxis Test for Children, and Madison Speech Assessment Protocol. There are few instruments available for CAS assessment and most of them are intended to assess praxis and/or orofacial movements, sequences of orofacial movements, articulation of syllables and phonemes, spontaneous speech, and prosody. There are some tests for assessment and diagnosis of CAS. However, few studies on this topic have been conducted at the national level, and few protocols are available to assess and assist in an accurate diagnosis.

  10. Rural and remote speech-language pathology service inequities: An Australian human rights dilemma.

    Science.gov (United States)

    Jones, Debra M; McAllister, Lindy; Lyle, David M

    2018-02-01

    Access to healthcare is a fundamental human right for all Australians. Article 19 of the Universal Declaration of Human Rights acknowledges the right to freedom of opinion and to seek, receive and impart information and ideas. Capacities for self-expression and effective communication underpin the realisation of these fundamental human rights. For rural and remote Australian children this realisation is compromised by complex disadvantages and inequities that contribute to communication delays, inequity of access to essential speech-language pathology services and poorer later life outcomes. Localised solutions to the provision of civically engaged, accessible, acceptable and sustainable speech-language pathology services within rural and remote Australian contexts are required if we are to make substantive human rights gains. However, civically engaged and sustained healthcare can significantly challenge traditional professionalised perspectives on how best to design and implement speech-language pathology services that seek to address rural and remote communication needs and access inequities. A failure to engage these communities in the identification of childhood communication delays and solutions to address these delays ultimately denies children, families and communities their human rights to healthcare access, self-expression, self-dignity and meaningful inclusion within Australian society.

  11. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
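
    The leave-one-participant-out scheme described above can be sketched as follows. This is a hypothetical illustration on synthetic data, and it substitutes a plain nearest-centroid classifier for the study's Bayesian logistic regression with sparsity-promoting priors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "voxel patterns": 16 participants x 10 trials x 50 voxels;
# class 1 (audiovisual) adds a small offset to the first 5 voxels.
n_subj, n_trials, n_vox = 16, 10, 50
X = rng.standard_normal((n_subj, n_trials, n_vox))
y = rng.integers(0, 2, size=(n_subj, n_trials))
X[..., :5] += y[..., None] * 1.5

accuracies = []
for test_s in range(n_subj):
    train = [s for s in range(n_subj) if s != test_s]
    Xtr = X[train].reshape(-1, n_vox)
    ytr = y[train].reshape(-1)
    # Nearest-centroid stand-in for the study's Bayesian classifier.
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test_s] - c1, axis=1)
            < np.linalg.norm(X[test_s] - c0, axis=1)).astype(int)
    accuracies.append((pred == y[test_s]).mean())

print("mean leave-one-participant-out accuracy:",
      round(float(np.mean(accuracies)), 3))
```

    Holding out an entire participant, rather than individual trials, is what allows the classifier's performance to generalize across subjects rather than across trials within a subject.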

  12. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  13. Profiling Speech and Pausing in Amyotrophic Lateral Sclerosis (ALS) and Frontotemporal Dementia (FTD).

    Directory of Open Access Journals (Sweden)

    Yana Yunusova

    Full Text Available This study examines reading aloud in patients with amyotrophic lateral sclerosis (ALS) and those with frontotemporal dementia (FTD) in order to determine whether differences in patterns of speaking and pausing exist between patients with primary motor vs. primary cognitive-linguistic deficits, and in contrast to healthy controls. 136 participants were included in the study: 33 controls, 85 patients with ALS, and 18 patients with either the behavioural variant of FTD (FTD-BV) or progressive nonfluent aphasia (FTD-PNFA). Participants with ALS were further divided into 4 non-overlapping subgroups--mild, respiratory, bulbar (with oral-motor deficit) and bulbar-respiratory--based on the presence and severity of motor bulbar or respiratory signs. All participants read a passage aloud. Custom-made software was used to perform speech and pause analyses, and this provided measures of speaking and articulatory rates, duration of speech, and number and duration of pauses. These measures were statistically compared in different subgroups of patients. The results revealed clear differences between patient groups and healthy controls on the passage reading task. A speech-based motor function measure (i.e., articulatory rate) was able to distinguish patients with bulbar ALS or FTD-PNFA from those with respiratory ALS or FTD-BV. Distinguishing the disordered groups proved challenging based on the pausing measures. This study demonstrated the use of speech measures in the identification of those with an oral-motor deficit, and showed the usefulness of performing a relatively simple reading test to assess speech versus pause behaviors across the ALS-FTD disease continuum. The findings also suggest that motor speech assessment should be performed as part of the diagnostic workup for patients with FTD.
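
    The speech and pause measures described above can be derived from a thresholded intensity envelope. The study's custom software is not described in detail, so the following is an assumed minimal reconstruction on a synthetic envelope (sampling rate and threshold are illustrative):

```python
import numpy as np

fs = 100                    # envelope frames per second (assumed)
env = np.zeros(1000)        # 10 s recording
env[50:300] = 1.0           # speech burst 1: 2.5 s
env[380:700] = 1.0          # speech burst 2: 3.2 s
env[760:950] = 1.0          # speech burst 3: 1.9 s

speech = (env > 0.5).astype(int)   # frame-wise voice-activity decision

# Pad with silence and diff to find where speech runs start and end.
d = np.diff(np.concatenate(([0], speech, [0])))
onsets = np.flatnonzero(d == 1)
offsets = np.flatnonzero(d == -1)

speech_durs = (offsets - onsets) / fs
pause_durs = (onsets[1:] - offsets[:-1]) / fs   # pauses between bursts

# With a syllable count from a transcript, speaking rate would be
# syllables / total duration, and articulatory rate syllables / speech time.
print("speech segments:", len(speech_durs))
print("total speech time (s):", float(speech_durs.sum()))
print("pauses:", len(pause_durs))
```

    The distinction between speaking rate (including pauses) and articulatory rate (speech time only) is exactly what lets the articulatory-rate measure isolate the oral-motor deficit reported here.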

  14. Axon guidance pathways served as common targets for human speech/language evolution and related disorders.

    Science.gov (United States)

    Lei, Huimeng; Yan, Zhangming; Sun, Xiaohong; Zhang, Yue; Wang, Jianhong; Ma, Caihong; Xu, Qunyuan; Wang, Rui; Jarvis, Erich D; Sun, Zhirong

    2017-11-01

    Humans and several nonhuman species share the rare ability of modifying acoustic and/or syntactic features of sounds produced, i.e. vocal learning, which is an important neurobiological and behavioral substrate of human speech/language. This convergent trait was suggested to be associated with significant genomic convergence, best manifested in the ROBO-SLIT axon guidance pathway. Here we verified the significance of such genomic convergence and assessed its functional relevance to human speech/language using human genetic variation data. In normal human populations, we found the affected amino acid sites were well fixed and accompanied by significantly more associated protein-coding SNPs in the same genes than in the remaining genes. Diseased individuals with speech/language disorders have significantly more low-frequency protein-coding SNPs, but these preferentially occurred outside the affected genes. Such patients' SNPs were enriched in several functional categories, including two axon guidance pathways (mediated by netrin and semaphorin) that interact with ROBO-SLITs. Four of the six patients have homozygous missense SNPs in the PRAME gene family, one of the youngest gene families in the human lineage, which possibly acts upon retinoic acid receptor signaling, similarly to FOXP2, to modulate axon guidance. Taken together, we suggest that the axon guidance pathways (e.g. ROBO-SLIT, PRAME gene family) served as common targets for human speech/language evolution and related disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. An overview of neural function and feedback control in human communication.

    Science.gov (United States)

    Hood, L J

    1998-01-01

    The speech and hearing mechanisms depend on accurate sensory information and intact feedback mechanisms to facilitate communication. This article provides a brief overview of some components of the nervous system important for human communication and some electrophysiological methods used to measure cortical function in humans. An overview of automatic control and feedback mechanisms in general and as they pertain to the speech motor system and control of the hearing periphery is also presented, along with a discussion of how the speech and auditory systems interact.

  16. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

    Buddhamas eKriengwatana

    2015-08-01

    Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults’ (naïve to Dutch) and zebra finches’ (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  17. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech sylla...

  18. Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.

    Directory of Open Access Journals (Sweden)

    Mahdi Mahmoudzadeh

    Full Text Available Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination is required to resolve rapid acoustic events, voice perception relies on slower cues. Humans, right from preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterms to those observed in other mammals, we tested anesthetized adult rats by using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to a series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats more effectively processed the speech envelope than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool to define the singularities of the human brain and species-specific biases that may help human infants to learn their native language.

  19. Characterizing a neurodegenerative syndrome: primary progressive apraxia of speech.

    Science.gov (United States)

    Josephs, Keith A; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Senjem, Matthew L; Master, Ankit V; Lowe, Val J; Jack, Clifford R; Whitwell, Jennifer L

    2012-05-01

    Apraxia of speech is a disorder of speech motor planning and/or programming that is distinguishable from aphasia and dysarthria. It most commonly results from vascular insults but can occur in degenerative diseases where it has typically been subsumed under aphasia, or it occurs in the context of more widespread neurodegeneration. The aim of this study was to determine whether apraxia of speech can present as an isolated sign of neurodegenerative disease. Between July 2010 and July 2011, 37 subjects with a neurodegenerative speech and language disorder were prospectively recruited and underwent detailed speech and language, neurological, neuropsychological and neuroimaging testing. The neuroimaging battery included 3.0 tesla volumetric head magnetic resonance imaging, [(18)F]-fluorodeoxyglucose and [(11)C] Pittsburgh compound B positron emission tomography scanning. Twelve subjects were identified as having apraxia of speech without any signs of aphasia based on a comprehensive battery of language tests; hence, none met criteria for primary progressive aphasia. These subjects with primary progressive apraxia of speech included eight females and four males, with a mean age of onset of 73 years (range: 49-82). There were no specific additional shared patterns of neurological or neuropsychological impairment in the subjects with primary progressive apraxia of speech, but there was individual variability. Some subjects, for example, had mild features of behavioural change, executive dysfunction, limb apraxia or Parkinsonism. Voxel-based morphometry of grey matter revealed focal atrophy of superior lateral premotor cortex and supplementary motor area. Voxel-based morphometry of white matter showed volume loss in these same regions but with extension of loss involving the inferior premotor cortex and body of the corpus callosum. 
These same areas of white matter loss were observed with diffusion tensor imaging analysis, which also demonstrated reduced fractional anisotropy

  20. Subtyping Children with Speech Sound Disorders by Endophenotypes

    Science.gov (United States)

    Lewis, Barbara A.; Avrich, Allison A.; Freebairn, Lisa A.; Taylor, H. Gerry; Iyengar, Sudha K.; Stein, Catherine M.

    2011-01-01

    Purpose: The present study examined associations of 5 endophenotypes (i.e., measurable skills that are closely associated with speech sound disorders and are useful in detecting genetic influences on speech sound production), oral motor skills, phonological memory, phonological awareness, vocabulary, and speeded naming, with 3 clinical criteria…

  1. Acquired apraxia of speech: features, accounts, and treatment.

    Science.gov (United States)

    Peach, Richard K

    2004-01-01

    The features of apraxia of speech (AOS) are presented with regard to both traditional and contemporary descriptions of the disorder. Models of speech processing, including the neurological bases for apraxia of speech, are discussed. Recent findings concerning subcortical contributions to apraxia of speech and the role of the insula are presented. The key features to differentially diagnose AOS from related speech syndromes are identified. Treatment implications derived from motor accounts of AOS are presented along with a summary of current approaches designed to treat the various subcomponents of the disorder. Finally, guidelines are provided for treating the AOS patient with coexisting aphasia.

  2. [Modeling developmental aspects of sensorimotor control of speech production].

    Science.gov (United States)

    Kröger, B J; Birkholz, P; Neuschaefer-Rube, C

    2007-05-01

    Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of sensorimotor control of speech production was developed. The model is capable of demonstrating in detail the neural functions of different cortical areas during speech production. (i) Two sensory and two motor maps or neural representations and the appertaining neural mappings or projections establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from the lexical to the sensory and motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases, i.e. silent mouthing, quasi-stationary vocalic articulation, and realisation of articulatory protogestures, can be defined on the basis of our simulation studies using the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular, it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.

  3. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  4. Evidence That Bimanual Motor Timing Performance Is Not a Significant Factor in Developmental Stuttering

    Science.gov (United States)

    Hilger, Allison I.; Zelaznik, Howard; Smith, Anne

    2016-01-01

    Purpose: Stuttering involves a breakdown in the speech motor system. We address whether stuttering in its early stage is specific to the speech motor system or whether its impact is observable across motor systems. Method: As an extension of Olander, Smith, and Zelaznik (2010), we measured bimanual motor timing performance in 115 children: 70…

  5. Profiling Speech and Pausing in Amyotrophic Lateral Sclerosis (ALS) and Frontotemporal Dementia (FTD)

    Science.gov (United States)

    Yunusova, Yana; Graham, Naida L.; Shellikeri, Sanjana; Phuong, Kent; Kulkarni, Madhura; Rochon, Elizabeth; Tang-Wai, David F.; Chow, Tiffany W.; Black, Sandra E.; Zinman, Lorne H.; Green, Jordan R.

    2016-01-01

    Objective This study examines reading aloud in patients with amyotrophic lateral sclerosis (ALS) and those with frontotemporal dementia (FTD) in order to determine whether differences in patterns of speaking and pausing exist between patients with primary motor vs. primary cognitive-linguistic deficits, and in contrast to healthy controls. Design 136 participants were included in the study: 33 controls, 85 patients with ALS, and 18 patients with either the behavioural variant of FTD (FTD-BV) or progressive nonfluent aphasia (FTD-PNFA). Participants with ALS were further divided into 4 non-overlapping subgroups—mild, respiratory, bulbar (with oral-motor deficit) and bulbar-respiratory—based on the presence and severity of motor bulbar or respiratory signs. All participants read a passage aloud. Custom-made software was used to perform speech and pause analyses, and this provided measures of speaking and articulatory rates, duration of speech, and number and duration of pauses. These measures were statistically compared in different subgroups of patients. Results The results revealed clear differences between patient groups and healthy controls on the passage reading task. A speech-based motor function measure (i.e., articulatory rate) was able to distinguish patients with bulbar ALS or FTD-PNFA from those with respiratory ALS or FTD-BV. Distinguishing the disordered groups proved challenging based on the pausing measures. Conclusions and Relevance This study demonstrated the use of speech measures in the identification of those with an oral-motor deficit, and showed the usefulness of performing a relatively simple reading test to assess speech versus pause behaviors across the ALS—FTD disease continuum. The findings also suggest that motor speech assessment should be performed as part of the diagnostic workup for patients with FTD. PMID:26789001

  6. Broca’s Area as a Pre-articulatory Phonetic Encoder: Gating the Motor Program

    Directory of Open Access Journals (Sweden)

    Valentina Ferpozzi

    2018-02-01

    Full Text Available The exact nature of the role of Broca’s area in control of speech, and whether that control is exerted at the cognitive or at the motor level, is still debated. Intraoperative evidence of a lack of motor responses to direct electrical stimulation (DES) of Broca’s area, and the observation that its stimulation induces a “speech arrest” without an apparent effect on the ongoing activity of phono-articulatory muscles, fuels the debate. Essentially, attribution of direct involvement of Broca’s area in motor control of speech requires evidence of a functional connection of this area with the phono-articulatory muscles’ motoneurons. With a quantitative approach we investigated, in 20 patients undergoing surgery for brain tumors, whether DES delivered on Broca’s area affects the recruitment of the phono-articulatory muscles’ motor units. The electromyography (EMG) of the muscles active during two speech tasks (object picture naming and counting) was recorded during and in the absence of DES on Broca’s area. Offline, the EMG of each muscle was analyzed in the frequency (power spectrum, PS) and time domains (root mean square, RMS) and the two conditions compared. Results show that DES on Broca’s area induces an intensity-dependent “speech arrest.” The intensity of DES needed to induce “speech arrest” when applied on Broca’s area was higher when compared to the intensity effective on the neighboring pre-motor/motor cortices. Notably, PS and RMS measured on the EMG recorded during “speech arrest” were superimposable to those recorded at baseline. Partial interruptions of speech were not observed. Speech arrest was an “all-or-none” effect: muscle activation started only on removing DES, as if DES prevented speech onset. The same effect was observed when stimulating directly the subcortical fibers running below Broca’s area. Intraoperative data point to Broca’s area as a functional gate authorizing the phonetic translation to be executed
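
    The two EMG summaries compared in this study, root mean square in the time domain and power spectrum in the frequency domain, can be sketched as follows on a synthetic signal; the sampling rate and the signal itself are assumptions for illustration:

```python
import numpy as np

fs = 2000                       # Hz; an assumed EMG sampling rate
t = np.arange(0, 1.0, 1 / fs)

rng = np.random.default_rng(2)
# Synthetic "EMG": a windowed noise burst standing in for motor-unit
# activity during a speech task (purely illustrative).
emg = rng.standard_normal(t.size) * np.hanning(t.size)

# Time-domain summary: root mean square (RMS).
rms = np.sqrt(np.mean(emg ** 2))

# Frequency-domain summary: one-sided power spectrum via the FFT.
spec = np.abs(np.fft.rfft(emg)) ** 2 / emg.size
freqs = np.fft.rfftfreq(emg.size, d=1 / fs)

# Comparing RMS and PS between DES-on and baseline epochs is the kind
# of contrast reported (superimposable values during "speech arrest").
print("RMS:", round(float(rms), 4))
```

    Superimposable RMS and PS under stimulation, as the study reports, would mean the motor units themselves are recruited normally, supporting the interpretation of Broca's area as a gate upstream of the motor program.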

  7. The Relationship Between Apraxia of Speech and Oral Apraxia: Association or Dissociation?

    Science.gov (United States)

    Whiteside, Sandra P; Dyson, Lucy; Cowell, Patricia E; Varley, Rosemary A

    2015-11-01

    Acquired apraxia of speech (AOS) is a motor speech disorder that affects the implementation of articulatory gestures and the fluency and intelligibility of speech. Oral apraxia (OA) is an impairment of nonspeech volitional movement. Although many speakers with AOS also display difficulties with volitional nonspeech oral movements, the relationship between the 2 conditions is unclear. This study explored the relationship between speech and volitional nonspeech oral movement impairment in a sample of 50 participants with AOS. We examined levels of association and dissociation between speech and OA using a battery of nonspeech oromotor, speech, and auditory/aphasia tasks. There was evidence of a moderate positive association between the 2 impairments across participants. However, individual profiles revealed patterns of dissociation between the 2 in a few cases, with evidence of double dissociation of speech and oral apraxic impairment. We discuss the implications of these relationships for models of oral motor and speech control. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
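
    One crude way to quantify the simultaneity-judgment shift the authors describe is to compare the centroid of the proportion-"simultaneous" curve before and after adaptation. The delays and response proportions below are invented for illustration; the study's actual analysis is not reproduced here:

```python
import numpy as np

# Probe delays (ms) between a motor act and its auditory feedback, and
# invented proportions of "simultaneous" responses at each delay,
# before and after adapting to 200 ms delayed auditory feedback.
soas = np.array([0, 66, 133, 200, 266, 333], dtype=float)
p_before = np.array([0.90, 0.75, 0.45, 0.20, 0.10, 0.05])
p_after  = np.array([0.55, 0.70, 0.85, 0.75, 0.45, 0.20])

def pss(soas, p):
    """Centroid of the simultaneity curve: a crude PSS estimate."""
    return float(np.sum(soas * p) / np.sum(p))

# After adaptation the curve's mass moves toward the adapted delay.
shift = pss(soas, p_after) - pss(soas, p_before)
print("PSS shift toward the adapted delay (ms):", round(shift, 1))
```

    A positive shift of this kind is what "temporal recalibration" denotes: delays near the adapted lag come to feel simultaneous.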

  9. Cognitive Flexibility in Children with and without Speech Disorder

    Science.gov (United States)

    Crosbie, Sharon; Holm, Alison; Dodd, Barbara

    2009-01-01

    Most children's speech difficulties are "functional" (i.e. no known sensory, motor or intellectual deficits). Speech disorder may, however, be associated with cognitive deficits considered core abilities in executive function: rule abstraction and cognitive flexibility. The study compares the rule abstraction and cognitive flexibility of…

  10. Auditory feedback perturbation in children with developmental speech disorders

    NARCIS (Netherlands)

    Terband, H.R.; van Brenk, F.J.; van Doornik-van der Zee, J.C.

    2014-01-01

    Background/purpose: Several studies indicate a close relation between auditory and speech motor functions in children with speech sound disorders (SSD). The aim of this study was to investigate the ability to compensate and adapt for perturbed auditory feedback in children with SSD compared to

  11. Predicting clinical decline in progressive agrammatic aphasia and apraxia of speech.

    Science.gov (United States)

    Whitwell, Jennifer L; Weigand, Stephen D; Duffy, Joseph R; Clark, Heather M; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Senjem, Matthew L; Jack, Clifford R; Josephs, Keith A

    2017-11-28

    To determine whether baseline clinical and MRI features predict rate of clinical decline in patients with progressive apraxia of speech (AOS). Thirty-four patients with progressive AOS, with AOS either in isolation or in the presence of agrammatic aphasia, were followed up longitudinally for up to 4 visits, with clinical testing and MRI at each visit. Linear mixed-effects regression models including all visits (n = 94) were used to assess baseline clinical and MRI variables that predict rate of worsening of aphasia, motor speech, parkinsonism, and behavior. Clinical predictors included baseline severity and AOS type. MRI predictors included baseline frontal, premotor, motor, and striatal gray matter volumes. More severe parkinsonism at baseline was associated with faster rate of decline in parkinsonism. Patients with predominant sound distortions (AOS type 1) showed faster rates of decline in aphasia and motor speech, while patients with segmented speech (AOS type 2) showed faster rates of decline in parkinsonism. On MRI, we observed trends for fastest rates of decline in aphasia in patients with relatively small left, but preserved right, Broca area and precentral cortex. Bilateral reductions in lateral premotor cortex were associated with faster rates of decline of behavior. No associations were observed between volumes and decline in motor speech or parkinsonism. Rate of decline of each of the 4 clinical features assessed was associated with different baseline clinical and regional MRI predictors. Our findings could help improve prognostic estimates for these patients. © 2017 American Academy of Neurology.

  12. On the context-dependent nature of the contribution of the ventral premotor cortex to speech perception

    Science.gov (United States)

    Tremblay, Pascale; Small, Steven L.

    2011-01-01

    What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, and not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, as predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g. "palace"), while the complex words contained one to three consonant clusters (e.g. "planet"). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production, but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of…

  13. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

    With the development of information technology and artificial intelligence, speech synthesis plays a significant role in human-computer interaction. However, the main problem of current speech synthesis techniques is their lack of naturalness and expressiveness, so that synthetic speech does not yet approach the standard of natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to realize a mechanism of subjective user control. This paper introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that prosody generation is an important module in speech synthesis. On this basis, using the eye-movement patterns of reading to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthetic form. The present state of speech synthesis technology is reviewed in detail, and a speech synthesis method driven in real time by eye-movement signals, capable of expressing the speaker's actual speech rhythm, is proposed: while the reader silently reads a corpus, reading information such as the gaze duration per prosodic unit is captured, and a hierarchical prosodic duration model is established to determine the duration parameters of the synthesized speech. Finally, analysis verifies the feasibility of the proposed method.
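The gaze-to-duration mapping described in this abstract can be illustrated with a toy calculation. Everything below is a hypothetical sketch of the idea, not the paper's model: the function name, the 400 ms baseline gaze duration, and the linear scaling are all assumptions made for illustration.

```python
# Illustrative only: scale a prosodic unit's baseline phone durations by the
# ratio of the reader's observed gaze duration to a nominal baseline gaze.
def scaled_durations(base_durations_ms, gaze_ms, baseline_gaze_ms=400.0):
    """Return phone durations stretched or compressed in proportion to how
    long the reader's gaze dwelt on the prosodic unit."""
    factor = gaze_ms / baseline_gaze_ms
    return [round(d * factor, 1) for d in base_durations_ms]

# A unit read slowly (600 ms gaze vs. a 400 ms baseline) is stretched 1.5x.
print(scaled_durations([80.0, 120.0, 100.0], gaze_ms=600.0))  # → [120.0, 180.0, 150.0]
```

A real system would of course fit the duration model hierarchically from recorded gaze data rather than use a fixed linear factor.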

  14. Effect of Deep Brain Stimulation on Speech Performance in Parkinson's Disease

    OpenAIRE

    Skodda, Sabine

    2012-01-01

    Deep brain stimulation (DBS) has been reported to be successful in relieving the core motor symptoms of Parkinson's disease (PD) and motor fluctuations in the more advanced stages of the disease. However, data on the effects of DBS on speech performance are inconsistent. While there are some series of patients documenting that speech function was relatively unaffected by DBS of the nucleus subthalamicus (STN), other investigators reported on improvements of distinct parameters of oral control...

  15. Does Speech Emerge From Earlier Appearing Oral Motor Behaviors?

    OpenAIRE

    Moore, Christopher A.; Ruark, Jacki L.

    1996-01-01

    This investigation was designed to quantify the coordinative organization of mandibular muscles in toddlers during speech and nonspeech behaviors. Seven 15-month-olds were observed during spontaneous production of chewing, sucking, babbling, and speech. Comparison of mandibular coordination across these behaviors revealed that, even for children in the earliest stages of true word production, coordination was quite different from that observed for other behaviors. Production of true words was...

  16. Speech and language pathology & pediatric HIV.

    Science.gov (United States)

    Retzlaff, C

    1999-12-01

    Children with HIV have critical speech and language issues because the virus manifests itself primarily in the developing central nervous system, sometimes causing speech, motor control, and language disabilities. Language impediments that develop during the second year of life seem to be especially severe. HIV-infected children are also susceptible to recurrent ear infections, which can damage hearing. Developmental issues must be addressed for these children to reach their full potential. A decline in language skills may coincide with or precede other losses in cognitive ability. A speech pathologist can play an important role on a pediatric HIV team. References are included.

  17. Speech-Like and Non-Speech Lip Kinematics and Coordination in Aphasia

    Science.gov (United States)

    Bose, Arpita; van Lieshout, Pascal

    2012-01-01

    Background: In addition to the well-known linguistic processing impairments in aphasia, oro-motor skills and articulatory implementation of speech segments are reported to be compromised to some degree in most types of aphasia. Aims: This study aimed to identify differences in the characteristics and coordination of lip movements in the production…

  18. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis.

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H

    2015-12-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. © The Author 2015. Published by Oxford University Press.

  19. Sensorimotor Representation of Speech Perception. Cross-Decoding of Place of Articulation Features during Selective Attention to Syllables in 7T fMRI

    NARCIS (Netherlands)

    Archila-Meléndez, Mario E.; Valente, Giancarlo; Correia, Joao M.; Rouhl, Rob P. W.; van Kranen-Mastenbroek, Vivianne H.; Jansma, Bernadette M.

    2018-01-01

    Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such

  20. Clinical and Anatomical Correlates of Apraxia of Speech

    Science.gov (United States)

    Ogar, Jennifer; Willock, Sharon; Baldo, Juliana; Wilkins, David; Ludy, Carl; Dronkers, Nina

    2006-01-01

    In a previous study (Dronkers, 1996), stroke patients identified as having apraxia of speech (AOS), an articulatory disorder, were found to have damage to the left superior precentral gyrus of the insula (SPGI). The present study sought (1) to characterize the performance of patients with AOS on a classic motor speech evaluation, and (2) to…

  1. Speech sound disorder at 4 years: prevalence, comorbidities, and predictors in a community cohort of children.

    Science.gov (United States)

    Eadie, Patricia; Morgan, Angela; Ukoumunne, Obioha C; Ttofari Eecen, Kyriaki; Wake, Melissa; Reilly, Sheena

    2015-06-01

    The epidemiology of preschool speech sound disorder is poorly understood. Our aims were to determine: the prevalence of idiopathic speech sound disorder; the comorbidity of speech sound disorder with language and pre-literacy difficulties; and the factors contributing to speech outcome at 4 years. One thousand four hundred and ninety-four participants from an Australian longitudinal cohort completed speech, language, and pre-literacy assessments at 4 years. Prevalence of speech sound disorder (SSD) was defined by standard score performance of ≤79 on a speech assessment. Logistic regression examined predictors of SSD within four domains: child and family; parent-reported speech; cognitive-linguistic; and parent-reported motor skills. At 4 years the prevalence of speech disorder in an Australian cohort was 3.4%. Comorbidity with SSD was 40.8% for language disorder and 20.8% for poor pre-literacy skills. Sex, maternal vocabulary, socio-economic status, and family history of speech and language difficulties predicted SSD, as did 2-year speech, language, and motor skills. Together these variables provided good discrimination of SSD (area under the curve=0.78). This is the first epidemiological study to demonstrate prevalence of SSD at 4 years of age that was consistent with previous clinical studies. Early detection of SSD at 4 years should focus on family variables and speech, language, and motor skills measured at 2 years. © 2014 Mac Keith Press.
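The reported discrimination ("area under the curve = 0.78") is the ROC AUC of the logistic regression model. AUC has a simple rank interpretation: it is the probability that a randomly chosen affected child receives a higher predicted risk than a randomly chosen unaffected child. A minimal rank-based sketch, with illustrative data rather than the study's:

```python
# Rank-based ROC AUC: the fraction of (positive, negative) pairs in which the
# positive case scores higher, counting ties as half a win.
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted risks and outcomes (1 = disorder present).
print(roc_auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))  # → 0.75
```

An AUC of 0.78, as in the study, therefore means the model ranks an affected child above an unaffected one about 78% of the time.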

  2. Influences on infant speech processing: toward a new synthesis.

    Science.gov (United States)

    Werker, J F; Tees, R C

    1999-01-01

    To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.

  3. Bridging the Gap Between Speech and Language: Using Multimodal Treatment in a Child With Apraxia.

    Science.gov (United States)

    Tierney, Cheryl D; Pitterle, Kathleen; Kurtz, Marie; Nakhla, Mark; Todorow, Carlyn

    2016-09-01

    Childhood apraxia of speech is a neurologic speech sound disorder in which children have difficulty constructing words and sounds due to poor motor planning and coordination of the articulators required for speech sound production. We report the case of a 3-year-old boy strongly suspected to have childhood apraxia of speech at 18 months of age who used multimodal communication to facilitate language development throughout his work with a speech-language pathologist. In 18 months of an intensive structured program, he exhibited atypically rapid improvement, progressing from having no intelligible speech to achieving age-appropriate articulation. We suspect that the early introduction of sign language by the family proved to be a highly effective form of language development which, when coupled with intensive oro-motor and speech sound therapy, resulted in rapid resolution of symptoms. Copyright © 2016 by the American Academy of Pediatrics.

  4. Dysarthric Bengali speech: A neurolinguistic study

    Directory of Open Access Journals (Sweden)

    Chakraborty N

    2008-01-01

    Background and Aims: Dysarthria affects speech domains such as respiration, phonation, articulation, resonance and prosody due to upper motor neuron, lower motor neuron, cerebellar or extrapyramidal tract lesions. Although Bengali is one of the major languages globally, dysarthric Bengali speech has not been subjected to neurolinguistic analysis. We attempted such an analysis with the goal of identifying the speech defects of native Bengali speakers in the various types of dysarthria encountered in neurological disorders. Settings and Design: A cross-sectional observational study was conducted with 66 dysarthric subjects, predominantly middle-aged males, attending the Neuromedicine OPD of a tertiary care teaching hospital in Kolkata. Materials and Methods: After neurological examination, an instrument comprising commonly used Bengali words and a text block covering all Bengali vowels and consonants was used to carry out perceptual analysis of dysarthric speech. From recorded speech, 24 parameters pertaining to five linguistic domains were assessed. The Kruskal-Wallis analysis of variance, chi-square test and Fisher's exact test were used for analysis. Results: The dysarthria types were spastic (15 subjects), flaccid (10), mixed (12), hypokinetic (12), hyperkinetic (9) and ataxic (8). Of the 24 parameters assessed, 15 were found to occur in one or more types with a prevalence of at least 25%. Imprecise consonants were the most frequently occurring defect in most dysarthrias. The spectrum of defects in each type was identified, and some parameters were capable of distinguishing between types. Conclusions: This perceptual analysis has defined the linguistic defects likely to be encountered in dysarthric Bengali speech in neurological disorders. The speech distortion can be described and distinguished by a limited number of parameters. This may be of importance to the speech therapist and neurologist in planning rehabilitation and further management.

  5. Speech processing system demonstrated by positron emission tomography (PET). A review of the literature

    International Nuclear Information System (INIS)

    Hirano, Shigeru; Naito, Yasushi; Kojima, Hisayoshi

    1996-01-01

    We review the literature on speech processing in the central nervous system as demonstrated by positron emission tomography (PET). PET activation studies have proved to be a useful and non-invasive method of investigating the speech processing system in normal subjects. In speech recognition, the auditory association areas and the lexico-semantic areas known as Wernicke's area play important roles. Broca's area, the motor areas, the supplementary motor cortices and the prefrontal area have been shown to be related to speech output. Visual speech stimulation activates not only the visual association areas but also the temporal region and the prefrontal area, especially during lexico-semantic processing. Higher-level speech processing, such as conversation, which includes auditory processing, vocalization and thinking, activates broad areas in both hemispheres. This paper also discusses problems to be resolved in the future. (author) 42 refs

  6. Toward a Quantitative Basis for Assessment and Diagnosis of Apraxia of Speech

    Science.gov (United States)

    Haley, Katarina L.; Jacks, Adam; de Riesthal, Michael; Abou-Khalil, Rima; Roth, Heidi L.

    2012-01-01

    Purpose: We explored the reliability and validity of 2 quantitative approaches to document presence and severity of speech properties associated with apraxia of speech (AOS). Method: A motor speech evaluation was administered to 39 individuals with aphasia. Audio-recordings of the evaluation were presented to 3 experienced clinicians to determine…

  7. Magnetoencephalography (MEG): perspectives on functional mapping of speech areas in human subjects

    Directory of Open Access Journals (Sweden)

    Butorina A. V.

    2012-06-01

    One of the main problems in clinical practice and academic research is how to localize the speech zones of the human brain. Two speech areas, Broca's and Wernicke's, which are responsible for language production and for the understanding of written and spoken language, have been known since the nineteenth century. Their location and even hemispheric lateralization show substantial inter-individual variability, especially in neurosurgical patients. The Wada test is one of the most frequently used invasive methods for determining the hemispheric lateralization of speech in neurosurgical patients. However, besides its relatively high risk to the patient's health, it has its own limitations, e.g. the low reliability of Wada-based evidence of verbal-memory lateralization. There is therefore an urgent need for non-invasive, reliable methods of mapping speech zones. The current review summarizes recent experimental evidence from magnetoencephalographic (MEG) research suggesting that the speech areas are engaged in speech processing within the first 200 ms after word onset. The electromagnetic response to a deviant word, the mismatch negativity wave with a latency of 100-200 ms, can be recorded from auditory cortex within the oddball paradigm. We argue that the basic features of this brain response, such as its automatic, pre-attentive nature, its high signal-to-noise ratio, and its source localization at the superior temporal sulcus, make it a promising vehicle for non-invasive MEG-based mapping of speech areas in neurosurgery.

  8. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR)-based approaches to speech therapy for aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since a growing body of evidence indicates that symptoms can be improved at an early stage, ASR-based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transforms human speech into transcript text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies because it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR-based approaches to speech therapy recognize the aphasic patient's speech input and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors, such as phoneme recognition, speech continuity, speaker and environmental differences, and the depth of our knowledge of human language understanding. The review therefore examines recent developments in ASR technology and its performance for individuals with speech and language disorders.
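The real-time evaluation such systems provide typically rests on comparing the ASR transcript against the target utterance. A minimal word-level sketch of that feedback metric, assumed for illustration rather than taken from any specific system: Levenshtein distance between the two word sequences, normalized into a word error rate.

```python
# Levenshtein distance between two word sequences, computed row by row.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution
        prev = cur
    return prev[-1]

target = "the cat sat on the mat".split()
heard = "the cat sat on mat".split()   # ASR output with one dropped word
print(edit_distance(target, heard) / len(target))  # word error rate ≈ 0.167
```

In practice, research systems score at the phoneme rather than word level and align against pronunciation lattices, but the underlying comparison is the same.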

  9. Toward a Model of Pediatric Speech Sound Disorders (SSD) for Differential Diagnosis and Therapy Planning

    NARCIS (Netherlands)

    Terband, Hayo; Maassen, Bernardus; Maas, Edwin; van Lieshout, Pascal; Maassen, Ben; Terband, Hayo

    2016-01-01

    The classification and differentiation of pediatric speech sound disorders (SSD) is one of the main questions in the field of speech-language pathology. Terms for classifying childhood SSD and motor speech disorders (MSD) refer to speech production processes, and a variety of methods of

  10. Identifying Residual Speech Sound Disorders in Bilingual Children: A Japanese-English Case Study

    Science.gov (United States)

    Preston, Jonathan L.; Seki, Ayumi

    2011-01-01

    Purpose: To describe (a) the assessment of residual speech sound disorders (SSDs) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations and (b) how assessment of domains such as speech motor control and phonological awareness can provide a more complete…

  11. The Importance of Production Frequency in Therapy for Childhood Apraxia of Speech

    Science.gov (United States)

    Edeal, Denice Michelle; Gildersleeve-Neumann, Christina Elke

    2011-01-01

    Purpose: This study explores the importance of production frequency during speech therapy to determine whether more practice of speech targets leads to increased performance within a treatment session, as well as to motor learning, in the form of generalization to untrained words. Method: Two children with childhood apraxia of speech were treated…

  12. Communication as a human right: Citizenship, politics and the role of the speech-language pathologist.

    Science.gov (United States)

    Murphy, Declan; Lyons, Rena; Carroll, Clare; Caulfield, Mari; De Paor, Gráinne

    2018-02-01

    According to Article 19 of the Universal Declaration on Human Rights "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." The purpose of this paper is to elucidate communication as a human right in the life of a young man called Declan who has Down syndrome. This commentary paper is co-written by Declan, his sister who is a speech-language pathologist (SLP) with an advocacy role, his SLP, and academics. Declan discusses, in his own words, what makes communication hard, what helps communication, his experiences of speech-language pathology, and what he knows about human rights. He also discusses his passion for politics, his right to be an active citizen and participate in the political process. This paper also focuses on the role of speech-language pathology in supporting and partnering with people with communication disabilities to have their voices heard and exercise their human rights.

  13. Haptic Human-Human Interaction Through a Compliant Connection Does Not Improve Motor Learning in a Force Field

    NARCIS (Netherlands)

    Beckers, Niek; Keemink, Arvid; van Asseldonk, Edwin; van der Kooij, Herman; Prattichizzo, Domenico; Shinoda, Hiroyuki; Tan, Hong Z.; Ruffaldi, Emanuele; Frisoli, Antonio

    2018-01-01

    Humans have a natural ability to haptically interact with other humans, for instance during physically assisting a child to learn how to ride a bicycle. A recent study has shown that haptic human-human interaction can improve individual motor performance and motor learning rate while learning to

  14. Syllable Frequency and Syllable Structure in Apraxia of Speech

    Science.gov (United States)

    Aichert, Ingrid; Ziegler, Wolfram

    2004-01-01

    Recent accounts of the pathomechanism underlying apraxia of speech (AOS) were based on the speech production model of Levelt, Roelofs, and Meyer (1999). The apraxic impairment was localized to the phonetic encoding level, where the model postulates a mental store of motor programs for high-frequency syllables. Varley and Whiteside…

  15. Pure apraxia of speech due to infarct in premotor cortex.

    Science.gov (United States)

    Patira, Riddhi; Ciniglia, Lauren; Calvert, Timothy; Altschuler, Eric L

    Apraxia of speech (AOS) is now recognized as an articulation disorder distinct from dysarthria and aphasia. Various lesions have been associated with AOS in studies whose precision of localization is limited by variability in the size and type of pathology. We present a case of pure AOS in the setting of an acute stroke, localizing more precisely than previously possible the brain area responsible for AOS: the dorsal premotor cortex (dPMC). The dPMC is uniquely positioned to plan and coordinate speech production by virtue of its connections with the nearby motor cortex harboring the corticobulbar tract, the supplementary motor area, the inferior frontal operculum, and, via the dorsal stream of the dual-stream model of speech processing, the temporo-parietal area. The role of the dPMC is further supported as part of the dorsal stream in the dual-stream model of speech processing, as well as a controller in the hierarchical state feedback control model. Copyright © 2017 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  16. Effects of a Conversation-Based Intervention on the Linguistic Skills of Children With Motor Speech Disorders Who Use Augmentative and Alternative Communication.

    Science.gov (United States)

    Soto, Gloria; Clarke, Michael T

    2017-07-12

    This study was conducted to evaluate the effects of a conversation-based intervention on the expressive vocabulary and grammatical skills of children with severe motor speech disorders and expressive language delay who use augmentative and alternative communication. Eight children aged from 8 to 13 years participated in the study. After a baseline period, a conversation-based intervention was provided for each participant, in which they were supported to learn and use linguistic structures essential for the formation of clauses and the grammaticalization of their utterances, such as pronouns, verbs, and bound morphemes, in the context of personally meaningful and scaffolded conversations with trained clinicians. The conversations were videotaped, transcribed, and analyzed using the Systematic Analysis of Language Transcripts (SALT; Miller & Chapman, 1991). Results indicate that participants showed improvements in their use of spontaneous clauses, and a greater use of pronouns, verbs, and bound morphemes. These improvements were sustained and generalized to conversations with familiar partners. The results demonstrate the positive effects of the conversation-based intervention for improving the expressive vocabulary and grammatical skills of children with severe motor speech disorders and expressive language delay who use augmentative and alternative communication. Clinical and theoretical implications of conversation-based interventions are discussed and future research needs are identified. https://doi.org/10.23641/asha.5150113.

  17. A Foxp2 mutation implicated in human speech deficits alters sequencing of ultrasonic vocalizations in adult male mice

    Directory of Open Access Journals (Sweden)

    Jonathan Chabout

    2016-10-01

    Development of proficient spoken language skills is disrupted by mutations of the FOXP2 transcription factor. A heterozygous missense mutation in the KE family causes speech apraxia, involving difficulty producing words with complex learned sequences of syllables. Manipulations in songbirds have helped to elucidate the role of this gene in vocal learning, but findings in non-human mammals have been limited or inconclusive. Here we performed a systematic study of the ultrasonic vocalizations (USVs) of adult male mice carrying the KE family mutation. Using novel statistical tools, we found that Foxp2 heterozygous mice did not have detectable changes in USV syllable acoustic structure, but produced shorter sequences and did not shift to more complex syntax in social contexts where wild-type animals did. Heterozygous mice also displayed a shift in the position of their rudimentary laryngeal motor cortex layer-5 neurons. Our findings indicate that although mouse USVs are mostly innate, the underlying contributions of FoxP2 to the sequencing of vocalizations are conserved with humans.

  18. [Transcortical aphasia and echolalia; problems of speech initiative].

    Science.gov (United States)

    Környey, E

    1975-05-01

    Transcortical aphasia accompanied by echolalia occurs with malacias involving the postero-median part of the frontal lobe, which includes the supplementary motor field of Penfield and is nourished by the anterior cerebral artery. In such cases the syndrome manifests itself, even in fine details, in the same form as it does in Pick's atrophy. The same also holds true for cases in which a tumour involves the region mentioned. Sentences or fragments of sentences are echolalised; the tendency to perseveration is very marked. It is hardly, if at all, possible to evaluate the verbal understanding of these patients. Analysis of their behaviour supports the assumption that they have not lost the adaptation to some situations. Echolalia is often associated with forced grasping and other compulsory phenomena. Therefore, it may be interpreted as a sign of disinhibition of the acoustico-motor reflex present during the development of speech. Competition between intentionality and the appearance of compulsory phenomena greatly depends on the general condition of the patient, particularly on the clarity of consciousness. The integrity of the postero-median part of the frontal lobe is indispensable for a normal speech reaction to stimuli received from the sensory areas. The influence of the supplementary motor field on speech intention seems to be linked to the dominant hemisphere. When lesions of the territory of the anterior cerebral artery and of the cortico-bulbar neuron system coexist in the dominant hemisphere, the speech disturbance shifts to complete motor aphasia. In such cases the pathomechanism is analogous to that of the syndrome of Liepmann, i.e., right-sided hemiparesis with left-sided apraxia. So-called transcortical motor aphasia without echolalia can be caused by loss of stimuli from the sensory fields.

  19. Sensorimotor speech disorders in Parkinson's disease: Programming and execution deficits

    Directory of Open Access Journals (Sweden)

    Karin Zazo Ortiz

    Full Text Available ABSTRACT Introduction: Dysfunction in the basal ganglia circuits is a determining factor in the physiopathology of the classic signs of Parkinson's disease (PD), and hypokinetic dysarthria is commonly related to PD. Regarding speech disorders associated with PD, the latest four-level framework of speech complicates the traditional view of dysarthria as a motor execution disorder. Based on findings that dysfunctions in the basal ganglia can cause speech disorders, and on the premise that the speech deficits seen in PD are related not to an execution motor disorder alone but also to a disorder at the motor programming level, the main objective of this study was to investigate the presence of sensorimotor programming disorders (besides the execution disorders previously described) in PD patients. Methods: A cross-sectional study was conducted in a sample of 60 adults matched for gender, age and education: 30 adult patients diagnosed with idiopathic PD (PDG) and 30 healthy adults (CG). All types of articulation errors were reanalyzed to investigate the nature of these errors. Interjections, hesitations and repetitions of words or sentences (during discourse) were considered typical disfluencies; blocking and episodes of palilalia (words or syllables) were analyzed as atypical disfluencies. We analyzed features including successive self-initiated trials, phoneme distortions, self-correction, repetition of sounds and syllables, prolonged movement transitions, and additions or omissions of sounds and syllables, in order to identify programming and/or execution failures. Orofacial agility was also investigated. Results: The PDG had worse performance on all sensorimotor speech tasks. All PD patients had hypokinetic dysarthria. Conclusion: The clinical characteristics found suggest both execution and programming sensorimotor speech disorders in PD patients.

  20. Multi-function robots with speech interaction and emotion feedback

    Science.gov (United States)

    Wang, Hongyu; Lou, Guanting; Ma, Mengchao

    2018-03-01

    Nowadays, the service robots have been applied in many public circumstances; however, most of them still don’t have the function of speech interaction, especially the function of speech-emotion interaction feedback. To make the robot more humanoid, Arduino microcontroller was used in this study for the speech recognition module and servo motor control module to achieve the functions of the robot’s speech interaction and emotion feedback. In addition, W5100 was adopted for network connection to achieve information transmission via Internet, providing broad application prospects for the robot in the area of Internet of Things (IoT).

  1. Oral and Hand Movement Speeds Are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    Science.gov (United States)

    Peter, Beate

    2012-01-01

    This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…

  2. Comment on "Monkey vocal tracts are speech-ready".

    Science.gov (United States)

    Lieberman, Philip

    2017-07-01

    Monkey vocal tracts are capable of producing monkey speech, not the full range of articulate human speech. The evolution of human speech entailed both anatomy and brains. Fitch, de Boer, Mathur, and Ghazanfar in Science Advances claim that "monkey vocal tracts are speech-ready," and conclude that "…the evolution of human speech capabilities required neural change rather than modifications of vocal anatomy." Neither premise is consistent either with the data presented and the conclusions reached by de Boer and Fitch themselves in their own published papers on the role of anatomy in the evolution of human speech or with the body of independent studies published since the 1950s.

  3. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
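    The decision rule behind this kind of HMM-based segment classification, scoring each audio segment under class-specific models and picking the higher-likelihood class, can be sketched with a toy discrete-observation forward algorithm. The two-state models and binary frame-energy alphabet below are invented for illustration; the paper's actual acoustic features and model topology are not specified here.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return ll

# Hypothetical 2-state models over a binary alphabet
# (symbol 0 = low-energy frame, symbol 1 = high-energy/voiced frame).
SPEECH = (np.array([0.5, 0.5]),                      # initial state probs
          np.array([[0.9, 0.1], [0.2, 0.8]]),        # transitions
          np.array([[0.1, 0.9], [0.8, 0.2]]))        # emissions P(symbol|state)
NLSS = (np.array([0.5, 0.5]),
        np.array([[0.95, 0.05], [0.5, 0.5]]),
        np.array([[0.9, 0.1], [0.4, 0.6]]))

def classify(obs):
    """Label a segment by whichever class model assigns higher likelihood."""
    ll_speech = forward_loglik(*SPEECH, obs)
    ll_nlss = forward_loglik(*NLSS, obs)
    return "LSS" if ll_speech > ll_nlss else "NLSS"
```

    A run of mostly high-energy frames is scored as language speech, while a long low-energy stretch with an isolated burst (breath- or click-like) falls to the non-language model.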

  4. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  5. Structural brain aging and speech production: a surface-based brain morphometry study.

    Science.gov (United States)

    Tremblay, Pascale; Deschamps, Isabelle

    2016-07-01

    While there has been a growing number of studies examining the neurofunctional correlates of speech production over the past decade, the neurostructural correlates of this immensely important human behaviour remain less well understood, despite the fact that previous studies have established links between brain structure and behaviour, including speech and language. In the present study, we thus examined, for the first time, the relationship between surface-based cortical thickness (CT) and three different behavioural indexes of sublexical speech production: response duration, reaction times and articulatory accuracy, in healthy young and older adults during the production of simple and complex meaningless sequences of syllables (e.g., /pa-pa-pa/ vs. /pa-ta-ka/). The results show that each behavioural speech measure was sensitive to the complexity of the sequences, as indicated by slower reaction times, longer response durations and decreased articulatory accuracy in both groups for the complex sequences. Older adults produced longer speech responses, particularly during the production of complex sequences. Unique age-independent and age-dependent relationships between brain structure and each of these behavioural measures were found in several cortical and subcortical regions known for their involvement in speech production, including the bilateral anterior insula, the left primary motor area, the rostral supramarginal gyrus, the right inferior frontal sulcus, the bilateral putamen and caudate, and in some regions less typically associated with speech production, such as the posterior cingulate cortex.

  6. Atypical speech lateralization in adults with developmental coordination disorder demonstrated using functional transcranial Doppler ultrasound.

    Science.gov (United States)

    Hodgson, Jessica C; Hudson, John M

    2017-03-01

    Research using clinical populations to explore the relationship between hemispheric speech lateralization and handedness has focused on individuals with speech and language disorders, such as dyslexia or specific language impairment (SLI). Such work reveals atypical patterns of cerebral lateralization and handedness in these groups compared to controls. There are few studies that examine this relationship in people with motor coordination impairments but without speech or reading deficits, which is a surprising omission given the prevalence of theories suggesting a common neural network underlying both functions. We use an emerging imaging technique in cognitive neuroscience, functional transcranial Doppler (fTCD) ultrasound, to assess whether individuals with developmental coordination disorder (DCD) display reduced left-hemisphere lateralization for speech production compared to control participants. Twelve adult control participants and 12 adults with DCD, but no other developmental/cognitive impairments, performed a word-generation task whilst undergoing fTCD imaging to establish a hemispheric lateralization index for speech production. All participants also completed an electronic peg-moving task to determine hand skill. As predicted, the DCD group showed a significantly reduced left lateralization pattern for the speech production task compared to controls. Performance on the motor skill task showed a clear preference for the dominant hand across both groups; however, the DCD group mean movement times were significantly higher for the non-dominant hand. This is the first study of its kind to assess hand skill and speech lateralization in DCD. The results reveal a reduced leftwards asymmetry for speech and a slower motor performance. This fits alongside previous work showing atypical cerebral lateralization in DCD for other cognitive processes (e.g., executive function and short-term memory) and thus speaks to debates on theories of the links between motor
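    The fTCD lateralization index used in studies like this one is, in essence, a signed left-minus-right comparison of cerebral blood-flow velocity during the activation task. A minimal sketch follows, assuming simple averaged velocity traces; the signal shapes, window, and magnitudes are invented, not taken from the study.

```python
import numpy as np

def lateralization_index(v_left, v_right, activation):
    """Schematic fTCD laterality index: mean left-minus-right blood-flow
    velocity difference over the activation window. Positive values
    indicate left-hemisphere lateralization."""
    dv = np.asarray(v_left) - np.asarray(v_right)
    return dv[activation].mean()

# Synthetic example: left-MCA velocity rises more than the right during
# a word-generation epoch (samples 40-59 of a 100-sample trial).
t = np.arange(100)
activation = (t >= 40) & (t < 60)
rng = np.random.default_rng(0)
v_left = 100 + 5 * activation + rng.normal(0, 0.5, t.size)
v_right = 100 + 1 * activation + rng.normal(0, 0.5, t.size)

li = lateralization_index(v_left, v_right, activation)
```

    Averaging over many word-generation trials, a reduced index in the DCD group relative to controls would correspond to the reduced leftwards asymmetry reported above.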

  7. Postlingual deaf speech and the role of audition in speech production: comments on Waldstein's paper [R.S. Waldstein, J. Acoust. Soc. Am. 88, 2099-2114 (1990)].

    Science.gov (United States)

    Sapir, S; Canter, G J

    1991-09-01

    Using acoustic analysis techniques, Waldstein [J. Acoust. Soc. Am. 88, 2099-2114 (1990)] reported abnormal speech findings in postlingual deaf speakers. She interpreted her findings to suggest that auditory feedback is important in motor speech control. However, it is argued here that Waldstein's interpretation may be unwarranted without addressing the possibility of neurologic deficits (e.g., dysarthria) as confounding (or even primary) causes of the abnormal speech in her subjects.

  8. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech across regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings suggest a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere--to support the production of vocal tract gestures that are not limited to speech processing.

  9. Imitation and speech: commonalities within Broca's area.

    Science.gov (United States)

    Kühn, Simone; Brass, Marcel; Gallinat, Jürgen

    2013-11-01

    The so-called embodiment of communication has attracted considerable interest. Recently a growing number of studies have proposed a link between Broca's area's involvement in action processing and its involvement in speech. The present quantitative meta-analysis set out to test whether neuroimaging studies on imitation and overt speech show overlap within the inferior frontal gyrus. By means of activation likelihood estimation (ALE), we investigated concurrence of brain regions activated by object-free hand imitation studies as well as overt speech studies including simple syllable and more complex word production. We found direct overlap between imitation and speech in bilateral pars opercularis (BA 44) within Broca's area. Subtraction analyses revealed no unique localization for either speech or imitation. To verify the potential of ALE subtraction analysis to detect unique involvement within Broca's area, we contrasted the results of a meta-analysis on motor inhibition and imitation and found separable regions involved for imitation. This is the first meta-analysis to compare the neural correlates of imitation and overt speech. The results are in line with the proposed evolutionary roots of speech in imitation.

  10. Developmental apraxia of speech : deficits in phonetic planning and motor programming

    NARCIS (Netherlands)

    Nijland, Lian

    2003-01-01

    The speech of children with developmental apraxia of speech (DAS) is highly unintelligible due to many nonsystematic sound substitutions and distortions. There is ongoing debate about the underlying deficit of the disorder. The ultimate goal of this thesis was to answer this question within the

  11. Computational neural modeling of speech motor control in childhood apraxia of speech (CAS).

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.; Guenther, F.H.; Brumberg, J.

    2009-01-01

    PURPOSE: Childhood apraxia of speech (CAS) has been associated with a wide variety of diagnostic descriptions and has been shown to involve different symptoms during successive stages of development. In the present study, the authors attempted to associate the symptoms of CAS in a particular

  12. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models were successfully employed in a language understanding task, as shown in an additional series of experiments.
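    One simple form of hierarchical language model is a class-based bigram, in which the word history is modelled at the class level and words are generated within their classes. The sketch below is generic and does not reproduce the paper's actual model; the toy corpus and hand-made word-to-class map are invented for illustration.

```python
from collections import Counter

# Toy command corpus and word-to-class map (both invented; real systems
# typically induce word classes automatically from data).
corpus = "go to the kitchen go to the door open the door".split()
word2class = {"go": "VERB", "open": "VERB", "to": "PREP",
              "the": "DET", "kitchen": "NOUN", "door": "NOUN"}

classes = sorted(set(word2class.values()))
vocab = sorted(word2class)
seq = [word2class[w] for w in corpus]

# Counts for the two levels of the hierarchy.
cls_bigrams = Counter(zip(seq, seq[1:]))
cls_counts = Counter(seq)
word_counts = Counter(corpus)

def p_class(c, prev_c):
    """Add-one smoothed class-transition probability P(c | prev_c)."""
    return (cls_bigrams[(prev_c, c)] + 1) / (cls_counts[prev_c] + len(classes))

def p_word_given_class(w, c):
    """Add-one smoothed within-class word probability P(w | c)."""
    members = [v for v in vocab if word2class[v] == c]
    total = sum(word_counts[v] for v in members)
    return (word_counts[w] + 1) / (total + len(members))

def p_next(w, prev_w):
    """Hierarchical decomposition:
    P(w | prev_w) = P(class(w) | class(prev_w)) * P(w | class(w))."""
    c = word2class[w]
    return p_class(c, word2class[prev_w]) * p_word_given_class(w, c)
```

    Because both levels are separately normalized, the composite model is a proper distribution over the vocabulary, and sharing statistics at the class level gives the robustness to sparse data that motivates hierarchical LMs.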

  13. Aerodynamic Indices of Velopharyngeal Function in Childhood Apraxia of Speech

    Science.gov (United States)

    Sealey, Linda R.; Giddens, Cheryl L.

    2010-01-01

    Childhood apraxia of speech (CAS) is characterized as a deficit in the motor processes of speech for the volitional control of the articulators, including the velum. One of the many characteristics attributed to children with CAS is intermittent or inconsistent hypernasality. The purpose of this study was to document differences in velopharyngeal…

  14. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.
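    The agreement statistic reported in reliability studies of this kind (the k values above) is Cohen's kappa: observed agreement between two raters corrected for the agreement expected by chance. A minimal implementation follows; the example ratings in the test are invented, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (po - pe) / (1 - pe)
```

    Kappa runs from 1 (perfect agreement) down through 0 (chance-level agreement); the conventional bands label roughly .41-.60 as moderate, .61-.80 as substantial and above .80 as almost perfect, which is how the scale's reliability is characterized above.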

  15. Verbal Short-Term Memory Span in Speech-Disordered Children: Implications for Articulatory Coding in Short-Term Memory.

    Science.gov (United States)

    Raine, Adrian; And Others

    1991-01-01

    Children with speech disorders had lower short-term memory capacity and smaller word length effect than control children. Children with speech disorders also had reduced speech-motor activity during rehearsal. Results suggest that speech rate may be a causal determinant of verbal short-term memory capacity. (BC)

  16. Generation of Spinal Motor Neurons from Human Pluripotent Stem Cells.

    Science.gov (United States)

    Santos, David P; Kiskinis, Evangelos

    2017-01-01

    Human embryonic stem cells (ESCs) are characterized by their unique ability to self-renew indefinitely, as well as to differentiate into any cell type of the human body. Induced pluripotent stem cells (iPSCs) share these salient characteristics with ESCs and can easily be generated from any given individual by reprogramming somatic cell types such as fibroblasts or blood cells. The spinal motor neuron (MN) is a specialized neuronal subtype that synapses with muscle to control movement. Here, we present a method to generate functional, postmitotic, spinal motor neurons through the directed differentiation of ESCs and iPSCs by the use of small molecules. These cells can be utilized to study the development and function of human motor neurons in healthy and disease states.

  17. Hemispheric Lateralization of Motor Thresholds in Relation to Stuttering

    Science.gov (United States)

    Alm, Per A.; Karlsson, Ragnhild; Sundberg, Madeleine; Axelson, Hans W.

    2013-01-01

    Stuttering is a complex speech disorder. Previous studies indicate a tendency towards elevated motor threshold for the left hemisphere, as measured using transcranial magnetic stimulation (TMS). This may reflect a monohemispheric motor system impairment. The purpose of the study was to investigate the relative side-to-side difference (asymmetry) and the absolute levels of motor threshold for the hand area, using TMS in adults who stutter (n = 15) and in controls (n = 15). In accordance with the hypothesis, the groups differed significantly regarding the relative side-to-side difference of finger motor threshold (p = 0.0026), with the stuttering group showing higher motor threshold of the left hemisphere in relation to the right. Also the absolute level of the finger motor threshold for the left hemisphere differed between the groups (p = 0.049). The obtained results, together with previous investigations, provide support for the hypothesis that stuttering tends to be related to left hemisphere motor impairment, and possibly to a dysfunctional state of bilateral speech motor control. PMID:24146930

  18. Hemispheric lateralization of motor thresholds in relation to stuttering.

    Directory of Open Access Journals (Sweden)

    Per A Alm

    Full Text Available Stuttering is a complex speech disorder. Previous studies indicate a tendency towards elevated motor threshold for the left hemisphere, as measured using transcranial magnetic stimulation (TMS). This may reflect a monohemispheric motor system impairment. The purpose of the study was to investigate the relative side-to-side difference (asymmetry) and the absolute levels of motor threshold for the hand area, using TMS in adults who stutter (n = 15) and in controls (n = 15). In accordance with the hypothesis, the groups differed significantly regarding the relative side-to-side difference of finger motor threshold (p = 0.0026), with the stuttering group showing higher motor threshold of the left hemisphere in relation to the right. Also the absolute level of the finger motor threshold for the left hemisphere differed between the groups (p = 0.049). The obtained results, together with previous investigations, provide support for the hypothesis that stuttering tends to be related to left hemisphere motor impairment, and possibly to a dysfunctional state of bilateral speech motor control.

  19. Poor neuro-motor tuning of the human larynx: a comparison of sung and whistled pitch imitation

    Science.gov (United States)

    Johnson, Joseph F.; Kotz, Sonja A.

    2018-01-01

    Vocal imitation is a hallmark of human communication that underlies the capacity to learn to speak and sing. Even so, poor vocal imitation abilities are surprisingly common in the general population and even expert vocalists cannot match the precision of a musical instrument. Although humans have evolved a greater degree of control over the laryngeal muscles that govern voice production, this ability may be underdeveloped compared with control over the articulatory muscles, such as the tongue and lips, volitional control of which emerged earlier in primate evolution. Human participants imitated simple melodies by either singing (i.e. producing pitch with the larynx) or whistling (i.e. producing pitch with the lips and tongue). Sung notes were systematically biased towards each individual's habitual pitch, which we hypothesize may act to conserve muscular effort. Furthermore, while participants who sung more precisely also whistled more precisely, sung imitations were less precise than whistled imitations. The laryngeal muscles that control voice production are under less precise control than the oral muscles that are involved in whistling. This imprecision may be due to the relatively recent evolution of volitional laryngeal-motor control in humans, which may be tuned just well enough for the coarse modulation of vocal-pitch in speech. PMID:29765635

  20. A predictive model for diagnosing stroke-related apraxia of speech.

    Science.gov (United States)

    Ballard, Kirrie J; Azizi, Lamiae; Duffy, Joseph R; McNeil, Malcolm R; Halaki, Mark; O'Dwyer, Nicholas; Layfield, Claire; Scholl, Dominique I; Vogel, Adam P; Robin, Donald A

    2016-01-29

    Diagnosis of the speech motor planning/programming disorder, apraxia of speech (AOS), has proven challenging, largely due to its common co-occurrence with the language-based impairment of aphasia. Currently, diagnosis is based on perceptually identifying and rating the severity of several speech features. It is not known whether all, or a subset of the features, are required for a positive diagnosis. The purpose of this study was to assess predictor variables for the presence of AOS after left-hemisphere stroke, with the goal of increasing diagnostic objectivity and efficiency. This population-based case-control study involved a sample of 72 cases, using the outcome measure of expert judgment on presence of AOS and including a large number of independently collected candidate predictors representing behavioral measures of linguistic, cognitive, nonspeech oral motor, and speech motor ability. We constructed a predictive model using multiple imputation to deal with missing data; the Least Absolute Shrinkage and Selection Operator (Lasso) technique for variable selection to define the most relevant predictors, and bootstrapping to check the model stability and quantify the optimism of the developed model. Two measures were sufficient to distinguish between participants with AOS plus aphasia and those with aphasia alone, (1) a measure of speech errors with words of increasing length and (2) a measure of relative vowel duration in three-syllable words with weak-strong stress pattern (e.g., banana, potato). The model has high discriminative ability to distinguish between cases with and without AOS (c-index=0.93) and good agreement between observed and predicted probabilities (calibration slope=0.94). Some caution is warranted, given the relatively small sample specific to left-hemisphere stroke, and the limitations of imputing missing data. These two speech measures are straightforward to collect and analyse, facilitating use in research and clinical settings. 
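    The variable-selection step described (Lasso shrinkage over a large pool of candidate predictors) can be illustrated with a generic coordinate-descent Lasso on synthetic data. Nothing below reproduces the study's data or coefficients: the two informative predictors merely stand in for the two retained speech measures, and the remaining columns play the role of candidate predictors that Lasso shrinks away.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Lasso shrinkage operator: shrink toward zero by lam, clip at zero."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso(X, y, lam, n_iter=300):
    """Lasso regression by cyclic coordinate descent on standardized X."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# Synthetic data: 2 informative predictors (stand-ins for the word-length
# error score and relative vowel duration measures) plus 6 noise predictors.
rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 8))
X = (X - X.mean(0)) / X.std(0)          # standardize columns
y = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

beta = lasso(X, y, lam=0.25)
```

    With an appropriate penalty, the L1 term drives the noise coefficients to (or very near) zero while retaining the informative ones, which is the behaviour that lets a Lasso-based model distill a large candidate set down to a small number of diagnostic predictors.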

  1. Whole-exome sequencing supports genetic heterogeneity in childhood apraxia of speech.

    Science.gov (United States)

    Worthey, Elizabeth A; Raca, Gordana; Laffin, Jennifer J; Wilk, Brandon M; Harris, Jeremy M; Jakielski, Kathy J; Dimmock, David P; Strand, Edythe A; Shriberg, Lawrence D

    2013-10-02

    Childhood apraxia of speech (CAS) is a rare, severe, persistent pediatric motor speech disorder with associated deficits in sensorimotor, cognitive, language, learning and affective processes. Among other neurogenetic origins, CAS is the disorder segregating with a mutation in FOXP2 in a widely studied, multigenerational London family. We report the first whole-exome sequencing (WES) findings from a cohort of 10 unrelated participants, ages 3 to 19 years, with well-characterized CAS. As part of a larger study of children and youth with motor speech sound disorders, 32 participants were classified as positive for CAS on the basis of a behavioral classification marker using auditory-perceptual and acoustic methods that quantify the competence, precision and stability of a speaker's speech, prosody and voice. WES of 10 randomly selected participants was completed using the Illumina Genome Analyzer IIx Sequencing System. Image analysis, base calling, demultiplexing, read mapping, and variant calling were performed using Illumina software. Software developed in-house was used for variant annotation, prioritization and interpretation to identify those variants likely to be deleterious to neurodevelopmental substrates of speech-language development. Among potentially deleterious variants, clinically reportable findings of interest occurred on a total of five chromosomes (Chr3, Chr6, Chr7, Chr9 and Chr17), which included six genes either strongly associated with CAS (FOXP1 and CNTNAP2) or associated with disorders with phenotypes overlapping CAS (ATP13A4, CNTNAP1, KIAA0319 and SETX). A total of 8 (80%) of the 10 participants had clinically reportable variants in one or two of the six genes, with variants in ATP13A4, KIAA0319 and CNTNAP2 being the most prevalent. 
Similar to the results reported in emerging WES studies of other complex neurodevelopmental disorders, our findings from this first WES study of CAS are interpreted as support for heterogeneous genetic origins of

  2. Computational Neural Modeling of Speech Motor Control in Childhood Apraxia of Speech (CAS)

    Science.gov (United States)

    Terband, Hayo; Maassen, Ben; Guenther, Frank H.; Brumberg, Jonathan

    2009-01-01

    Purpose: Childhood apraxia of speech (CAS) has been associated with a wide variety of diagnostic descriptions and has been shown to involve different symptoms during successive stages of development. In the present study, the authors attempted to associate the symptoms of CAS in a particular developmental stage with particular…

  3. Effect of Deep Brain Stimulation on Speech Performance in Parkinson's Disease

    Directory of Open Access Journals (Sweden)

    Sabine Skodda

    2012-01-01

    Full Text Available Deep brain stimulation (DBS) has been reported to be successful in relieving the core motor symptoms of Parkinson's disease (PD) and motor fluctuations in the more advanced stages of the disease. However, data on the effects of DBS on speech performance are inconsistent. While some patient series document that speech function was relatively unaffected by DBS of the nucleus subthalamicus (STN), other investigators reported on improvements of distinct parameters of oral control and voice. However, these ameliorations of single speech modalities were not always accompanied by an improvement of overall speech intelligibility. On the other hand, there are also indications for an induction of dysarthria as an adverse effect of STN-DBS occurring at least in some patients with PD. Since a deterioration of speech function has more often been observed under high stimulation amplitudes, this phenomenon has been ascribed to a spread of current to adjacent pathways, which might also be the reason for the sporadic observation of an onset of dysarthria under DBS of other basal ganglia targets (e.g., globus pallidus internus/GPi or thalamus/Vim). The aim of this paper is to review and evaluate reports in the literature on the effects of DBS on speech function in PD.

  4. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10-14 diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns.

  5. Effects of ethanol intoxication on speech suprasegmentals

    Science.gov (United States)

    Hollien, Harry; Dejong, Gea; Martin, Camilo A.; Schwartz, Reva; Liljegren, Kristen

    2001-12-01

    The effects of ingesting ethanol have been shown to be somewhat variable in humans. To date, there appear to be but few universals. Yet, the question often arises: is it possible to determine if a person is intoxicated by observing them in some manner? A closely related question is: can speech be used for this purpose and, if so, can the degree of intoxication be determined? One of the many issues associated with these questions involves the relationships between a person's paralinguistic characteristics and the presence and level of inebriation. To this end, young, healthy speakers of both sexes were carefully selected and sorted into roughly equal groups of light, moderate, and heavy drinkers. They were asked to produce four types of utterances during a learning phase, when sober and at four strictly controlled levels of intoxication (three ascending and one descending). The primary motor speech measures employed were speaking fundamental frequency, speech intensity, speaking rate and nonfluencies. Several statistically significant changes were found for increasing intoxication; the primary ones included rises in F0, in task duration and for nonfluencies. Minor gender differences were found but they lacked statistical significance. So did the small differences among the drinking category subgroups and the subject groupings related to levels of perceived intoxication. Finally, although it may be concluded that certain changes in speech suprasegmentals will occur as a function of increasing intoxication, these patterns cannot be viewed as universal since a few subjects (about 20%) exhibited no (or negative) changes.
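Of the primary measures named in this record, speaking fundamental frequency is the one most often automated. A minimal sketch of one standard estimation approach (autocorrelation peak-picking within a plausible pitch range) is shown below; the synthetic 120 Hz tone is a stand-in for a real voiced speech frame, and the parameter choices are illustrative assumptions, not the study's actual analysis settings:

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency (F0) of one voiced frame by
    locating the autocorrelation peak inside a plausible pitch range."""
    frame = frame - np.mean(frame)                 # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(sr / fmax)                        # shortest period considered
    lag_hi = int(sr / fmin)                        # longest period considered
    best_lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
    return sr / best_lag

# Synthetic "voiced" frame at 120 Hz (hypothetical stand-in for speech)
sr = 16000
t = np.arange(2048) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 120.0 * t), sr)
```

In practice, changes in speaking F0 with intoxication would be tracked by applying such an estimator frame-by-frame and comparing the resulting contours across sober and intoxicated conditions.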

  6. Quantification and Systematic Characterization of Stuttering-Like Disfluencies in Acquired Apraxia of Speech.

    Science.gov (United States)

    Bailey, Dallin J; Blomgren, Michael; DeLong, Catharine; Berggren, Kiera; Wambaugh, Julie L

    2017-06-22

    The purpose of this article is to quantify and describe stuttering-like disfluencies in speakers with acquired apraxia of speech (AOS), utilizing the Lidcombe Behavioural Data Language (LBDL). Additional purposes include measuring test-retest reliability and examining the effect of speech sample type on disfluency rates. Two types of speech samples were elicited from 20 persons with AOS and aphasia: repetition of mono- and multisyllabic words from a protocol for assessing AOS (Duffy, 2013), and connected speech tasks (Nicholas & Brookshire, 1993). Sampling was repeated at 1 and 4 weeks following initial sampling. Stuttering-like disfluencies were coded using the LBDL, which is a taxonomy that focuses on motoric aspects of stuttering. Disfluency rates ranged from 0% to 13.1% for the connected speech task and from 0% to 17% for the word repetition task. There was no significant effect of speech sampling time on disfluency rate in the connected speech task, but there was a significant effect of time for the word repetition task. There was no significant effect of speech sample type. Speakers demonstrated both major types of stuttering-like disfluencies as categorized by the LBDL (fixed postures and repeated movements). Connected speech samples yielded more reliable tallies over repeated measurements. Suggestions are made for modifying the LBDL for use in AOS in order to further add to systematic descriptions of motoric disfluencies in this disorder.

  7. Language and motor abilities of preschool children who stutter: Evidence from behavioral and kinematic indices of nonword repetition performance

    Science.gov (United States)

    Smith, Anne; Goffman, Lisa; Sasisekaran, Jayanthi; Weber-Fox, Christine

    2012-01-01

Stuttering is a disorder of speech production that typically arises in the preschool years, and many accounts of its onset and development implicate language and motor processes as critical underlying factors. There have, however, been very few studies of speech motor control processes in preschool children who stutter. Hearing novel nonwords and reproducing them engages multiple neural networks, including those involved in phonological analysis and storage and speech motor programming and execution. We used this task to explore speech motor and language abilities of 31 children aged 4–5 years who were diagnosed as stuttering. We also used sensitive and specific standardized tests of speech and language abilities to determine which of the children who stutter had concomitant language and/or phonological disorders. Approximately half of our sample of stuttering children had language and/or phonological disorders. As previous investigations would suggest, the stuttering children with concomitant language or speech sound disorders produced significantly more errors on the nonword repetition task compared to typically developing children. In contrast, the children who were diagnosed as stuttering, but who had normal speech sound and language abilities, performed the nonword repetition task with equal accuracy compared to their normally fluent peers. Analyses of interarticulator motions during accurate and fluent productions of the nonwords revealed that the children who stutter (without concomitant disorders) showed higher variability in oral motor coordination indices. These results provide new evidence that preschool children diagnosed as stuttering lag their typically developing peers in maturation of speech motor control processes. Educational objectives: The reader will be able to: (a) discuss why performance on nonword repetition tasks has been investigated in children who stutter; (b) discuss why children who stutter in the current study had a higher incidence of concomitant language and/or phonological disorders.

  8. Mapping genetic influences on the corticospinal motor system in humans

    DEFF Research Database (Denmark)

    Cheeran, B J; Ritter, C; Rothwell, J C

    2009-01-01

It is becoming increasingly clear that genetic variations account for a certain amount of variance in the acquisition and maintenance of different skills. Until now, several levels of genetic influences have been examined, ranging from global heritability estimates down to the analysis of the contribution of single nucleotide polymorphisms (SNP) and variable number tandem repeats. In humans, the corticospinal motor system is essential to the acquisition of fine manual motor skills, which require finely tuned coordination of activity in distal forelimb muscles. Here we review recent brain mapping studies that have begun to explore the influence of functional genetic variation as well as mutations on the function and structure of the human corticospinal motor system, and also the clinical implications of these studies. Transcranial magnetic stimulation of the primary motor hand area revealed…

  9. Speech Analysis of Bengali Speaking Children with Repaired Cleft Lip & Palate

    Science.gov (United States)

    Chakrabarty, Madhushree; Kumar, Suman; Chatterjee, Indranil; Maheshwari, Neha

    2012-01-01

The present study aims at analyzing speech samples of four Bengali-speaking children with repaired cleft palates with a view to differentiating between the misarticulations arising out of a deficit in linguistic skills and those arising from structural or motoric limitations. Spontaneous speech samples were collected and subjected to a number of linguistic analyses…

  10. Distinct olfactory cross-modal effects on the human motor system.

    Directory of Open Access Journals (Sweden)

    Simone Rossi

BACKGROUND: Converging evidence indicates that action observation and action-related sounds cross-modally activate the human motor system. Since olfaction, the most ancestral sense, may have behavioural consequences on human activities, we investigated causally, by transcranial magnetic stimulation (TMS), whether food odour could additionally facilitate the human motor system during the observation of the grasping of objects with alimentary valence, and the degree of specificity of these effects. METHODOLOGY/PRINCIPAL FINDINGS: In a repeated-measures block design carried out on 24 healthy individuals participating in three different experiments, we show that sniffing alimentary odorants immediately increases the motor potentials evoked in hand muscles by TMS of the motor cortex. This effect was odorant-specific and was absent when subjects were presented with odorants including a potentially noxious trigeminal component. The smell-induced corticospinal facilitation of hand muscles during observation of grasping was an additive effect superimposed on that induced by the mere observation of grasping actions for food or non-food objects. The odour-induced motor facilitation took place only in the case of congruence between the sniffed odour and the observed grasped food, and specifically involved the muscle acting as prime mover for hand/finger shaping in the observed action. CONCLUSIONS/SIGNIFICANCE: Complex olfactory cross-modal effects on the human corticospinal system are physiologically demonstrable. They are odorant-specific and, depending on the experimental context, muscle- and action-specific as well. This finding implies potential new diagnostic and rehabilitative applications.

  11. Action observation and mirror neuron network: a tool for motor stroke rehabilitation.

    Science.gov (United States)

    Sale, P; Franceschini, M

    2012-06-01

Mirror neurons are a specific class of neurons that are activated and discharge both during observation of the same or a similar motor act performed by another individual and during the execution of a motor act. Different studies based on noninvasive neuroelectrophysiological assessment or functional brain imaging techniques have demonstrated the presence of mirror neurons and their mechanism in humans. Various authors have demonstrated that in humans these networks are activated when individuals learn motor actions via execution (as in traditional motor learning), imitation, observation (as in observational learning), and motor imagery. Activation of these brain areas (the inferior parietal lobe and the ventral premotor cortex, as well as the caudal part of the inferior frontal gyrus [IFG]) following observation or motor imagery may thereby facilitate subsequent movement execution by directly matching the observed or imagined action to the internal simulation of that action. It is therefore believed that this multi-sensory action-observation system enables individuals to (re)learn impaired motor functions through the activation of these internal action-related representations. In humans, the mirror mechanism is also located in various brain regions: in Broca's area, which is involved in language processing and speech production, and not only in centres that mediate voluntary movement but also in cortical areas that mediate visceromotor emotion-related behaviours. On the basis of these findings, during the last 10 years various studies have been carried out regarding the clinical use of action observation for the motor rehabilitation of subacute and chronic stroke patients.

  12. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

The theory of speech acts, which clarifies what people do when they speak, is not about individual words or sentences that form the basic elements of human communication, but rather about particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking. Many things can be done by speaking. Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches, from El-Sadat and El-Sisi, belonging to different periods, were analyzed to find out whether there were differences within this genre in the same culture or not. The study showed that there was a very small difference between these two speeches, which were analyzed according to Searle's theory of speech acts. In El-Sadat's speech, commissives occupied the first place; in El-Sisi's speech, assertives did. Within the speeches of one culture, the differences depended on the circumstances that surrounded the elections of the presidents at the time. Speech acts were tools the speakers used to convey what they wanted and to obtain support from their audiences.

  13. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without a frame-independence assumption. The experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  14. Modulation of Speech Motor Learning with Transcranial Direct Current Stimulation of the Inferior Parietal Lobe

    Directory of Open Access Journals (Sweden)

    Mickael L. D. Deroche

    2017-12-01

The inferior parietal lobe (IPL) is a region of the cortex believed to participate in speech motor learning. In this study, we investigated whether transcranial direct current stimulation (tDCS) of the IPL could influence the extent to which healthy adults (1) adapted to a sensory alteration of their own auditory feedback, and (2) changed their perceptual representation. Seventy subjects completed three tasks: a baseline perceptual task that located the phonetic boundary between the vowels /e/ and /a/; a sensorimotor adaptation task in which subjects produced the word “head” under conditions of altered or unaltered feedback; and a post-adaptation perceptual task identical to the first. Subjects were allocated to four groups which differed in current polarity and feedback manipulation. Subjects who received anodal tDCS to their IPL (i.e., presumably increasing cortical excitability) lowered their first formant frequency (F1) by 10% in opposition to the upward shift in F1 in their auditory feedback. Subjects who received the same stimulation with unaltered feedback did not change their production. Subjects who received cathodal tDCS to their IPL (i.e., presumably decreasing cortical excitability) showed a 5% adaptation to the F1 alteration, similar to subjects who received sham tDCS. A subset of subjects returned a few days later to repeat the same protocol but without tDCS, enabling assessment of any facilitatory effects of the previous tDCS. All subjects exhibited a 5% adaptation effect. In addition, across all subjects and both recording sessions, the phonetic boundary was shifted toward the repeated vowel /e/, consistent with the selective adaptation effect, but a correlation between perception and production suggested that anodal tDCS had enhanced this perceptual shift. In conclusion, we successfully demonstrated that anodal tDCS could (1) enhance the motor adaptation to a sensory alteration, and (2) potentially affect the…
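The adaptation magnitudes quoted in this record (10% vs. 5%) are percent changes in produced F1 relative to baseline. A one-line sketch of that arithmetic, under the assumed (hypothetical) sign convention that production shifts opposing the feedback alteration count as positive adaptation:

```python
def percent_adaptation(baseline_f1_hz, adapted_f1_hz):
    """Adaptation magnitude as percent change in produced F1 from baseline.
    A downward production shift opposing an upward feedback shift yields
    a positive value (illustrative convention, not the study's definition)."""
    return 100.0 * (baseline_f1_hz - adapted_f1_hz) / baseline_f1_hz

# e.g., a hypothetical baseline F1 of 700 Hz lowered to 630 Hz -> 10%
adaptation = percent_adaptation(700.0, 630.0)
```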

  15. The Role of Rhythm in Speech and Language Rehabilitation: The SEP Hypothesis.

    Science.gov (United States)

    Fujii, Shinya; Wan, Catherine Y

    2014-01-01

    For thousands of years, human beings have engaged in rhythmic activities such as drumming, dancing, and singing. Rhythm can be a powerful medium to stimulate communication and social interactions, due to the strong sensorimotor coupling. For example, the mere presence of an underlying beat or pulse can result in spontaneous motor responses such as hand clapping, foot stepping, and rhythmic vocalizations. Examining the relationship between rhythm and speech is fundamental not only to our understanding of the origins of human communication but also in the treatment of neurological disorders. In this paper, we explore whether rhythm has therapeutic potential for promoting recovery from speech and language dysfunctions. Although clinical studies are limited to date, existing experimental evidence demonstrates rich rhythmic organization in both music and language, as well as overlapping brain networks that are crucial in the design of rehabilitation approaches. Here, we propose the "SEP" hypothesis, which postulates that (1) "sound envelope processing" and (2) "synchronization and entrainment to pulse" may help stimulate brain networks that underlie human communication. Ultimately, we hope that the SEP hypothesis will provide a useful framework for facilitating rhythm-based research in various patient populations.

  16. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    Science.gov (United States)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
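The decoding pipeline this record describes (frame-wise phoneme likelihood estimates combined with n-gram transition probabilities in a Viterbi search) can be sketched as follows. The two-state example and uniform probabilities are illustrative stand-ins, not the study's LDA model or phonemic language model:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence given per-frame log-likelihoods.

    log_emit:  (T, S) frame-wise log-likelihoods (e.g., from an LDA model)
    log_trans: (S, S) log transition probabilities (e.g., from a bigram LM)
    log_init:  (S,)   log initial-state probabilities
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_trans           # (prev, cur) scores
        back[t] = np.argmax(scores, axis=0)           # best predecessor
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):           # backtrack from the final frame
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Tiny example: 2 hypothetical "phoneme" states over 3 frames
log_emit = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]]))
log_trans = np.log(np.full((2, 2), 0.5))    # stand-in for bigram LM probs
log_init = np.log(np.array([0.5, 0.5]))
best_path = viterbi(log_emit, log_trans, log_init)
```

With informative (non-uniform) transition probabilities, the language model term can override weak frame-level evidence, which is the mechanism by which including the phonemic language model improved the NSR system's recognition.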

  17. Prediction and imitation in speech

    Directory of Open Access Journals (Sweden)

    Chiara eGambi

    2013-06-01

It has been suggested that intra- and inter-speaker variability in speech are correlated. Interlocutors have been shown to converge on various phonetic dimensions. In addition, speakers imitate the phonetic properties of voices they are exposed to in shadowing, repetition, and even passive listening tasks. We review three theoretical accounts of speech imitation and convergence phenomena: (i) the Episodic Theory (ET) of speech perception and production (Goldinger, 1998); (ii) the Motor Theory (MT) of speech perception (Liberman and Whalen, 2000; Galantucci et al., 2006); (iii) Communication Accommodation Theory (CAT; Giles et al., 1991; Giles and Coupland, 1991). We argue that no account is able to explain all the available evidence. In particular, there is a need to integrate low-level, mechanistic accounts (like ET and MT) and higher-level accounts (like CAT). We propose that this is possible within the framework of an integrated theory of production and comprehension (Pickering & Garrod, in press). Similarly to both ET and MT, this theory assumes parity between production and perception. Uniquely, however, it posits that listeners simulate speakers’ utterances by computing forward-model predictions at many different levels, which are then compared to the incoming phonetic input. In our account, phonetic imitation can be achieved via the same mechanism that is responsible for sensorimotor adaptation, i.e., the correction of prediction errors. In addition, the model assumes that the degree to which sensory prediction errors lead to motor adjustments is context-dependent. The notion of context subsumes both the preceding linguistic input and non-linguistic attributes of the situation (e.g., the speaker’s and listener’s social identities, their conversational roles, the listener’s intention to imitate).

  18. The connection of hemispheric activity in the field of audioverbal perception and the progressive lateralization of speech and motor processes.

    Directory of Open Access Journals (Sweden)

    Kovyazina, M.S.

    2015-07-01

This article discusses the connection between hemispheric control over audioverbal perception processes and such individual features as the “leading hand” (right-handedness and left-handedness). We present a literature review and a description of our research to provide evidence of the complexity and ambiguity of this connection. The method of dichotic listening was used for diagnosing audioverbal perception lateralization. This method allows estimation of the right-ear coefficient (REC), the efficiency coefficient (EC), and the effectiveness ratio (ER) of different aspects of audioverbal perception. Our research involved 47 persons with a leading right hand (mean age, 29.04±9.97 years) and 32 persons with a leading left hand (mean age, 29.41±10.34 years). Different hypotheses about the mechanisms of hemispheric control over audioverbal and motor processes were assessed. The research showed that both the left- and right-handers’ audioverbal perception characteristics depended mainly on right-hemisphere activity. The most dynamic and sensitive index of the functioning of the two hemispheres during dichotic listening was the efficiency coefficient of stimuli reproduction through the left ear (EC of the left ear). This index turns out to depend on the coincidence/noncoincidence of the leading hemispheres in speech and motor processes. The highest efficiency of audioverbal perception revealed itself in the left-handers with a leading left ear (hemispheric-control coincidence), and the lowest efficiency was in the left-handers with a leading right ear (hemispheric-control divergence). The right-handers were characterized by less variation in values, although the influence of the coincidence/noncoincidence of the leading hemispheres in speech and motor processes also revealed itself as a tendency. This consistent pattern points out the necessity for further research on asymmetries of the different modalities that takes into account their probable…
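The record does not define its laterality indices, but the right-ear coefficient in dichotic listening is conventionally computed from the counts of correctly reported stimuli per ear. A sketch of that common formulation (an assumption here, not necessarily the study's exact formula):

```python
def right_ear_coefficient(right_correct, left_correct):
    """Common dichotic-listening laterality index: 100 * (R - L) / (R + L).
    Positive values indicate a right-ear (typically left-hemisphere)
    advantage; this standard formula is assumed, not taken from the study."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# e.g., 30 correct right-ear vs. 20 correct left-ear reports -> REC of 20
rec = right_ear_coefficient(30, 20)
```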

  19. Mobile communication jacket for people with severe speech impairment.

    Science.gov (United States)

    Lampe, Renée; Blumenstein, Tobias; Turova, Varvara; Alves-Pinto, Ana

    2018-04-01

Cerebral palsy is a movement disorder caused by damage to motor control areas of the developing brain during early childhood. Motor disorders can also affect the ability to produce clear speech and to communicate. The aim of this study was to develop and to test a prototype of an assistive tool with an embedded mobile communication device to support patients with severe speech impairments. A prototype was developed by equipping a cycling jacket with a display, a small keyboard, an LED and an alarm system, all controlled by a microcontroller. Functionality of the prototype was tested in six participants (aged 7-20 years) with cerebral palsy and global developmental disorder and three healthy persons. A patient questionnaire consisting of seven items was used as an evaluation tool. A working prototype of the communication jacket was developed and tested. The questionnaire elicited positive responses from participants. Improvements to correct the weaknesses revealed were proposed. Enhancements like voice output of pre-selected phrases and an enlarged display were implemented. Integration in a jacket makes the system mobile and continuously available to the user. The communication jacket may be of great benefit to patients with motor and speech impairments. Implications for Rehabilitation: The communication jacket developed can be easily used by people with movement and speech impairment. All technical components are integrated in a garment and do not have to be held with the hands or transported separately. The system is adaptable to individual use. Both expected and unexpected events can be dealt with, which contributes to quality of life and self-fulfilment.

  20. SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support

    Directory of Open Access Journals (Sweden)

    Giampiero Salvi

    2009-01-01

This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. Firstly, we describe the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser. Secondly, we report on speech intelligibility experiments with a focus on multilinguality and robustness to audio quality. The system, already available for Swedish, English, and Flemish, was optimised for German and for the Swedish wide-band speech quality available in TV, radio, and Internet communication. Lastly, the paper covers experiments with nonverbal motions driven from the speech signal. It is shown that turn-taking gestures can be used to affect the flow of human-human dialogues. We have focused specifically on two categories of cues that may be extracted from the acoustic signal: prominence/emphasis and interactional cues (turn-taking/back-channelling).

  1. A case of crossed aphasia with apraxia of speech

    Directory of Open Access Journals (Sweden)

    Yogesh Patidar

    2013-01-01

Apraxia of speech (AOS) is a rare but well-defined motor speech disorder. It is characterized by irregular articulatory errors, attempts at self-correction, and persistent prosodic abnormalities. Like aphasia, AOS is localized to the dominant cerebral hemisphere. We report a case of crossed aphasia with AOS in a 48-year-old right-handed man due to an ischemic infarct in the right cerebral hemisphere.

  2. Interpretation of basic concepts in theories of human motor abilities

    Directory of Open Access Journals (Sweden)

    Petrović Adam

    2014-01-01

The basic aim of this research is to point out possible language, logical, and knowledge-related problems in the interpretation and understanding of basic concepts in theories of motor abilities (TMA). Such a review is not directed only at 'mere understanding'; it can lead to new growth of scientific knowledge. Accordingly, the research question is posed: Is there language, logical, and knowledge agreement between the basic concepts in the theories of human motor abilities? The answer to this question suggests that a more complete agreement between the basic concepts in the theories of human motor abilities should be sought through scientific dialogue between researchers of various persuasions.

  3. Language and motor abilities of preschool children who stutter: evidence from behavioral and kinematic indices of nonword repetition performance.

    Science.gov (United States)

    Smith, Anne; Goffman, Lisa; Sasisekaran, Jayanthi; Weber-Fox, Christine

    2012-12-01

    Stuttering is a disorder of speech production that typically arises in the preschool years, and many accounts of its onset and development implicate language and motor processes as critical underlying factors. There have, however, been very few studies of speech motor control processes in preschool children who stutter. Hearing novel nonwords and reproducing them engages multiple neural networks, including those involved in phonological analysis and storage and speech motor programming and execution. We used this task to explore speech motor and language abilities of 31 children aged 4-5 years who were diagnosed as stuttering. We also used sensitive and specific standardized tests of speech and language abilities to determine which of the children who stutter had concomitant language and/or phonological disorders. Approximately half of our sample of stuttering children had language and/or phonological disorders. As previous investigations would suggest, the stuttering children with concomitant language or speech sound disorders produced significantly more errors on the nonword repetition task compared to typically developing children. In contrast, the children who were diagnosed as stuttering, but who had normal speech sound and language abilities, performed the nonword repetition task with equal accuracy compared to their normally fluent peers. Analyses of interarticulator motions during accurate and fluent productions of the nonwords revealed that the children who stutter (without concomitant disorders) showed higher variability in oral motor coordination indices. These results provide new evidence that preschool children diagnosed as stuttering lag their typically developing peers in maturation of speech motor control processes. 
The reader will be able to: (a) discuss why performance on nonword repetition tasks has been investigated in children who stutter; (b) discuss why children who stutter in the current study had a higher incidence of concomitant language…

  4. Simultaneous Treatment of Grammatical and Speech-Comprehensibility Deficits in Children with Down Syndrome

    Science.gov (United States)

    Camarata, Stephen; Yoder, Paul; Camarata, Mary

    2006-01-01

    Children with Down syndrome often display speech-comprehensibility and grammatical deficits beyond what would be predicted based upon general mental age. Historically, speech-comprehensibility has often been treated using traditional articulation therapy and oral-motor training so there may be little or no coordination of grammatical and…

  5. A probabilistic map of the human ventral sensorimotor cortex using electrical stimulation.

    Science.gov (United States)

    Breshears, Jonathan D; Molinaro, Annette M; Chang, Edward F

    2015-08-01

    The human ventral sensorimotor cortex (vSMC) is involved in facial expression, mastication, and swallowing, as well as the dynamic and highly coordinated movements of human speech production. However, vSMC organization remains poorly understood, and previously published population-driven maps of its somatotopy do not accurately reflect the variability across individuals in a quantitative, probabilistic fashion. The goal of this study was to describe the responses to electrical stimulation of the vSMC, generate probabilistic maps of function in the vSMC, and quantify the variability across individuals. Photographic, video, and stereotactic MRI data of intraoperative electrical stimulation of the vSMC were collected for 33 patients undergoing awake craniotomy. Stimulation sites were converted to a 2D coordinate system based on anatomical landmarks. Motor, sensory, and speech stimulation responses were reviewed and classified. Probabilistic maps of stimulation responses were generated, and spatial variance was quantified. In 33 patients, the authors identified 194 motor, 212 sensory, 61 speech-arrest, and 27 mixed responses. Responses were complex, stereotyped, and mostly nonphysiological movements, involving hand, orofacial, and laryngeal musculature. Within individuals, the presence of oral movement representations varied; however, the dorsal-ventral order was always preserved. The most robust motor responses were jaw (probability 0.85), tongue (0.64), lips (0.58), and throat (0.52). Vocalizations were seen in 6 patients (0.18), more dorsally near lip and dorsal throat areas. Sensory responses were spatially dispersed; however, patients' subjective reports were highly precise in localization within the mouth. The most robust responses included tongue (0.82) and lips (0.42). The probability of speech arrest was 0.85, highest 15-20 mm anterior to the central sulcus and just dorsal to the sylvian fissure, in the anterior precentral gyrus or pars opercularis. The…

  6. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    Science.gov (United States)

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  7. Outcomes of physical therapy, speech pathology, and occupational therapy for people with motor neuron disease: a systematic review.

    Science.gov (United States)

    Morris, Meg E; Perry, Alison; Bilney, Belinda; Curran, Andrea; Dodd, Karen; Wittwer, Joanne E; Dalton, Gregory W

    2006-09-01

    This article describes a systematic review and critical evaluation of the international literature on the effects of physical therapy, speech pathology, and occupational therapy for people with motor neuron disease (PwMND). The results were interpreted using the framework of the International Classification of Functioning, Disability and Health. This enabled us to summarize therapy outcomes at the level of body structure and function, activity limitations, participation restrictions, and quality of life. Databases searched included MEDLINE, PubMed, CINAHL, PsycINFO, the Database of Abstracts of Reviews of Effects (DARE), the Physiotherapy Evidence Database (PEDro), Evidence Based Medicine Reviews (EMBASE), the Cochrane Database of Systematic Reviews, and the Cochrane Controlled Trials Register. Evidence was graded according to the Harbour and Miller classification. Most of the evidence was found to be at the level of "clinical opinion" rather than controlled clinical trials. Several nonrandomized small-group and "observational" studies provided low-level evidence to support physical therapy for improving muscle strength and pulmonary function. There was also some evidence to support the effectiveness of speech pathology interventions for dysarthria. The search identified a small number of studies on occupational therapy for PwMND, which were small, noncontrolled pre-post designs or clinical reports.

  8. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy.

    Science.gov (United States)

    Thaut, Michael H

    2015-01-01

    The discovery of rhythmic auditory-motor entrainment in clinical populations was a historical breakthrough in demonstrating for the first time a neurological mechanism linking music to retraining brain and behavioral functions. Early pilot studies from this research center were followed up by a systematic line of research studying rhythmic auditory stimulation on motor therapies for stroke, Parkinson's disease, traumatic brain injury, cerebral palsy, and other movement disorders. The comprehensive effects on improving multiple aspects of motor control established the first neuroscience-based clinical method in music, which became the bedrock for the later development of neurologic music therapy. The discovery of entrainment fundamentally shifted and extended the view of the therapeutic properties of music from a psychosocially dominated view to a view using the structural elements of music to retrain motor control, speech and language function, and cognitive functions such as attention and memory. © 2015 Elsevier B.V. All rights reserved.

  9. The Economy of Fluent Speaking: Phrase-Level Reduction in a Patient with Pure Apraxia of Speech

    Science.gov (United States)

    Staiger, Anja; Ruttenauer, Anna; Ziegler, Wolfram

    2010-01-01

    The term "phrase-level reduction" refers to transformations of the phonetic forms of words in connected speech. They are a characteristic property of fluent speech in normal speakers. Phrase-level reductions contribute to a reduction of articulatory-motor effort and constitute an important aspect of speech naturalness. So far, these phenomena have…

  10. Speech rhythms and multiplexed oscillatory sensory coding in the human brain.

    Directory of Open Access Journals (Sweden)

    Joachim Gross

    2013-12-01

    Full Text Available Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations.

  11. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    Science.gov (United States)

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
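
    Records 10 and 11 quantify entrainment by comparing the instantaneous phase of cortical oscillations with the speech signal. A minimal sketch of one standard entrainment measure, the phase-locking value, built on an FFT-based analytic signal; the signals, frequencies, and lag below are synthetic illustrations, not the study's MEG pipeline:

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (same construction as scipy.signal.hilbert).
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

def phase_locking_value(x, y):
    # PLV = |mean(exp(i*(phase_x - phase_y)))|: 1 for a constant phase
    # lag (strong entrainment), near 0 for unrelated phases.
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Toy check: a 4 Hz "theta" oscillation tracking a 4 Hz envelope at a
# fixed lag is perfectly phase-locked; an unrelated 7.3 Hz signal is not.
t = np.arange(0, 10, 1 / 200)
envelope = np.sin(2 * np.pi * 4 * t)
theta = np.sin(2 * np.pi * 4 * t - 0.8)
print(phase_locking_value(envelope, theta))                        # ≈ 1.0
print(phase_locking_value(envelope, np.sin(2 * np.pi * 7.3 * t)))  # ≈ 0.0
```

    In practice the neural signal would first be band-pass filtered into the delta, theta, or gamma range before extracting its phase.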

  12. Investigation of Speech Impairments in A Child with HIV: The Study of Phonological Processes: Case Report

    Directory of Open Access Journals (Sweden)

    Zahra Ilkhani

    2017-03-01

    Full Text Available Human immunodeficiency virus (HIV) is a viral infection that causes immunodeficiency in humans and can therefore affect multiple developmental domains, including language, speech, motor skills, and memory. The present case report describes the phonological processes of a child with HIV who lived in a nursery and was referred for professional assessment at a speech therapy clinic. The child was a 4-year-old boy whose HIV status had been confirmed by blood test. Speech skills were assessed with the DEAP, and language was analyzed according to the TOLD-P3. He spoke in single words and rarely used two-word sentences. The TOLD-P3 language assessment examined semantic, syntactic, and phonological features and placed him at the emerging-language stage; his expressive language was also weaker than his receptive language. In addition, the DEAP phonology test showed that substitution was the most frequently occurring phonological process, and most of the substitutions were velar fronting. This study showed that the most common phonological process in a child with HIV was substitution, which may be a risk factor for decreased speech intelligibility. Given these results, and the limited research in this area, …

  13. A functional near-infrared spectroscopic investigation of speech production during reading.

    Science.gov (United States)

    Wan, Nick; Hancock, Allison S; Moon, Todd K; Gillam, Ronald B

    2018-03-01

    This study was designed to test the extent to which speaking processes related to articulation and voicing influence Functional Near Infrared Spectroscopy (fNIRS) measures of cortical hemodynamics and functional connectivity. Participants read passages in three conditions (oral reading, silent mouthing, and silent reading) while undergoing fNIRS imaging. Area under the curve (AUC) analyses of the oxygenated and deoxygenated hemodynamic response function concentration values were compared for each task across five regions of interest. There were significant region main effects for both oxy and deoxy AUC analyses, and a significant region × task interaction for deoxy AUC favoring the oral reading condition over the silent reading condition for two nonmotor regions. Assessment of functional connectivity using Granger Causality revealed stronger networks between motor areas during oral reading and stronger networks between language areas during silent reading. There was no evidence that the hemodynamic flow from motor areas during oral reading compromised measures of language-related neural activity in nonmotor areas. However, speech movements had small, but measurable effects on fNIRS measures of neural connections between motor and nonmotor brain areas across the perisylvian region, even after wavelet filtering. Therefore, researchers studying speech processes with fNIRS should use wavelet filtering during preprocessing to reduce speech motion artifacts, incorporate a nonspeech communication or language control task into the research design, and conduct a connectivity analysis to adequately assess the impact of functional speech on the hemodynamic response across the perisylvian region. © 2017 Wiley Periodicals, Inc.
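
    The area-under-the-curve analysis above reduces each hemodynamic response curve to a single summary number. A generic trapezoid-rule sketch; the gamma-like curve and 10 Hz sampling are illustrative assumptions, not the study's actual fNIRS parameters:

```python
import numpy as np

def hrf_auc(concentration, dt):
    # Trapezoid-rule area under a sampled hemodynamic response curve.
    c = np.asarray(concentration, dtype=float)
    return float(np.sum((c[1:] + c[:-1]) * 0.5 * dt))

# Toy HRF-like curve: rises, peaks, and returns toward baseline.
t = np.arange(0, 20, 0.1)            # 20 s sampled at 10 Hz
hrf = t ** 2 * np.exp(-t)            # gamma-like shape (illustrative)
print(hrf_auc(hrf, 0.1))             # ≈ 2.0, since the integral of t^2 e^-t is Γ(3) = 2
```

    In a real analysis the curve would be the oxygenated or deoxygenated concentration estimate within a task window, often after baseline correction.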

  14. Coupling dynamics in speech gestures: amplitude and rate influences.

    Science.gov (United States)

    van Lieshout, Pascal H H M

    2017-08-01

    Speech is a complex oral motor function that involves multiple articulators that need to be coordinated in space and time at relatively high movement speeds. How this is accomplished remains an important and largely unresolved empirical question. From a coordination dynamics perspective, coordination involves the assembly of coordinative units that are characterized by inherently stable coupling patterns that act as attractor states for task-specific actions. In the motor control literature, one particular model formulated by Haken et al. (Biol Cybern 51(5):347-356, 1985) or HKB has received considerable attention in the way it can account for changes in the nature and stability of specific coordination patterns between limbs or between limbs and external stimuli. In this model (and related versions), movement amplitude is considered a critical factor in the formation of these patterns. Several studies have demonstrated its role for bimanual coordination and similar types of tasks, but for speech motor control such studies are lacking. The current study describes a systematic approach to evaluate the impact of movement amplitude and movement duration on coordination stability in the production of bilabial and tongue body gestures for specific vowel-consonant-vowel strings. The vowel combinations that were used induced a natural contrast in movement amplitude at three speaking rate conditions (slow, habitual, fast). Data were collected on ten young adults using electromagnetic articulography, recording movement data from lips and tongue with high temporal and spatial precision. The results showed that with small movement amplitudes there is a decrease in coordination stability, independent from movement duration. These findings were found to be robust across all individuals and are interpreted as further evidence that principles of coupling dynamics operate in the oral motor control system similar to other motor systems and can be explained in terms of coupling
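
    The HKB model cited above describes coordination through the dynamics of the relative phase φ between two oscillating effectors, dφ/dt = -a sin φ - 2b sin 2φ, where the ratio b/a acts as coupling strength and falls as movement rate rises or amplitude shrinks. A sketch of why the anti-phase pattern (φ = π) loses stability when b/a drops below 1/4; the parameter values are illustrative, not fitted to speech data:

```python
import math

def simulate_hkb(phi0, a=1.0, b=1.0, dt=0.01, steps=5000):
    # Euler integration of dphi/dt = -a*sin(phi) - 2*b*sin(2*phi).
    # Anti-phase (phi = pi) is a stable attractor only while b/a > 1/4.
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2.0 * b * math.sin(2.0 * phi))
    return phi

# Strong coupling: a perturbed anti-phase pattern settles back to pi.
print(round(simulate_hkb(math.pi - 0.1, b=1.0), 2))   # ≈ 3.14
# Weak coupling (b/a < 1/4): the same start decays to in-phase (0).
print(round(simulate_hkb(math.pi - 0.1, b=0.1), 2))   # ≈ 0.0
```

    This loss of anti-phase stability under weakened coupling is the kind of amplitude- and rate-dependent effect the study above probes in lip and tongue gestures.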

  15. Two is better than one: Physical interactions improve motor performance in humans

    OpenAIRE

    G. Ganesh; A. Takagi; R. Osu; T. Yoshioka; M. Kawato; E. Burdet

    2014-01-01

    How do physical interactions with others change our own motor behavior? Utilizing a novel motor learning paradigm in which the hands of two individuals are physically connected without their conscious awareness, we investigated how the interaction forces from a partner adapt the motor behavior in physically interacting humans. We observed the motor adaptations during physical interactions to be mutually beneficial such that both the worse and better of the interacting partners improve motor...

  16. A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2.

    Science.gov (United States)

    Fedorenko, Evelina; Morgan, Angela; Murray, Elizabeth; Cardinaux, Annie; Mei, Cristina; Tager-Flusberg, Helen; Fisher, Simon E; Kanwisher, Nancy

    2016-02-01

    Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, are unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders.

  17. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
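
    The "energy above approximately 5 kHz" characterized above can be quantified as the fraction of total spectral energy beyond a cutoff. A minimal sketch using an FFT power spectrum; the two-tone signal is only a synthetic stand-in for recorded speech or singing:

```python
import numpy as np

def high_frequency_energy_ratio(signal, fs, cutoff_hz=5000.0):
    # Fraction of total spectral energy at or above cutoff_hz.
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(power[freqs >= cutoff_hz].sum() / power.sum())

# Synthetic example: a 1 kHz tone plus a weaker 8 kHz tone, 1 s at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 8000 * t)
print(high_frequency_energy_ratio(x, fs))   # ≈ 0.01 (the squared 0.1 amplitude ratio)
```

    Capturing such content in practice requires a recording chain with flat response well beyond 5 kHz, which is why the study built its own high-fidelity database.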

  18. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency: it offers a fast rate of data/text entry and a small, lightweight overall design, and it frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (hidden Markov model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and by using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed; these can help real-time ASR system designers select appropriate tasks in the face of computational resource constraints.
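
    The first pipeline stage listed above, beamforming/multichannel noise reduction, can be illustrated with a delay-and-sum beamformer: each microphone channel is advanced by its known steering delay so the speech adds coherently while uncorrelated noise averages down. A toy sketch with integer-sample delays; a real spacesuit system would use fractional delays and adaptive processing, and all signals below are synthetic:

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    # Advance each channel by its integer steering delay, then average.
    n = min(len(s) - d for s, d in zip(mic_signals, delays_samples))
    aligned = [np.asarray(s)[d:d + n] for s, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)

# Two-mic example: the same "speech" reaches mic 2 three samples later,
# and each microphone adds its own independent noise.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 0.05 * np.arange(200))
mic1 = speech + 0.5 * rng.standard_normal(200)
mic2 = np.concatenate([np.zeros(3), speech[:-3]]) + 0.5 * rng.standard_normal(200)
out = delay_and_sum([mic1, mic2], [0, 3])
# Residual noise power is roughly halved relative to a single microphone.
print(np.var(out - speech[:197]) < np.var(mic1[:197] - speech[:197]))  # True
```

    With M microphones and uncorrelated noise, the noise power of the aligned average falls by a factor of M, which is the mechanism behind the recognition-rate gains reported above.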

  19. Speech rate in Parkinson's disease: A controlled study.

    Science.gov (United States)

    Martínez-Sánchez, F; Meilán, J J G; Carro, J; Gómez Íñiguez, C; Millian-Morell, L; Pujante Valverde, I M; López-Alburquerque, T; López, D E

    2016-09-01

    Speech disturbances will affect most patients with Parkinson's disease (PD) over the course of the disease. The origin and severity of these symptoms are of clinical and diagnostic interest. To evaluate the clinical pattern of speech impairment in PD patients and identify significant differences in speech rate and articulation compared to control subjects. Speech rate and articulation in a reading task were measured using an automatic analytical method. A total of 39 PD patients in the 'on' state and 45 age- and sex-matched asymptomatic controls participated in the study. None of the patients experienced dyskinesias or motor fluctuations during the test. The patients with PD displayed a significant reduction in speech and articulation rates; there were no significant correlations between the studied speech parameters and patient characteristics such as L-dopa dose, duration of the disorder, age, and scores on the UPDRS-III and Hoehn & Yahr scales. Patients with PD show a characteristic pattern of declining speech rate. These results suggest that in PD, disfluencies are the result of the movement disorder affecting the physiology of speech production systems. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
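
    Automatic speech-rate measurement of the kind mentioned above is often approximated by counting peaks in a smoothed short-time energy envelope, one peak per syllable nucleus. A crude sketch; the frame size, smoothing, and threshold are illustrative choices rather than the study's method, and the synthetic signal simply amplitude-modulates a tone at five "syllables" per second:

```python
import numpy as np

def syllable_rate(signal, fs, frame_ms=10, smooth_frames=5, thresh=0.3):
    # Count local maxima of the normalized, smoothed short-time energy
    # envelope that exceed a threshold; return peaks per second.
    frame = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    env = np.convolve(energy, np.ones(smooth_frames) / smooth_frames, mode="same")
    env /= env.max()
    peaks = [i for i in range(1, len(env) - 1)
             if env[i] > env[i - 1] and env[i] >= env[i + 1] and env[i] > thresh]
    return len(peaks) / (n_frames * frame / fs)

# Synthetic "speech": a 200 Hz tone amplitude-modulated at 5 Hz for 2 s.
fs = 8000
t = np.arange(2 * fs) / fs
x = 0.5 * (1 - np.cos(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)
print(syllable_rate(x, fs))   # ≈ 5.0 "syllables" per second
```

    Articulation rate, by contrast, is computed over speaking time only, with pauses excluded from the denominator.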

  20. Treating speech subsystems in childhood apraxia of speech with tactual input: the PROMPT approach.

    Science.gov (United States)

    Dale, Philip S; Hayden, Deborah A

    2013-11-01

    Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT; Hayden, 2004; Hayden, Eigen, Walker, & Olsen, 2010), a treatment approach for the improvement of speech sound disorders in children, uses tactile-kinesthetic-proprioceptive (TKP) cues to support and shape movements of the oral articulators. No research to date has systematically examined the efficacy of PROMPT for children with childhood apraxia of speech (CAS). Four children (ages 3;6 [years;months] to 4;8), all meeting the American Speech-Language-Hearing Association (2007) criteria for CAS, were treated using PROMPT. All children received 8 weeks of twice-weekly treatment, including at least 4 weeks of full PROMPT treatment that included TKP cues. During the first 4 weeks, 2 of the 4 children received treatment that included all PROMPT components except TKP cues. This design permitted both between-subjects and within-subjects comparisons to evaluate the effect of TKP cues. Gains in treatment were measured by standardized tests and by criterion-referenced measures based on the production of untreated probe words, reflecting change in speech movements and auditory perceptual accuracy. All 4 children made significant gains during treatment, but measures of motor speech control and untreated word probes provided evidence for more gain when TKP cues were included. PROMPT as a whole appears to be effective for treating children with CAS, and the inclusion of TKP cues appears to facilitate greater effect.

  1. Interaction of language processing and motor skill in children with specific language impairment.

    Science.gov (United States)

    DiDonato Brumbach, Andrea C; Goffman, Lisa

    2014-02-01

    To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for articulatory duration and variability. Standard measures of motor, language, and articulation skill were also obtained. Sentences containing particles, as compared with prepositions, were less likely to be produced in a priming task and were longer in duration, suggesting increased difficulty with this syntactic structure. Children with SLI demonstrated higher articulatory variability and poorer gross and fine motor skills compared with age-matched controls. Articulatory variability was correlated with generalized gross and fine motor performance. Children with SLI show co-occurring speech motor and generalized motor deficits. Current theories do not fully account for the present findings, though the procedural deficit hypothesis provides a framework for interpreting overlap among language and motor domains.

  2. Expanding the phenotypic profile of Kleefstra syndrome: A female with low-average intelligence and childhood apraxia of speech.

    Science.gov (United States)

    Samango-Sprouse, Carole; Lawson, Patrick; Sprouse, Courtney; Stapleton, Emily; Sadeghin, Teresa; Gropman, Andrea

    2016-05-01

    Kleefstra syndrome (KS) is a rare neurogenetic disorder most commonly caused by deletion in the 9q34.3 chromosomal region and is associated with intellectual disabilities, severe speech delay, and motor planning deficits. To our knowledge, this is the first patient (PQ, a 6-year-old female) with a 9q34.3 deletion who has near-normal intelligence and developmental dyspraxia with childhood apraxia of speech (CAS). At age 6, Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) testing revealed a Verbal IQ of 81 and a Performance IQ of 79. The Beery-Buktenica Developmental Test of Visual-Motor Integration, 5th Edition (VMI) indicated severe visual-motor deficits: VMI = 51; Visual Perception = 48; Motor Coordination … explanation for the previously reported speech delay and expressive language disorder. Further research is warranted on the impact of CAS on intelligence and behavioral outcome in KS. Therapeutic and prognostic implications are discussed. © 2016 Wiley Periodicals, Inc.

  3. Transcranial direct current stimulation of the primary motor cortex improves word-retrieval in older adults.

    Directory of Open Access Journals (Sweden)

    Marcus eMeinzer

    2014-09-01

    Full Text Available Language facilitation by transcranial direct current stimulation (tDCS) in healthy individuals has generated hope that tDCS may also allow improving language impairment after stroke (aphasia). However, current stimulation protocols have yielded variable results and may require identification of residual language cortex using functional magnetic resonance imaging (fMRI), which complicates incorporation into clinical practice. Based on previous behavioral studies that demonstrated improved language processing by motor system pre-activation, the present study assessed whether tDCS administered to the primary motor cortex (M1) can enhance language functions. This proof-of-concept study employed a sham-tDCS controlled, cross-over, within-subject design and assessed the impact of unilateral excitatory (anodal) and bihemispheric (dual) tDCS in eighteen healthy older adults during semantic word-retrieval and motor speech tasks. Simultaneous fMRI scrutinized the neural mechanisms underlying tDCS effects. Both active tDCS conditions significantly improved word-retrieval compared to sham-tDCS. The direct comparison of activity elicited by word-retrieval vs. motor-speech trials revealed bilateral frontal activity increases during both anodal- and dual-tDCS compared to sham-tDCS. This effect was driven by more pronounced deactivation of frontal regions during the motor-speech task, while activity during word-retrieval trials was unaffected by the stimulation. No effects were found in M1 and secondary motor regions. Our results show that tDCS administered to M1 can improve word-retrieval in healthy individuals, thereby providing a rationale to explore whether M1-tDCS may offer a novel approach to improve language functions in aphasia. fMRI revealed neural facilitation specifically during motor speech trials, which may have reduced switching costs between the overlapping neural systems for lexical retrieval and speech processing, thereby resulting in improved…

  4. Transcranial direct current stimulation of the primary motor cortex improves word-retrieval in older adults.

    Science.gov (United States)

    Meinzer, Marcus; Lindenberg, Robert; Sieg, Mira M; Nachtigall, Laura; Ulm, Lena; Flöel, Agnes

    2014-01-01

    Language facilitation by transcranial direct current stimulation (tDCS) in healthy individuals has generated hope that tDCS may also allow improving language impairment after stroke (aphasia). However, current stimulation protocols have yielded variable results and may require identification of residual language cortex using functional magnetic resonance imaging (fMRI), which complicates incorporation into clinical practice. Based on previous behavioral studies that demonstrated improved language processing by motor system pre-activation, the present study assessed whether tDCS administered to the primary motor cortex (M1) can enhance language functions. This proof-of-concept study employed a sham-tDCS controlled, cross-over, within-subject design and assessed the impact of unilateral excitatory (anodal) and bihemispheric (dual) tDCS in 18 healthy older adults during semantic word-retrieval and motor speech tasks. Simultaneous fMRI scrutinized the neural mechanisms underlying tDCS effects. Both active tDCS conditions significantly improved word-retrieval compared to sham-tDCS. The direct comparison of activity elicited by word-retrieval vs. motor-speech trials revealed bilateral frontal activity increases during both anodal- and dual-tDCS compared to sham-tDCS. This effect was driven by more pronounced deactivation of frontal regions during the motor-speech task, while activity during word-retrieval trials was unaffected by the stimulation. No effects were found in M1 and secondary motor regions. Our results show that tDCS administered to M1 can improve word-retrieval in healthy individuals, thereby providing a rationale to explore whether M1-tDCS may offer a novel approach to improve language functions in aphasia. Functional magnetic resonance imaging revealed neural facilitation specifically during motor speech trials, which may have reduced switching costs between the overlapping neural systems for lexical retrieval and speech processing, thereby resulting in…

  5. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  6. Motor unit activity after eccentric exercise and muscle damage in humans.

    Science.gov (United States)

    Semmler, J G

    2014-04-01

    It is well known that unaccustomed eccentric exercise leads to muscle damage and soreness, which can produce long-lasting effects on muscle function. How this muscle damage influences muscle activation is poorly understood. The purpose of this brief review is to highlight the effect of eccentric exercise on the activation of muscle by the nervous system, by examining the change in motor unit activity obtained from surface electromyography (EMG) and intramuscular recordings. Previous research shows that eccentric exercise produces unusual changes in the EMG–force relation that influences motor performance during isometric, shortening and lengthening muscle contractions and during fatiguing tasks. When examining the effect of eccentric exercise at the single motor unit level, there are substantial changes in recruitment thresholds, discharge rates, motor unit conduction velocities and synchronization, which can last for up to 1 week after eccentric exercise. Examining the time course of these changes suggests that the increased submaximal EMG after eccentric exercise most likely occurs through a decrease in motor unit conduction velocity and an increase in motor unit activity related to antagonist muscle coactivation and low-frequency fatigue. Furthermore, there is a commonly held view that eccentric exercise produces preferential damage to high-threshold motor units, but the evidence for this in humans is limited. Further research is needed to establish whether there is preferential damage to high-threshold motor units after eccentric exercise in humans, preferably by linking changes in motor unit activity with estimates of motor unit size using selective intramuscular recording techniques.

  7. Cortical Motor Organization, Mirror Neurons, and Embodied Language: An Evolutionary Perspective

    Directory of Open Access Journals (Sweden)

    Leonardo Fogassi

    2012-11-01

    Full Text Available The recent conceptual achievement that the cortical motor system plays a crucial role not only in motor control but also in higher cognitive functions has given a new perspective on the involvement of motor cortex in language perception and production. In particular, there is evidence that the matching mechanism based on mirror neurons can be involved in both phonological recognition and retrieval of meaning, especially for action word categories, thus suggesting a contribution of an action–perception mechanism to the automatic comprehension of semantics. Furthermore, a comparison of the anatomo-functional properties of the frontal motor cortex among different primates and their communicative modalities indicates that the combination of the voluntary control of the gestural communication systems and of the vocal apparatus has been the critical factor in the transition from gesture-based communication to a predominantly speech-based system. Finally, considering that the monkey and human premotor-parietal motor system, plus the prefrontal cortex, are involved in the sequential motor organization of actions and in the hierarchical combination of motor elements, we propose that elements of such motor organization have been exploited in other domains, including some aspects of the syntactic structure of language.

  8. Oral motor functions, speech and communication before a definitive diagnosis of amyotrophic lateral sclerosis.

    Science.gov (United States)

    Makkonen, Tanja; Korpijaakko-Huuhka, Anna-Maija; Ruottinen, Hanna; Puhto, Riitta; Hollo, Kirsi; Ylinen, Aarne; Palmio, Johanna

    2016-01-01

    The aim of this study was to explore the cranial nerve symptoms, speech disorders and communicative effectiveness of Finnish patients with diagnosed or possible amyotrophic lateral sclerosis (ALS) at their first assessment by a speech-language pathologist. The group studied consisted of 30 participants who had clinical signs of bulbar deterioration at the beginning of the study. They underwent a thorough clinical speech and communication examination. The cranial nerve symptoms and ability to communicate were compared in 14 participants with probable or definitive ALS and in 16 participants with suspected or possible ALS. The initial type of ALS was also assessed. More deterioration in soft palate function was found in participants with possible ALS than with diagnosed ALS. Likewise, a slower speech rate combined with more severe dysarthria was observed in possible ALS. In both groups, there was some deterioration in communicative effectiveness. In the possible ALS group, the diagnostic delay was longer and speech therapy intervention began later. The participants with ALS showed multidimensional decline in communication at their first visit to the speech-language pathologist, but impairments and activity limitations were more severe in suspected or possible ALS. The majority of persons with bulbar-onset ALS in this study were in the latter diagnostic group. This suggests that they are more susceptible to delayed diagnosis and delayed speech therapy assessment. It is important to start speech therapy intervention during the diagnostic process, particularly if the person already shows bulbar symptoms. Copyright © 2016. Published by Elsevier Inc.

  9. Speech production gains following constraint-induced movement therapy in children with hemiparesis.

    Science.gov (United States)

    Allison, Kristen M; Reidy, Teressa Garcia; Boyle, Mary; Naber, Erin; Carney, Joan; Pidcock, Frank S

    2017-01-01

    The purpose of this study was to investigate changes in speech skills of children who have hemiparesis and speech impairment after participation in a constraint-induced movement therapy (CIMT) program. While case studies have reported collateral speech gains following CIMT, the effect of CIMT on speech production has not previously been directly investigated to the knowledge of these investigators. Eighteen children with hemiparesis and co-occurring speech impairment participated in a 21-day clinical CIMT program. The Goldman-Fristoe Test of Articulation-2 (GFTA-2) was used to assess children's articulation of speech sounds before and after the intervention. Changes in percent of consonants correct (PCC) on the GFTA-2 were used as a measure of change in speech production. Children made significant gains in PCC following CIMT. Gains were similar in children with left- and right-sided hemiparesis, and across age groups. This study reports significant collateral gains in speech production following CIMT and suggests that the benefits of CIMT may also extend to speech motor domains.
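The PCC measure used in this record can be illustrated with a small sketch. The scoring function, consonant set, and phoneme strings below are all hypothetical (position-aligned ASCII stand-ins), not the GFTA-2 scoring protocol:

```python
def percent_consonants_correct(target, produced, consonants):
    """Percent of target consonants matched in the production.

    Simplified, hypothetical scoring: assumes position-aligned
    phoneme lists; real GFTA-2 scoring uses trained transcription.
    """
    scored = [(t, p) for t, p in zip(target, produced) if t in consonants]
    if not scored:
        return 0.0
    correct = sum(1 for t, p in scored if t == p)
    return 100.0 * correct / len(scored)

# Made-up consonant inventory for illustration.
CONSONANTS = {"p", "t", "k", "b", "d", "g", "m", "n", "s", "z", "f", "v", "l"}

target = ["b", "@", "n", "{", "n", "@"]    # target "banana" (ASCII stand-ins)
produced = ["b", "@", "n", "{", "d", "@"]  # one /n/ -> [d] substitution
print(round(percent_consonants_correct(target, produced, CONSONANTS), 1))
```

Here two of the three target consonants are matched, giving a PCC of 66.7.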

  10. Physiological markers of motor inhibition during human behavior

    Science.gov (United States)

    Duque, Julie; Greenhouse, Ian; Labruna, Ludovica; Ivry, Richard B.

    2017-01-01

    Transcranial magnetic stimulation (TMS) studies in humans have shown that many behaviors engage processes that suppress excitability within the corticospinal tract. Inhibition of the motor output pathway has been extensively studied in the context of action stopping, where a planned movement needs to be abruptly aborted. Recent TMS work has also revealed markers of motor inhibition during the preparation of movement. Here, we review the evidence for motor inhibition during action stopping and action preparation, focusing on studies that have used TMS to monitor changes in the excitability of the corticospinal pathway. We discuss how these physiological results have motivated theoretical models of how the brain selects actions, regulates movement initiation and execution, and switches from one state to another. PMID:28341235

  11. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing beyond nonlinear techniques. It shows how such processing can exploit heuristic and psychological models of human interaction to implement socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease, attempts to improve the effectiveness and performa...

  12. Age-related changes in the functional neuroanatomy of overt speech production.

    Science.gov (United States)

    Sörös, Peter; Bose, Arpita; Sokoloff, Lisa Guttman; Graham, Simon J; Stuss, Donald T

    2011-08-01

    Alterations of existing neural networks during healthy aging, resulting in behavioral deficits and changes in brain activity, have been described for cognitive, motor, and sensory functions. To investigate age-related changes in the neural circuitry underlying overt non-lexical speech production, functional MRI was performed in 14 healthy younger (21-32 years) and 14 healthy older individuals (62-84 years). The experimental task involved the acoustically cued overt production of the vowel /a/ and the polysyllabic utterance /pataka/. In younger and older individuals, overt speech production was associated with the activation of a widespread articulo-phonological network, including the primary motor cortex, the supplementary motor area, the cingulate motor areas, and the posterior superior temporal cortex, similar in the /a/ and /pataka/ condition. An analysis of variance with the factors age and condition revealed a significant main effect of age. Irrespective of the experimental condition, significantly greater activation was found in the bilateral posterior superior temporal cortex, the posterior temporal plane, and the transverse temporal gyri in younger compared to older individuals. Significantly greater activation was found in the bilateral middle temporal gyri, medial frontal gyri, middle frontal gyri, and inferior frontal gyri in older vs. younger individuals. The analysis of variance did not reveal a significant main effect of condition and no significant interaction of age and condition. These results suggest a complex reorganization of neural networks dedicated to the production of speech during healthy aging. Copyright © 2009 Elsevier Inc. All rights reserved.

  13. Denouncing Divinity: Blasphemy, Human Rights, and the Struggle of Political Leaders to defend Freedom of Speech in the Case of Innocence of Muslims

    Directory of Open Access Journals (Sweden)

    Tom Herrenberg

    2015-01-01

    Full Text Available This article is about freedom of speech and the political responses to the blasphemous Innocence of Muslims video, which sparked international controversy in the fall of 2012. Politicians from multiple corners of the world spoke out on freedom of speech and its relation to blasphemy. Whereas one might expect that those politicians would abide by international human rights law, many of them issued statements that unequivocally undermined the principle of free speech enshrined in those human rights instruments. This article discusses a number of these political statements against the background of human rights standards.

  14. ACOUSTIC SPEECH RECOGNITION FOR MARATHI LANGUAGE USING SPHINX

    Directory of Open Access Journals (Sweden)

    Aman Ankit

    2016-09-01

    Full Text Available Speech recognition, or speech-to-text processing, is the process of recognizing human speech by a computer and converting it into text. In speech recognition, transcripts are created by pairing recordings of speech with their text transcriptions. Speech-based applications that include Natural Language Processing (NLP) techniques are popular and an active area of research. Input to such applications is in natural language, and output is obtained in natural language. Speech recognition mostly revolves around three approaches, namely the acoustic-phonetic approach, the pattern recognition approach and the artificial intelligence approach. Creation of an acoustic model requires a large database of speech and training algorithms. The output of an ASR system is the recognition and translation of spoken language into text by computers and computerized devices. ASR today finds enormous application in tasks that require human-machine interfaces, such as voice dialing. Our key contribution in this paper is to create corpora for the Marathi language and explore the use of the Sphinx engine for automatic speech recognition.
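Corpus creation for Sphinx training involves, among other files, a transcription file that pairs each utterance with its file id. A minimal sketch of generating such lines follows; the Marathi utterances are hypothetical and romanized here purely for illustration:

```python
def sphinx_transcript_line(text, utt_id):
    """One line of a CMU Sphinx training transcript: the utterance
    wrapped in <s>...</s> markers, followed by its file id."""
    return "<s> {} </s> ({})".format(text, utt_id)

# Hypothetical romanized Marathi utterances and file ids.
utterances = [("namaskar", "mr_0001"), ("dhanyavad", "mr_0002")]
for text, utt_id in utterances:
    print(sphinx_transcript_line(text, utt_id))
```

Each file id must match a corresponding entry in the fileids list and an audio recording on disk; the acoustic model is then trained from these aligned pairs.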

  15. Attention mechanisms and the mosaic evolution of speech

    Directory of Open Access Journals (Sweden)

    Pedro Tiago Martins

    2014-12-01

    Full Text Available There is still no categorical answer for why humans, and no other species, have speech, or why speech is the way it is. Several purely anatomical arguments have been put forward, but they have been shown to be false, biologically implausible, or of limited scope. This perspective paper supports the idea that evolutionary theories of speech could benefit from a focus on the cognitive mechanisms that make speech possible, for which antecedents in evolutionary history and brain correlates can be found. This type of approach is part of a very recent, but rapidly growing tradition, which has provided crucial insights on the nature of human speech by focusing on the biological bases of vocal learning. Here, we call attention to what might be an important ingredient for speech. We contend that a general mechanism of attention, which manifests itself not only in the visual but also in the auditory (and possibly other) modalities, might be one of the key pieces of human speech, in addition to the mechanisms underlying vocal learning and the pairing of facial gestures with vocalic units.

  16. Two is better than one: Physical interactions improve motor performance in humans

    Science.gov (United States)

    Ganesh, G.; Takagi, A.; Osu, R.; Yoshioka, T.; Kawato, M.; Burdet, E.

    2014-01-01

    How do physical interactions with others change our own motor behavior? Utilizing a novel motor learning paradigm in which the hands of two individuals are physically connected without their conscious awareness, we investigated how the interaction forces from a partner adapt the motor behavior in physically interacting humans. We observed the motor adaptations during physical interactions to be mutually beneficial such that both the worse and better of the interacting partners improve motor performance during and after interactive practice. We show that these benefits cannot be explained by multi-sensory integration by an individual, but require physical interaction with a reactive partner. Furthermore, the benefits are determined by both the interacting partner's performance and the similarity of the partner's behavior to one's own. Our results demonstrate the fundamental neural processes underlying human physical interactions and suggest advantages of interactive paradigms for sports training and physical rehabilitation.

  17. Speech recognition technology: an outlook for human-to-machine interaction.

    Science.gov (United States)

    Erdel, T; Crooks, S

    2000-01-01

    Speech recognition, as an enabling technology in healthcare-systems computing, is a topic that has been discussed for quite some time, but is just now coming to fruition. Traditionally, speech-recognition software has been constrained by hardware, but improved processors and increased memory capacities are starting to remove some of these limitations. With these barriers removed, companies that create software for the healthcare setting have the opportunity to write more successful applications. Among the criticisms of speech-recognition applications are the high rates of error and steep training curves. However, even in the face of such negative perceptions, there remain significant opportunities for speech recognition to allow healthcare providers and, more specifically, physicians, to work more efficiently and ultimately spend more time with their patients and less time completing necessary documentation. This article will identify opportunities for inclusion of speech-recognition technology in the healthcare setting and examine major categories of speech-recognition software--continuous speech recognition, command and control, and text-to-speech. We will discuss the advantages and disadvantages of each area, the limitations of the software today, and how future trends might affect them.

  18. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans [version 2; referees: 1 approved, 2 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Oren Poliva

    2016-01-01

    Full Text Available In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection of and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present-day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.

  19. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans [version 3; referees: 1 approved, 2 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Oren Poliva

    2017-09-01

    Full Text Available In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection of and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present-day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.

  20. Cognitive aspects of human motor activity: Contribution of right hemisphere and cerebellum

    Directory of Open Access Journals (Sweden)

    Sedov A. S.

    2017-09-01

    Full Text Available Background. Concepts of movement and action are not completely synonymous, but what distinguishes one from the other? Movement may be defined as a stimulus-driven motor act, while action implies realization of a specific motor goal, essential for cognitively driven behavior. Although recent clinical and neuroimaging studies have revealed some areas of the brain that mediate cognitive aspects of human motor behavior, the identification of the basic neural circuit underlying the interaction between cognitive and motor functions remains a challenge for neurophysiology and psychology. Objective. In the current study, we used functional magnetic resonance imaging (fMRI) to investigate elementary cognitive aspects of human motor behavior. Design. Twenty healthy right-handed volunteers were asked to perform stimulus-driven and goal-directed movements by clenching the right hand into a fist (7 times). The cognitive component lay in anticipation of simple stimulus signals. In order to disentangle the purely motor component of stimulus-driven movements, we used the event-related (ER) paradigm. FMRI was performed on a 3 Tesla Siemens Magnetom Verio MR scanner with a 32-channel head coil. Results. We have shown differences in the localization of brain activity depending on the involvement of cognitive functions. These differences testify to the role of the cerebellum and the right hemisphere in motor cognition. In particular, our results suggest that right associative cortical areas, together with the right posterolateral cerebellum (Crus I and lobule VI) and basal ganglia, define cognitive control of motor activity, promoting a shift from a stimulus-driven to a goal-directed mode. Conclusion. These results, along with recent data from research on cerebro-cerebellar circuitry, redefine the scope of tasks for exploring the contribution of the cerebellum to diverse aspects of human motor behavior and cognition.

  1. Categorical speech processing in Broca's area: an fMRI study using multivariate pattern-based analysis.

    Science.gov (United States)

    Lee, Yune-Sang; Turkeltaub, Peter; Granger, Richard; Raizada, Rajeev D S

    2012-03-14

    Although much effort has been directed toward understanding the neural basis of speech processing, the neural processes involved in the categorical perception of speech have been relatively less studied, and many questions remain open. In this functional magnetic resonance imaging (fMRI) study, we probed the cortical regions mediating categorical speech perception using an advanced brain-mapping technique, whole-brain multivariate pattern-based analysis (MVPA). Normal healthy human subjects (native English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-/da/ continuum. Outside of the scanner, individuals' own category boundaries were measured to divide the fMRI data into /ba/ and /da/ conditions per subject. The whole-brain MVPA revealed that Broca's area and the left pre-supplementary motor area evoked distinct neural activity patterns between the two perceptual categories (/ba/ vs /da/). Broca's area was also found when the same analysis was applied to another dataset (Raizada and Poldrack, 2007), which previously yielded the supramarginal gyrus using a univariate adaptation-fMRI paradigm. The consistent MVPA findings from two independent datasets strongly indicate that Broca's area participates in categorical speech perception, with a possible role of translating speech signals into articulatory codes. The difference in results between univariate and multivariate pattern-based analyses of the same data suggests that processes in different cortical areas along the dorsal speech perception stream are distributed on different spatial scales.
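The logic of pattern-based (multivariate) analysis can be sketched with a toy nearest-centroid classifier over voxel activity patterns. This is an illustration of the general MVPA idea only, not the pipeline used in the study; the two-voxel "trial" data are fabricated:

```python
def centroid(patterns):
    """Mean activity pattern (per-voxel average across trials)."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(pattern, cen_ba, cen_da):
    """Nearest-centroid label for a held-out trial pattern."""
    return "ba" if dist2(pattern, cen_ba) < dist2(pattern, cen_da) else "da"

# Fabricated 2-voxel patterns: /ba/ trials near (+1, -1), /da/ near (-1, +1).
ba_trials = [[1.1, -0.9], [0.9, -1.1], [1.0, -1.0]]
da_trials = [[-1.0, 1.0], [-0.9, 1.1], [-1.1, 0.9]]
cen_ba, cen_da = centroid(ba_trials), centroid(da_trials)

print(classify([0.8, -1.2], cen_ba, cen_da))  # a pattern resembling /ba/
```

If held-out patterns are classified above chance, the region's distributed activity carries category information even when no single voxel distinguishes the conditions on its own, which is the advantage of MVPA over univariate analysis.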

  2. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    Science.gov (United States)

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of the input image and Gabor wavelet coefficient of the illumination ratio image, are fused to reduce the influence of lighting and person dependence for the construction of the online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech-synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  3. Lexical and phonological variability in preschool children with speech sound disorder.

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A; Lewis, Kerry E

    2014-02-01

    The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.
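The analyses in record 3 rest on Pearson product-moment correlations between pairs of child-level measures. A minimal stdlib sketch follows; the per-child scores are hypothetical, since the study's raw data are not given here:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-child scores: word variability vs. receptive vocabulary.
word_var = [0.42, 0.55, 0.31, 0.60, 0.48, 0.37]
vocab = [101, 92, 110, 88, 95, 104]
r = pearson_r(word_var, vocab)
print(round(r, 2))
```

A negative r, as in this toy data, would mirror the study's finding that higher word variability goes with smaller receptive vocabularies.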

  4. Test of a motor theory of long-term auditory memory.

    Science.gov (United States)

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-05-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75-80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve.
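The near-chance recognition of reversed words can be put in perspective with a simple normal-approximation test of observed accuracy against chance. The trial count below is hypothetical; the abstract reports percentages only:

```python
import math

def z_vs_chance(correct, n, p0=0.5):
    """Normal-approximation z statistic for accuracy vs. chance p0."""
    phat = correct / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (phat - p0) / se

# Hypothetical: 23/40 correct (57.5%) against 50% chance.
z = z_vs_chance(23, 40)
print(round(z, 2))
```

With a z below the conventional 1.96 cutoff, such a score would not be reliably above chance, consistent with the abstract's description of reversed-word recognition as "not far above chance."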

  5. Spasmodic dysphonia: a laryngeal control disorder specific to speech.

    Science.gov (United States)

    Ludlow, Christy L

    2011-01-19

    Spasmodic dysphonia (SD) is a rare neurological disorder that emerges in middle age, is usually sporadic, and affects intrinsic laryngeal muscle control only during speech. Spasmodic bursts in particular laryngeal muscles disrupt voluntary control during vowel sounds in adductor SD and interfere with voice onset after voiceless consonants in abductor SD. Little is known about its origins; it is classified as a focal dystonia secondary to an unknown neurobiological mechanism that produces a chronic abnormality of laryngeal motor neuron regulation during speech. It develops primarily in females and does not interfere with breathing, crying, laughter, and shouting. Recent postmortem studies have implicated the accumulation of clusters in the parenchyma and perivascular regions with inflammatory changes in the brainstem in one to two cases. A few cases with single mutations in THAP1, a gene involved in transcription regulation, suggest that a weak genetic predisposition may contribute to mechanisms causing a nonprogressive abnormality in laryngeal motor neuron control for speech but not for vocal emotional expression. Research is needed to address the basic cellular and proteomic mechanisms that produce this disorder to provide intervention that could target the pathogenesis of the disorder rather than only providing temporary symptom relief.

  6. A Noninvasive Imaging Approach to Understanding Speech Changes following Deep Brain Stimulation in Parkinson's Disease

    Science.gov (United States)

    Narayana, Shalini; Jacks, Adam; Robin, Donald A.; Poizner, Howard; Zhang, Wei; Franklin, Crystal; Liotti, Mario; Vogel, Deanie; Fox, Peter T.

    2009-01-01

    Purpose: To explore the use of noninvasive functional imaging and "virtual" lesion techniques to study the neural mechanisms underlying motor speech disorders in Parkinson's disease. Here, we report the use of positron emission tomography (PET) and transcranial magnetic stimulation (TMS) to explain exacerbated speech impairment following…

  7. Atypical speech lateralization in adults with developmental coordination disorder demonstrated using functional transcranial Doppler ultrasound

    OpenAIRE

    Hodgson, Jessica C.; Hudson, John M.

    2016-01-01

    Research using clinical populations to explore the relationship between hemispheric speech lateralization and handedness has focused on individuals with speech and language disorders, such as dyslexia or specific language impairment (SLI). Such work reveals atypical patterns of cerebral lateralization and handedness in these groups compared to controls. There are few studies that examine this relationship in people with motor coordination impairments but without speech or reading deficits, wh...

  8. Bidirectional Interference between Speech and Nonspeech Tasks in Younger, Middle-Aged, and Older Adults

    Science.gov (United States)

    Bailey, Dallin J.; Dromey, Christopher

    2015-01-01

    Purpose: The purpose of this study was to examine divided attention over a large age range by looking at the effects of 3 nonspeech tasks on concurrent speech motor performance. The nonspeech tasks were designed to facilitate measurement of bidirectional interference, allowing examination of their sensitivity to speech activity. A cross-sectional…

  9. Prevalence of Speech Disorders in Arak Primary School Students, 2014-2015

    Directory of Open Access Journals (Sweden)

    Abdoreza Yavari

    2016-09-01

    Full Text Available Abstract Background: Speech disorders may cause lasting psychosocial damage to a child's speech and language development. Voice, speech sound production, and fluency disorders are speech disorders that may result from delay or impairment of the speech motor control mechanism, central nervous system disorders, improper language stimulation, or voice abuse. Materials and Methods: This study examined the prevalence of speech disorders in 1393 students in grades 1 to 6 of primary schools in Arak. After collecting continuous speech samples, picture descriptions, passage reading, and a phonetic test, we recorded pathological signs of stuttering, articulation disorders, and voice disorders on a special sheet. Results: The prevalence of articulation, voice, and stuttering disorders was 8%, 3.5%, and 1%, respectively, and the overall prevalence of speech disorders was 11.9%. The prevalence of speech disorders decreased with increasing grade. Of the primary school students in Arak, 12.2% of boys and 11.7% of girls had speech disorders. Conclusion: The prevalence of speech disorders among primary school students in Arak is similar to that in Kermanshah, but smaller than in many comparable studies in Iran. It seems that racial and cultural diversity has some effect on the prevalence of speech disorders in Arak city.

  10. Is Birdsong More Like Speech or Music?

    Science.gov (United States)

    Shannon, Robert V

    2016-04-01

    Music and speech share many acoustic cues but not all are equally important. For example, harmonic pitch is essential for music but not for speech. When birds communicate is their song more like speech or music? A new study contrasting pitch and spectral patterns shows that birds perceive their song more like humans perceive speech. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage that the speech signal gets corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Thus, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
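
As a minimal illustration of the waveform-coding side of this distinction, the sketch below implements μ-law companding, the logarithmic amplitude compression used in G.711 telephony. The float-in/float-out interface and function names are ours, chosen for clarity rather than taken from the record:

```python
import math

MU = 255  # mu-law parameter used in G.711 telephony (North America/Japan)

def mu_law_encode(x: float, mu: int = MU) -> float:
    """Compress a sample x in [-1, 1] with the mu-law companding curve."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_decode(y: float, mu: int = MU) -> float:
    """Invert the mu-law curve, recovering the original amplitude."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

# Companding expands quiet samples before quantization, so low-level speech
# keeps more resolution than it would under uniform quantization.
samples = [0.0, 0.01, -0.1, 0.5, -1.0]
encoded = [mu_law_encode(s) for s in samples]
decoded = [mu_law_decode(y) for y in encoded]
```

In a real codec the encoded value would be quantized to 8 bits before transmission; the round trip above is lossless only because quantization is omitted.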

  12. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...-Speech Services for Individuals with Hearing and Speech Disabilities, Report and Order (Order), document...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  13. Detection of target phonemes in spontaneous and read speech

    NARCIS (Netherlands)

    Mehta, G.; Cutler, A.

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ

  14. Speech and neurology-chemical impairment correlates

    Science.gov (United States)

    Hayre, Harb S.

    2002-05-01

    Speech correlates of alcohol/drug impairment and their neurological basis are presented, with suggestions for further research on impairment from poly-drug/medicine/inhalant/chew use/abuse and on the prediagnosis of many neuro- and endocrine-related disorders. Nerve cells all over the body detect chemical entry by smoking, injection, drinking, chewing, or skin absorption, and transmit neurosignals to their corresponding cerebral subsystems, which in turn affect the speech centers (Broca's and Wernicke's areas) and the motor cortex. For instance, gustatory cells in the mouth, cranial and spinal nerve cells in the skin, and cilia/olfactory neurons in the nose are the intake-sensing nerve cells. Alcohol depression and brain cell damage were detected from telephone speech using IMPAIRLYZER-TM, and the results of these studies were presented at the 1996 ASA meeting in Indianapolis and the 2001 German Acoustical Society (DEGA) conference in Hamburg, Germany, respectively. Speech-based chemical impairment measure results were presented at the 2001 meeting of the ASA in Chicago. New data on neurotolerance-based chemical impairment for alcohol, drugs, and medicine shall be presented and shown not to fully support the NIDA-SAMHSA drug and alcohol thresholds used in the drug testing domain.

  15. Relationship between oral motor dysfunction and oral bacteria in bedridden elderly.

    Science.gov (United States)

    Tada, Akio; Shiiba, Masashi; Yokoe, Hidetaka; Hanada, Nobuhiro; Tanzawa, Hideki

    2004-08-01

    The purpose of this study was to analyze the relationship between oral bacterial colonization and oral motor dysfunction. Oral motor dysfunction (swallowing and speech disorders) and detection of oral bacterial species from dental plaque were investigated and statistically analyzed in 55 elderly persons who had remained hospitalized for more than 3 months. The detection rates of methicillin-resistant Staphylococcus aureus (MRSA), Pseudomonas aeruginosa, Streptococcus agalactiae, and Stenotrophomonas maltophilia were significantly higher in subjects with a swallowing disorder than in those without. A similar result was found with regard to the presence of a speech disorder. About half of the subjects who had oral motor dysfunction and hypoalbuminemia had colonization by MRSA and/or Pseudomonas aeruginosa. These results suggest that the combination of oral motor dysfunction and hypoalbuminemia elevates the risk of colonization by opportunistic microorganisms in the oral cavity of elderly patients hospitalized over the long term.

  16. Social Robotics in Therapy of Apraxia of Speech

    Directory of Open Access Journals (Sweden)

    José Carlos Castillo

    2018-01-01

    Full Text Available Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies, in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia of speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the user's mouth pose and, on top of that, our social robot autonomously performs the different steps of the therapy using multimodal interaction.

  17. Whole-exome sequencing supports genetic heterogeneity in childhood apraxia of speech

    OpenAIRE

    Worthey, Elizabeth A; Raca, Gordana; Laffin, Jennifer J; Wilk, Brandon M; Harris, Jeremy M; Jakielski, Kathy J; Dimmock, David P; Strand, Edythe A; Shriberg, Lawrence D

    2013-01-01

    Background Childhood apraxia of speech (CAS) is a rare, severe, persistent pediatric motor speech disorder with associated deficits in sensorimotor, cognitive, language, learning and affective processes. Among other neurogenetic origins, CAS is the disorder segregating with a mutation in FOXP2 in a widely studied, multigenerational London family. We report the first whole-exome sequencing (WES) findings from a cohort of 10 unrelated participants, ages 3 to 19 years, with well-characterized CA...

  18. Role of the motor system in language knowledge.

    Science.gov (United States)

    Berent, Iris; Brem, Anna-Katharine; Zhao, Xu; Seligson, Erica; Pan, Hong; Epstein, Jane; Stern, Emily; Galaburda, Albert M; Pascual-Leone, Alvaro

    2015-02-17

    All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does their dislike reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif ≻ bnif ≻ bdif ≻ lbif) while undergoing transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage more strongly motor brain areas; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants' global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and they were least impaired by TMS. Results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract.

  19. Role of the motor system in language knowledge

    Science.gov (United States)

    Berent, Iris; Brem, Anna-Katharine; Zhao, Xu; Seligson, Erica; Pan, Hong; Epstein, Jane; Stern, Emily; Galaburda, Albert M.; Pascual-Leone, Alvaro

    2015-01-01

    All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does their dislike reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif≻bnif≻bdif≻lbif) while undergoing transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage more strongly motor brain areas; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants’ global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and they were least impaired by TMS. Results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract. PMID:25646465

  20. Levodopa effects on hand and speech movements in patients with Parkinson's disease: a FMRI study.

    Directory of Open Access Journals (Sweden)

    Audrey Maillet

    Full Text Available Levodopa (L-dopa) effects on the cardinal and axial symptoms of Parkinson's disease (PD) differ greatly, leading to therapeutic challenges for managing the disabilities in this patient population. In this context, we studied the cerebral networks associated with the production of a unilateral hand movement, speech production, and a task combining both, in 12 individuals with PD, both off and on L-dopa. Unilateral hand movements in the off-medication state elicited brain activations in motor regions (primary motor cortex, supplementary motor area, premotor cortex, cerebellum), as well as additional areas (anterior cingulate, putamen, associative parietal areas); following L-dopa administration, the brain activation profile was globally reduced, highlighting activations in the parietal and posterior cingulate cortices. For the speech production task, brain activation patterns were similar with and without medication, including the orofacial primary motor cortex (M1), the primary somatosensory cortex and the cerebellar hemispheres bilaterally, as well as the left premotor, anterior cingulate and supramarginal cortices. For the combined task off L-dopa, the cerebral activation profile was restricted to the right cerebellum (hand movement), reflecting the difficulty of performing two movements simultaneously in PD. Under L-dopa, the brain activation profile of the combined task involved a larger pattern, including additional fronto-parietal activations, without reaching the sum of the areas activated during the simple hand and speech tasks separately. Our results question both the role of the basal ganglia system in speech production and the modulation of task-dependent cerebral networks by dopaminergic treatment.

  1. Synchronization of lower limb motor unit activity during walking in human subjects

    DEFF Research Database (Denmark)

    Hansen, Naja L; Hansen, S; Christensen, L. O. D.

    2001-01-01

    lateralis and medialis of quadriceps), but not or rarely for paired recordings from ankle and knee muscles. The data demonstrate that human motor units within a muscle as well as synergistic muscles acting on the same joint receive a common synaptic drive during human gait. It is speculated that the common...... drive responsible for the motor unit synchronization during gait may be similar to that responsible for short-term synchronization during tonic voluntary contraction....

  2. An investigation and comparison of speech recognition software for determining if bird song recordings contain legible human voices

    Directory of Open Access Journals (Sweden)

    Tim D. Hunt

    Full Text Available The purpose of this work was to test the effectiveness of using readily available speech recognition API services to determine whether recordings of bird song had inadvertently captured human voices. A mobile phone was used to record a human speaking at increasing distances from the phone in an outdoor setting with bird song occurring in the background. One of the services was trained with sample recordings, and each service was compared on its ability to return recognized words. The services from Google and IBM performed similarly, and the Microsoft service, which allowed training, performed slightly better. However, all three services failed to perform at a level that would enable recordings with recognizable human speech to be deleted in order to maintain full privacy protection.
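
The privacy-screening step the study implies can be sketched as a small post-processing function: given the transcripts returned by any of the speech-to-text services, flag the recordings that contain recognizable speech so they can be deleted. The function name and the word-count threshold are hypothetical illustrations, not details from the paper:

```python
def flag_recordings(transcripts, min_words=2):
    """Return indices of recordings whose transcript contains at least
    min_words recognized words, i.e. candidates for deletion on privacy
    grounds. An empty or near-empty transcript means the STT service
    found no legible human speech."""
    flagged = []
    for i, text in enumerate(transcripts):
        words = [w for w in text.split() if w.strip()]
        if len(words) >= min_words:
            flagged.append(i)
    return flagged
```

In practice the threshold would need tuning, since (as the study found) services may both miss distant speech and hallucinate words from bird song.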

  3. A Comparative Study on Motor Skills in 5-Year-Old Children with Phonological and Phonetic Disorders

    Directory of Open Access Journals (Sweden)

    Fatemeh Hasanati

    2011-06-01

    Full Text Available Background and Aim: Speech, as a motor phenomenon, requires repetitive and rapid action of the articulatory organs performing extremely fine movements. Practice in motor skills facilitates treatment progress in children with phonological disorders. The purpose of this study was to compare motor skills in 5-year-old children with phonological and phonetic disorders. Methods: Thirty-two children aged 5 years, 16 with phonemic speech sound disorders and 16 with difficulty at the phonetic level, participated in this study. The TOLD test was administered to assess the children's linguistic skills. A phonetic test, the Wepman test, diadochokinesis, and an oral assessment were used to distinguish between phonological and phonetic disorders. The children were also evaluated with the Oseretsky motor development scale. Results: Mean movement skill scores differed significantly between the two groups (p=0.006), and children with phonetic disorders scored significantly higher on all parts of the test. Conclusions: The findings of this study support the idea that speech sound disorders are frequently associated with motor problems, and that the type of articulation disorder affects motor performance differently. Phonological disorders seem to have more impact on motor performance than phonetic disorders. The results confirm the need to pay more attention to the motor skills of children with articulation disorders.

  4. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red
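
As a toy example of the single-microphone noise-reduction family the book surveys, here is a simple frame-energy noise gate in pure Python. It is a sketch under the assumption that the opening frames of the recording contain noise only, and it is not any particular method from the book:

```python
def noise_gate(signal, frame_len=160, noise_frames=5, factor=2.0, atten=0.1):
    """Frame-based noise gate: frames whose mean energy stays below
    factor * (estimated noise floor) are scaled down by `atten`.
    Assumes the first `noise_frames` frames contain noise only."""
    # Split the signal into consecutive frames and compute per-frame energy.
    frames = [signal[i:i + frame_len] for i in range(0, len(signal), frame_len)]
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    # Estimate the noise floor from the leading noise-only frames.
    lead = energies[:noise_frames]
    noise_floor = sum(lead) / len(lead)
    out = []
    for frame, e in zip(frames, energies):
        gain = 1.0 if e > factor * noise_floor else atten
        out.extend(gain * s for s in frame)
    return out
```

Real enhancement systems work in the frequency domain (e.g., spectral subtraction or Wiener filtering) and adapt the noise estimate over time; the gate above only illustrates the core idea of attenuating noise-dominated segments.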

  5. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension.

    Science.gov (United States)

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-05-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not, and completed a cued-recall task after every video. When gestures semantically disambiguated degraded speech, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), the medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations support general unification, integration and lexical access processes during online language comprehension, as well as simulation of, and increased visual attention to, manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  6. Dependence of the paired motor unit analysis on motor unit discharge characteristics in the human tibialis anterior muscle

    Science.gov (United States)

    Stephenson, Jennifer L.; Maluf, Katrina S.

    2011-01-01

    The paired motor unit analysis provides in vivo estimates of the magnitude of persistent inward currents (PIC) in human motoneurons by quantifying changes in the firing rate (ΔF) of an earlier recruited (reference) motor unit at the time of recruitment and derecruitment of a later recruited (test) motor unit. This study assessed the variability of ΔF estimates, and quantified the dependence of ΔF on the discharge characteristics of the motor units selected for analysis. ΔF was calculated for 158 pairs of motor units recorded from nine healthy individuals during repeated submaximal contractions of the tibialis anterior muscle. The mean (SD) ΔF was 3.7 (2.5) pps (range −4.2 to 8.9 pps). The median absolute difference in ΔF for the same motor unit pair across trials was 1.8 pps, and the minimal detectable change in ΔF required to exceed measurement error was 4.8 pps. ΔF was positively related to the amount of discharge rate modulation in the reference motor unit (r2=0.335; P<…), recruitment of the reference and test motor units (r2=0.229, P<…), motor unit activity (r2=0.110, P<…), and the recruitment threshold of the test motor unit (r2=0.237, P<…) … motor unit analysis. PMID:21459110
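
The ΔF measure described above has a direct computational form: the reference unit's instantaneous firing rate at the moment the test unit is recruited, minus its rate at the moment the test unit is derecruited. The sketch below (function names and the inverse-interspike-interval rate estimate are our illustrative choices, not the authors' exact procedure, which typically smooths the rate trace) shows the core calculation:

```python
def rate_at(spike_times, t):
    """Instantaneous firing rate of a unit at time t, taken as the inverse
    of the inter-spike interval that contains t."""
    for a, b in zip(spike_times, spike_times[1:]):
        if a <= t <= b:
            return 1.0 / (b - a)
    raise ValueError("t lies outside the unit's discharge period")

def delta_f(ref_spikes, test_recruit_t, test_derecruit_t):
    """Paired motor unit analysis: change in the reference unit's firing
    rate between recruitment and derecruitment of the test unit. A positive
    value is taken as evidence of persistent inward currents."""
    return rate_at(ref_spikes, test_recruit_t) - rate_at(ref_spikes, test_derecruit_t)

# Illustrative spike train: the reference unit fires at 10 pps early in the
# contraction and slows to 4 pps near the end of it.
ref_spikes = [round(i * 0.1, 10) for i in range(11)] + [1.25, 1.5, 1.75, 2.0]
```

With the test unit recruited at t = 0.45 s (reference rate 10 pps) and derecruited at t = 1.6 s (reference rate 4 pps), ΔF comes out to about 6 pps.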

  7. Motor Development and Motor Resonance Difficulties in Autism: Relevance to Early Intervention for Language and Communication Skills

    Directory of Open Access Journals (Sweden)

    Joseph P. Mccleery

    2013-04-01

    Full Text Available Research suggests that a sub-set of children with autism experience notable difficulties and delays in motor skills development, and that a large percentage of children with autism experience deficits in motor resonance. These motor-related deficiencies, which evidence suggests are present from a very early age, are likely to negatively affect social-communicative and language development in this population. Here, we review evidence for delayed, impaired, and atypical motor development in infants and children with autism. We then carefully review and examine the current language and communication-based intervention research that is relevant to motor and motor resonance (i.e., neural mirroring mechanisms activated when we observe the actions of others) deficits in children with autism. Finally, we describe research needs and future directions and developments for early interventions aimed at addressing the speech/language and social-communication development difficulties in autism from a motor-related perspective.

  8. Motor development and motor resonance difficulties in autism: relevance to early intervention for language and communication skills

    Science.gov (United States)

    McCleery, Joseph P.; Elliott, Natasha A.; Sampanis, Dimitrios S.; Stefanidou, Chrysi A.

    2013-01-01

    Research suggests that a sub-set of children with autism experience notable difficulties and delays in motor skills development, and that a large percentage of children with autism experience deficits in motor resonance. These motor-related deficiencies, which evidence suggests are present from a very early age, are likely to negatively affect social-communicative and language development in this population. Here, we review evidence for delayed, impaired, and atypical motor development in infants and children with autism. We then carefully review and examine the current language and communication-based intervention research that is relevant to motor and motor resonance (i.e., neural “mirroring” mechanisms activated when we observe the actions of others) deficits in children with autism. Finally, we describe research needs and future directions and developments for early interventions aimed at addressing the speech/language and social-communication development difficulties in autism from a motor-related perspective. PMID:23630476

  9. EVOLUTION OF SPEECH: A NEW HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    Shishir

    2016-03-01

    Full Text Available BACKGROUND The first and foremost characteristic of speech is that it is human. Speech is a characteristic that has evolved in humans and is by far the most powerful form of communication in the kingdom Animalia. Today, humans have established themselves as an alpha species, and the evolution of speech and language has made this possible. But how is speech possible? What anatomical changes have made it possible for us to speak? A sincere effort has been made in this paper to establish a possible anatomical answer to the riddle. METHODS The prototypes of the cranial skeletons of all the major classes of the phylum Vertebrata were studied. The materials were studied in museums in Wayanad, Karwar, and the Museum of Natural History, Imphal. The skeleton of a mammal was studied in the Department of Anatomy, K. S. Hegde Medical Academy, Mangalore. RESULTS The curve formed at the base of the skull by flexion of the splanchnocranium with the neurocranium holds the key to how humans became able to speak. CONCLUSION This may not be the only factor involved in the evolution of speech; the brain also had to evolve, and indeed the occipital lobes are more prominent in humans than in lower mammals. Although not the only criterion, it is one of the most important changes that occurred in the course of evolution and enabled us to speak. This small space at the base of the brain is the difference that made us the dominant alpha species.

  10. Motor unit recruitment in human genioglossus muscle in response to hypercapnia.

    Science.gov (United States)

    Nicholas, Christian L; Bei, Bei; Worsnop, Christopher; Malhotra, Atul; Jordan, Amy S; Saboisky, Julian P; Chan, Julia K M; Duckworth, Ella; White, David P; Trinder, John

    2010-11-01

    Single motor unit recordings of the genioglossus (GG) muscle indicate that GG motor units have a variety of discharge patterns, including units that have higher discharge rates during inspiration (inspiratory phasic and inspiratory tonic), or expiration (expiratory phasic and expiratory tonic), or do not modify their rate with respiration (tonic). Previous studies have shown that an increase in GG muscle activity is a consequence of increased activity in inspiratory units. However, studies differ as to whether this increase is primarily due to recruitment of new motor units (motor unit recruitment) or to increased discharge rate of already active units (rate coding). Sleep-wake state studies in humans have suggested the former, while hypercapnia experiments in rats have suggested the latter. In this study, we investigated the effect of hypercapnia on GG motor unit activity in humans during wakefulness. Setting: sleep research laboratory. Participants: sixteen healthy men. Each participant was administered at least 6 trials, with P(et)CO(2) elevated 8.4 (SD = 1.96) mm Hg over 2 min following a 30-s baseline. Subjects were instrumented for GG EMG and respiratory measurements, with 4 fine-wire electrodes inserted subcutaneously into the muscle. One hundred forty-one motor units were identified during the baseline: 47% were inspiratory modulated, 29% expiratory modulated, and 24% showed no respiratory-related modulation. Sixty-two new units were recruited during hypercapnia. The distribution of recruited units was significantly different from the baseline distribution, with 84% being inspiratory modulated (P < …). Neither units active during baseline nor new units recruited during hypercapnia increased their discharge rate as P(et)CO(2) increased (P > 0.05 for all comparisons). Increased GG muscle activity in humans occurs because of recruitment of previously inactive inspiratory modulated units.
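
The recruitment-versus-rate-coding distinction tested here reduces to a simple tally: units active under hypercapnia but not at baseline count as recruitment, while changes in the discharge rate of units active in both periods reflect rate coding. A hypothetical sketch (the dict-of-rates data layout and function name are ours):

```python
def recruitment_vs_rate_coding(baseline, hypercapnia):
    """Given dicts mapping unit id -> mean discharge rate (pps) for a
    baseline period and a hypercapnia period, return (a) the ids of newly
    recruited units and (b) the rate change of each unit active in both
    periods."""
    recruited = [u for u in hypercapnia if u not in baseline]
    rate_changes = {u: hypercapnia[u] - baseline[u]
                    for u in baseline if u in hypercapnia}
    return recruited, rate_changes
```

Under the study's finding, such a tally would show many newly recruited (mostly inspiratory-modulated) units while the rate changes of previously active units cluster near zero.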

  11. Intonational speech prosody encoding in the human auditory cortex.

    Science.gov (United States)

    Tang, C; Hamilton, L S; Chang, E F

    2017-08-25

    Speakers of all human languages regularly use intonational pitch to convey linguistic meaning, such as to emphasize a particular word. Listeners extract pitch movements from speech and evaluate the shape of intonation contours independent of each speaker's pitch range. We used high-density electrocorticography to record neural population activity directly from the brain surface while participants listened to sentences that varied in intonational pitch contour, phonetic content, and speaker. Cortical activity at single electrodes over the human superior temporal gyrus selectively represented intonation contours. These electrodes were intermixed with, yet functionally distinct from, sites that encoded different information about phonetic features or speaker identity. Furthermore, the representation of intonation contours directly reflected the encoding of speaker-normalized relative pitch but not absolute pitch. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
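
The speaker-normalized relative pitch encoding reported here can be illustrated by z-scoring each speaker's F0 track, so that the same intonation contour yields the same values regardless of a speaker's pitch range. This is a sketch of the general idea, not the authors' exact normalization; the convention that F0 = 0 marks unvoiced frames is our assumption:

```python
import math

def speaker_normalized_pitch(f0_values):
    """Z-score a speaker's F0 track (Hz) over its voiced frames, so that
    contour shapes can be compared across speakers with different pitch
    ranges (relative rather than absolute pitch). Unvoiced frames
    (F0 == 0) are returned as None."""
    voiced = [f for f in f0_values if f > 0]
    mean = sum(voiced) / len(voiced)
    sd = math.sqrt(sum((f - mean) ** 2 for f in voiced) / len(voiced))
    return [(f - mean) / sd if f > 0 else None for f in f0_values]
```

For example, a low-pitched and a high-pitched speaker producing the same rising contour (one track being a scaled copy of the other) map onto identical normalized values, which is the invariance the intonation-selective electrodes appeared to encode.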

  12. Normal Aspects of Speech, Hearing, and Language.

    Science.gov (United States)

    Minifie, Fred. D., Ed.; And Others

    This book is written as a guide to the understanding of the processes involved in human speech communication. Ten authorities contributed material to provide an introduction to the physiological aspects of speech production and reception, the acoustical aspects of speech production and transmission, the psychophysics of sound reception, the nature…

  13. Body Topography Parcellates Human Sensory and Motor Cortex.

    Science.gov (United States)

    Kuehn, Esther; Dinse, Juliane; Jakobsen, Estrid; Long, Xiangyu; Schäfer, Andreas; Bazin, Pierre-Louis; Villringer, Arno; Sereno, Martin I; Margulies, Daniel S

    2017-07-01

    The cytoarchitectonic map as proposed by Brodmann currently dominates models of human sensorimotor cortical structure, function, and plasticity. According to this model, primary motor cortex, area 4, and primary somatosensory cortex, area 3b, are homogenous areas, with the major division lying between the two. Accumulating empirical and theoretical evidence, however, has begun to question the validity of the Brodmann map for various cortical areas. Here, we combined in vivo cortical myelin mapping with functional connectivity analyses and topographic mapping techniques to reassess the validity of the Brodmann map in human primary sensorimotor cortex. We provide empirical evidence that area 4 and area 3b are not homogenous, but are subdivided into distinct cortical fields, each representing a major body part (the hand and the face). Myelin reductions at the hand-face borders are cortical layer-specific, and coincide with intrinsic functional connectivity borders as defined using large-scale resting state analyses. Our data extend the Brodmann model in human sensorimotor cortex and suggest that body parts are an important organizing principle, similar to the distinction between sensory and motor processing. © The Author 2017. Published by Oxford University Press.

  14. The primary motor and premotor areas of the human cerebral cortex.

    Science.gov (United States)

    Chouinard, Philippe A; Paus, Tomás

    2006-04-01

    Brodmann's cytoarchitectonic map of the human cortex designates area 4 as cortex in the anterior bank of the precentral sulcus and area 6 as cortex encompassing the precentral gyrus and the posterior portion of the superior frontal gyrus on both the lateral and medial surfaces of the brain. More than 70 years ago, Fulton proposed a functional distinction between these two areas, coining the terms primary motor area for cortex in Brodmann area 4 and premotor area for cortex in Brodmann area 6. The parcellation of the cortical motor system has subsequently become more complex. Several nonprimary motor areas have been identified in the brain of the macaque monkey, and associations between anatomy and function in the human brain are being tested continuously using brain mapping techniques. In the present review, the authors discuss the unique properties of the primary motor area (M1), the dorsal portion of the premotor cortex (PMd), and the ventral portion of the premotor cortex (PMv). They end this review by discussing how the premotor areas influence M1.

  15. Acquisition and improvement of human motor skills: Learning through observation and practice

    Science.gov (United States)

    Iba, Wayne

    1991-01-01

    Skilled movement is an integral part of the human existence. A better understanding of motor skills and their development is a prerequisite to the construction of truly flexible intelligent agents. We present MAEANDER, a computational model of human motor behavior, that uniformly addresses both the acquisition of skills through observation and the improvement of skills through practice. MAEANDER consists of a sensory-effector interface, a memory of movements, and a set of performance and learning mechanisms that let it recognize and generate motor skills. The system initially acquires such skills by observing movements performed by another agent and constructing a concept hierarchy. Given a stored motor skill in memory, MAEANDER will cause an effector to behave appropriately. All learning involves changing the hierarchical memory of skill concepts to more closely correspond to either observed experience or to desired behaviors. We evaluated MAEANDER empirically with respect to how well it acquires and improves both artificial movement types and handwritten script letters from the alphabet. We also evaluate MAEANDER as a psychological model by comparing its behavior to robust phenomena in humans and by considering the richness of the predictions it makes.

  16. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking functions abnormally in the speech motor systems of persons who stutter (PWS. Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms, but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05. Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  17. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    Science.gov (United States)

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

    Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discrete retainer. This device connects to a computer, tablet or a smartphone via Bluetooth connection. The developed TTS application converts text typed by the tongue into audible speech. Our studies have concluded that an 8-contact point configuration between the tongue and the TTS device would yield the best user precision and speed performance. On average using the TTS device inside the oral cavity takes 2.5 times longer than the pointer finger using a T9 (Text on 9 keys) keyboard configuration to type the same phrase. In conclusion, we have developed a discrete noninvasive wearable device that allows the vocally impaired individuals to communicate in real time.

  18. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. The challenges of dysphagia in treating motor neurone disease.

    Science.gov (United States)

    Vesey, Siobhan

    2017-07-01

    Motor neurone disease (MND) is a relatively rare degenerative disorder. Its impacts are manifested in progressive loss of motor function and often accompanied by wider non-motor changes. Swallowing and speech abilities are frequently severely impaired. Effective management of dysphagia (swallowing difficulty) symptoms and nutritional care requires a holistic multidisciplinary approach. Care must be patient focused, facilitate patient decision making, and support planning towards end of life care. This article discusses the challenges of providing effective nutritional care to people living with motor neurone disease who have dysphagia.

  20. Speech-to-Speech Relay Service

    Science.gov (United States)

    Consumer Guide Speech to Speech Relay Service Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  1. Spasmodic Dysphonia: a Laryngeal Control Disorder Specific to Speech

    Science.gov (United States)

    Ludlow, Christy L.

    2016-01-01

    Spasmodic dysphonia (SD) is a rare neurological disorder that emerges in middle age, is usually sporadic, and affects intrinsic laryngeal muscle control only during speech. Spasmodic bursts in particular laryngeal muscles disrupt voluntary control during vowel sounds in adductor SD and interfere with voice onset after voiceless consonants in abductor SD. Little is known about its origins; it is classified as a focal dystonia secondary to an unknown neurobiological mechanism that produces a chronic abnormality of laryngeal motor neuron regulation during speech. It develops primarily in females and does not interfere with breathing, crying, laughter, and shouting. Recent postmortem studies have implicated the accumulation of clusters in the parenchyma and perivascular regions with inflammatory changes in the brainstem in one to two cases. A few cases with single mutations in THAP1, a gene involved in transcription regulation, suggest that a weak genetic predisposition may contribute to mechanisms causing a nonprogressive abnormality in laryngeal motor neuron control for speech but not for vocal emotional expression. Research is needed to address the basic cellular and proteomic mechanisms that produce this disorder to provide intervention that could target the pathogenesis of the disorder rather than only providing temporary symptom relief. PMID:21248101

  2. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.

  3. Speech-specific tuning of neurons in human superior temporal gyrus.

    Science.gov (United States)

    Chan, Alexander M; Dykstra, Andrew R; Jayaram, Vinay; Leonard, Matthew K; Travis, Katherine E; Gygi, Brian; Baker, Janet M; Eskandar, Emad; Hochberg, Leigh R; Halgren, Eric; Cash, Sydney S

    2014-10-01

    How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phonemes and words identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in human superior temporal gyrus use sparse spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Speech Recognition for the iCub Platform

    Directory of Open Access Journals (Sweden)

    Bertrand Higy

    2018-02-01

    Full Text Available This paper describes open source software (available at https://github.com/robotology/natural-speech to build automatic speech recognition (ASR systems and run them within the YARP platform. The toolkit is designed (i to allow non-ASR experts to easily create their own ASR system and run it on iCub and (ii to build deep learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human–iCub interactions. The toolkit mostly consists of Python, C++ code and shell scripts integrated in YARP. As additional contribution, a second codebase (written in Matlab is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: “articulatory” and “unsupervised” speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The second type of recognition systems, the “unsupervised” systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems. To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-h speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.

  5. Catecholaminergic consolidation of motor cortical neuroplasticity in humans.

    Science.gov (United States)

    Nitsche, Michael A; Grundey, Jessica; Liebetanz, David; Lang, Nicolas; Tergau, Frithjof; Paulus, Walter

    2004-11-01

    Amphetamine, a catecholaminergic re-uptake-blocker, is able to improve neuroplastic mechanisms in humans. However, so far not much is known about the underlying physiological mechanisms. Here, we study the impact of amphetamine on NMDA receptor-dependent long-lasting excitability modifications in the human motor cortex elicited by weak transcranial direct current stimulation (tDCS). Amphetamine significantly enhanced and prolonged increases in anodal, tDCS-induced, long-lasting excitability. Under amphetamine premedication, anodal tDCS resulted in an enhancement of excitability which lasted until the morning after tDCS, compared to approximately 1 h in the placebo condition. Prolongation of the excitability enhancement was most pronounced for long-term effects; the duration of short-term excitability enhancement was only slightly increased. Since the additional application of the NMDA receptor antagonist dextromethorphane blocked any enhancement of tDCS-driven excitability under amphetamine, we conclude that amphetamine consolidates the tDCS-induced neuroplastic effects, but does not initiate them. The fact that propanolol, a beta-adrenergic antagonist, diminished the duration of the tDCS-generated after-effects suggests that adrenergic receptors play a certain role in the consolidation of NMDA receptor-dependent motor cortical excitability modifications in humans. This result may enable researchers to optimize neuroplastic processes in the human brain on the rational basis of purpose-designed pharmacological interventions.

  6. Correlational Analysis of Speech Intelligibility Tests and Metrics for Speech Transmission

    Science.gov (United States)

    2017-12-04

    sounds, are more prone to masking than the high-energy, wide-spectrum vowels. Such contaminated speech is still audible but not clear. Thus, speech...Science; 2012 June 12–14; Kuala Lumpur ( Malaysia ): New York (NY): IEEE; c2012. p. 676–682. Approved for public release; distribution is unlimited. 47...ARRABITO 1 UNIV OF COLORADO (PDF) K AREHART 1 NASA (PDF) J ALLEN 1 FOOD AND DRUG ADM-DEPT (PDF) OF HEALTH AND HUMAN SERVICES

  7. Reality Monitoring and Feedback Control of Speech Production Are Related Through Self-Agency.

    Science.gov (United States)

    Subramaniam, Karuna; Kothare, Hardik; Mizuiri, Danielle; Nagarajan, Srikantan S; Houde, John F

    2018-01-01

    Self-agency is the experience of being the agent of one's own thoughts and motor actions. The intact experience of self-agency is necessary for successful interactions with the outside world (i.e., reality monitoring) and for responding to sensory feedback of our motor actions (e.g., speech feedback control). Reality monitoring is the ability to distinguish internally self-generated information from outside reality (externally-derived information). In the present study, we examined the relationship of self-agency between lower-level speech feedback monitoring (i.e., monitoring what we hear ourselves say) and a higher-level cognitive reality monitoring task. In particular, we examined whether speech feedback monitoring and reality monitoring were driven by the capacity to experience self-agency-the ability to make reliable predictions about the outcomes of self-generated actions. During the reality monitoring task, subjects made judgments as to whether information was previously self-generated (self-agency judgments) or externally derived (external-agency judgments). During speech feedback monitoring, we assessed self-agency by altering environmental auditory feedback so that subjects listened to a perturbed version of their own speech. When subjects heard minimal perturbations in their auditory feedback while speaking, they made corrective responses, indicating that they judged the perturbations as errors in their speech output. We found that self-agency judgments in the reality-monitoring task were higher in people who had smaller corrective responses ( p = 0.05) and smaller inter-trial variability ( p = 0.03) during minimal pitch perturbations of their auditory feedback. These results provide support for a unitary process for the experience of self-agency governing low-level speech control and higher level reality monitoring.

  8. [The rehabilitation treatment of patients with motor and cognitive disorders after stroke].

    Science.gov (United States)

    Sakharov, V Iu; Isanova, V A

    2014-01-01

    Objective. To study the possibility of using the rehabilitative pneumatic suit "Atlant" in stroke outpatients. Material and methods. We studied 11 stroke patients who wore the pneumatic suit in the early rehabilitation period. A comparison group included 13 patients. The high effectiveness of complex treatment with using the suit "Atlant" was shown. The motor activity was improved in 71.4% of patients, the recovery of speech was found in 33.3% patients. Conclusion. Continuity of rehabilitation in outpatients with stroke promotes the recovery of functional activity, motor, cognitive and speech functions and positively impacts on the emotional state of the patient.

  9. Functional resting-state connectivity of the human motor network: differences between right- and left-handers.

    Science.gov (United States)

    Pool, Eva-Maria; Rehme, Anne K; Eickhoff, Simon B; Fink, Gereon R; Grefkes, Christian

    2015-04-01

    Handedness is associated with differences in activation levels in various motor tasks performed with the dominant or non-dominant hand. Here we tested whether handedness is reflected in the functional architecture of the motor system even in the absence of an overt motor task. Using resting-state functional magnetic resonance imaging we investigated 18 right- and 18 left-handers. Whole-brain functional connectivity maps of the primary motor cortex (M1), supplementary motor area (SMA), dorsolateral premotor cortex (PMd), pre-SMA, inferior frontal junction and motor putamen were compared between right- and left-handers. We further used a multivariate linear support vector machine (SVM) classifier to reveal the specificity of brain regions for classifying handedness based on individual resting-state maps. Using left M1 as seed region, functional connectivity analysis revealed stronger interhemispheric functional connectivity between left M1 and right PMd in right-handers as compared to left-handers. This connectivity cluster contributed to the individual classification of right- and left-handers with 86.2% accuracy. Consistently, also seeding from right PMd yielded a similar handedness-dependent effect in left M1, albeit with lower classification accuracy (78.1%). Control analyses of the other resting-state networks including the speech and the visual network revealed no significant differences in functional connectivity related to handedness. In conclusion, our data revealed an intrinsically higher functional connectivity in right-handers. These results may help to explain that hand preference is more lateralized in right-handers than in left-handers. Furthermore, enhanced functional connectivity between left M1 and right PMd may serve as an individual marker of handedness. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Feedback Frequency in Treatment for Childhood Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Butalla, Christine E.; Farinella, Kimberly A.

    2012-01-01

    Purpose: To examine the role of feedback frequency in treatment for childhood apraxia of speech (CAS). Reducing the frequency of feedback enhances motor learning, and recently, such feedback frequency reductions have been recommended for the treatment of CAS. However, no published studies have explicitly compared different feedback frequencies in…

  11. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Mostly transforms are used for speech data compressions which are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However the vector quantization (VQ) has a potential to give more data compression maintaining the same quality. In this paper we propose speech data compression algorithm using vector quantization technique. We have used VQ algorithms LBG, KPE and FCG. The results table s...

  12. Damage to the Left Precentral Gyrus Is Associated With Apraxia of Speech in Acute Stroke.

    Science.gov (United States)

    Itabashi, Ryo; Nishio, Yoshiyuki; Kataoka, Yuka; Yazawa, Yukako; Furui, Eisuke; Matsuda, Minoru; Mori, Etsuro

    2016-01-01

    Apraxia of speech (AOS) is a motor speech disorder, which is clinically characterized by the combination of phonemic segmental changes and articulatory distortions. AOS has been believed to arise from impairment in motor speech planning/programming and differentiated from both aphasia and dysarthria. The brain regions associated with AOS are still a matter of debate. The aim of this study was to address this issue in a large number of consecutive acute ischemic stroke patients. We retrospectively studied 136 patients with isolated nonlacunar infarcts in the left middle cerebral artery territory (70.5±12.9 years old, 79 males). In accordance with speech and language assessments, the patients were classified into the following groups: pure form of AOS (pure AOS), AOS with aphasia (AOS-aphasia), and without AOS (non-AOS). Voxel-based lesion-symptom mapping analysis was performed on T2-weighted images or fluid-attenuated inversion recovery images. Using the Liebermeister method, group-wise comparisons were made between the all AOS (pure AOS plus AOS-aphasia) and non-AOS, pure AOS and non-AOS, AOS-aphasia and non-AOS, and pure AOS and AOS-aphasia groups. Of the 136 patients, 22 patients were diagnosed with AOS (7 patients with pure AOS and 15 patients with AOS-aphasia). The voxel-based lesion-symptom mapping analysis demonstrated that the brain regions associated with AOS were centered on the left precentral gyrus. Damage to the left precentral gyrus is associated with AOS in acute to subacute stroke patients, suggesting a role of this brain region in motor speech production. © 2015 American Heart Association, Inc.

  13. Detection of target phonemes in spontaneous and read speech

    OpenAIRE

    Mehta, G.; Cutler, A.

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous and read speech materials, and their response time to detect word-initial target phonem...

  14. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia...... and inaccuracies in the anaesthesia record. Supplementing the electronic anaesthesia record interface with speech input facilities is proposed as one possible solution to a part of the problem. The testing of the various hypotheses has involved the development of a prototype of an electronic anaesthesia record...... interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition...

  15. Altered resting-state network connectivity in stroke patients with and without apraxia of speech.

    Science.gov (United States)

    New, Anneliese B; Robin, Donald A; Parkinson, Amy L; Duffy, Joseph R; McNeil, Malcom R; Piguet, Olivier; Hornberger, Michael; Price, Cathy J; Eickhoff, Simon B; Ballard, Kirrie J

    2015-01-01

    Motor speech disorders, including apraxia of speech (AOS), account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS), inferior frontal gyrus (IFG), and ventral premotor cortex (PM)) in a group of 32 left hemisphere stroke patients and 18 healthy, age-matched controls. Two expert clinicians rated severity of AOS, dysarthria and nonverbal oral apraxia of the patients. Fifteen individuals were categorized as AOS and 17 were AOS-absent. Comparison of connectivity in patients with and without AOS demonstrated that AOS patients had reduced connectivity between bilateral PM, and this reduction correlated with the severity of AOS impairment. In addition, AOS patients had negative connectivity between the left PM and right aINS and this effect decreased with increasing severity of non-verbal oral apraxia. These results highlight left PM involvement in AOS, begin to differentiate its neural mechanisms from those of other motor impairments following stroke, and help inform us of the neural mechanisms driving differences in speech motor planning and programming impairment following stroke.

  16. Altered resting-state network connectivity in stroke patients with and without apraxia of speech

    Directory of Open Access Journals (Sweden)

    Anneliese B. New

    2015-01-01

    Full Text Available Motor speech disorders, including apraxia of speech (AOS, account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS, inferior frontal gyrus (IFG, and ventral premotor cortex (PM in a group of 32 left hemisphere stroke patients and 18 healthy, age-matched controls. Two expert clinicians rated severity of AOS, dysarthria and nonverbal oral apraxia of the patients. Fifteen individuals were categorized as AOS and 17 were AOS-absent. Comparison of connectivity in patients with and without AOS demonstrated that AOS patients had reduced connectivity between bilateral PM, and this reduction correlated with the severity of AOS impairment. In addition, AOS patients had negative connectivity between the left PM and right aINS and this effect decreased with increasing severity of non-verbal oral apraxia. These results highlight left PM involvement in AOS, begin to differentiate its neural mechanisms from those of other motor impairments following stroke, and help inform us of the neural mechanisms driving differences in speech motor planning and programming impairment following stroke.

  17. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    Full Text Available We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensation and voice sounds under DAF with an adaptation technique. Participants read sentences under specific DAF delays (0, 30, 75, or 120 ms) for three minutes to induce 'Lag Adaptation'. After the adaptation, they judged the simultaneity between the motor sensation and the auditory feedback while producing a simple voiced sound rather than speech. We found that speech production under lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by a temporal recalibration mechanism that acutely monitors the temporal delays between motor sensation and vocal sound.

  18. Evaluation of Short-Term Cepstral Based Features for Detection of Parkinson’s Disease Severity Levels through Speech signals

    Science.gov (United States)

    Oung, Qi Wei; Nisha Basah, Shafriza; Muthusamy, Hariharan; Vijean, Vikneswaran; Lee, Hoileong

    2018-03-01

    Parkinson’s disease (PD) is a progressive neurodegenerative disease known as a motor system syndrome, caused by the death of dopamine-generating cells in the substantia nigra, a region of the human midbrain. PD normally affects people over 60 years of age and currently affects a large part of the worldwide population. Many recent studies have investigated the connection between PD and speech disorders, and have revealed that speech signals may be a suitable biomarker for distinguishing people with Parkinson’s (PWP) from healthy subjects. Early diagnosis of PD through speech signals can therefore be considered. In this research, the speech data are acquired based on speech behaviour as the biomarker for differentiating PD severity levels (mild and moderate) from healthy subjects. The feature extraction algorithms applied are Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), and Weighted Linear Prediction Cepstral Coefficients (WLPCC). For classification, two types of classifiers are used: k-Nearest Neighbour (KNN) and Probabilistic Neural Network (PNN). The experimental results demonstrate that the PNN and KNN classifiers achieve best average classification performances of 92.63% and 88.56%, respectively, under 10-fold cross-validation. The suggested techniques thus show promise as tools for PD detection.
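    The pipeline described in this record (per-utterance cepstral feature vectors, a KNN classifier, 10-fold cross-validation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Gaussian vectors merely stand in for real MFCC/LPCC features extracted from speech recordings, and the class separation, k = 3, and fold count are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for per-utterance cepstral feature vectors
    # (e.g. mean MFCCs); real features would come from speech recordings.
    n_per_class, n_feat = 50, 13
    healthy = rng.normal(0.0, 1.0, (n_per_class, n_feat))
    parkinsonian = rng.normal(1.0, 1.0, (n_per_class, n_feat))
    X = np.vstack([healthy, parkinsonian])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    def knn_predict(train_X, train_y, test_X, k=3):
        # Euclidean-distance k-nearest-neighbour majority vote
        d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
        nearest = np.argsort(d, axis=1)[:, :k]
        votes = train_y[nearest]
        return (votes.mean(axis=1) > 0.5).astype(int)

    # 10-fold cross-validation: hold out each fold in turn
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 10)
    accs = []
    for fold in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        pred = knn_predict(X[mask], y[mask], X[fold])
        accs.append((pred == y[fold]).mean())
    print(f"mean 10-fold accuracy: {np.mean(accs):.2f}")
    ```

    The reported 88-93% accuracies come from the real dataset and classifiers; on this toy data the number printed only demonstrates the evaluation procedure.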

  19. Interaction matters: A perceived social partner alters the neural processing of human speech.

    Science.gov (United States)

    Rice, Katherine; Redcay, Elizabeth

    2016-04-01

    Mounting evidence suggests that social interaction changes how communicative behaviors (e.g., spoken language, gaze) are processed, but the precise neural bases by which social-interactive context may alter communication remain unknown. Various perspectives suggest that live interactions are more rewarding, more attention-grabbing, or require increased mentalizing (thinking about the thoughts of others). Dissociating between these possibilities is difficult because most extant neuroimaging paradigms examining social interaction have not directly compared live paradigms to conventional "offline" (or recorded) paradigms. We developed a novel fMRI paradigm to assess whether and how an interactive context changes the processing of speech matched in content and vocal characteristics. Participants listened to short vignettes--which contained no reference to people or mental states--believing that some vignettes were prerecorded and that others were presented over a real-time audio-feed by a live social partner. In actuality, all speech was prerecorded. Simply believing that speech was live increased activation in each participant's own mentalizing regions, defined using a functional localizer. Contrasting live to recorded speech did not reveal significant differences in attention or reward regions. Further, higher levels of autistic-like traits were associated with altered neural specialization for live interaction. These results suggest that humans engage in ongoing mentalizing about social partners, even when such mentalizing is not explicitly required, illustrating how social context shapes social cognition. Understanding communication in social context has important implications for typical and atypical social processing, especially for disorders like autism where social difficulties are more acute in live interaction. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia.

    Science.gov (United States)

    Fridriksson, Julius; Guo, Dazhou; Fillmore, Paul; Holland, Audrey; Rorden, Chris

    2013-11-01

    , the two most common kinds of non-fluent aphasia. In summary, the current results suggest that the anterior segment of the left arcuate fasciculus, a white matter tract that lies deep to posterior portions of Broca's area and the sensory-motor cortex, is a robust predictor of impaired speech fluency in aphasic patients, even when motor speech, lexical processing, and executive functioning are included as co-factors. Simply put, damage to those regions results in non-fluent aphasic speech; when they are undamaged, fluent aphasias result.

  1. Functional magnetic resonance imaging exploration of combined hand and speech movements in Parkinson's disease.

    Science.gov (United States)

    Pinto, Serge; Mancini, Laura; Jahanshahi, Marjan; Thornton, John S; Tripoliti, Elina; Yousry, Tarek A; Limousin, Patricia

    2011-10-01

    Among the repertoire of motor functions, although hand movement and speech production tasks have been investigated widely by functional neuroimaging, paradigms combining both movements have been studied less so. Such paradigms are of particular interest in Parkinson's disease, in which patients have specific difficulties performing two movements simultaneously. In 9 unmedicated patients with Parkinson's disease and 15 healthy control subjects, externally cued tasks (i.e., hand movement, speech production, and combined hand movement and speech production) were performed twice in a random order, and functional magnetic resonance imaging detected cerebral activations compared to rest. F-statistics tested within-group effects (significant activations at P < 0.05; cluster extent > 10 voxels). For control subjects, the combined task activations comprised the sum of those obtained during hand movement and speech production performed separately, reflecting the neural correlates of performing movements sharing similar programming modalities. In patients with Parkinson's disease, only activations underlying hand movement were observed during the combined task. We interpreted this phenomenon as patients' potential inability to recruit facilitatory activations while performing two movements simultaneously. This lost capacity could be related to a functional prioritization of one movement (i.e., hand movement), in comparison with the other (i.e., speech production). Our observation could also reflect the inability of patients with Parkinson's disease to intrinsically engage the motor coordination necessary to perform a combined task. Copyright © 2011 Movement Disorder Society.

  2. Organization of the human motor system as studied by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Mattay, Venkata S.; Weinberger, Daniel R.

    1999-01-01

    Blood oxygenation level dependent functional magnetic resonance imaging (BOLD fMRI), because of its superior resolution and unlimited repeatability, can be particularly useful in studying functional aspects of the human motor system, especially plasticity, and somatotopic and temporal organization. In this survey, while describing studies that have reliably used BOLD fMRI to examine these aspects of the motor system, we also discuss studies that investigate the neural substrates underlying motor skill acquisition, motor imagery, and production of motor sequences; the effect of rate and force of movement on brain activation; and hemispheric control of motor function. In the clinical realm, in addition to the presurgical evaluation of neurosurgical patients, BOLD fMRI has been used to explore the mechanisms underlying motor abnormalities in patients with neuropsychiatric disorders and the mechanisms underlying reorganization or plasticity of the motor system following a cerebral insult.

  3. Script Training Treatment for Adults with Apraxia of Speech

    Science.gov (United States)

    Youmans, Gina; Youmans, Scott R.; Hancock, Adrienne B.

    2011-01-01

    Purpose: Outcomes of script training for individuals with apraxia of speech (AOS) and mild anomic aphasia were investigated. Script training is a functional treatment that has been successful for individuals with aphasia but has not been applied to individuals with AOS. Principles of motor learning were incorporated into training to promote…

  4. Children with 7q11.23 Duplication Syndrome: Speech, Language, Cognitive, and Behavioral Characteristics and their Implications for Intervention

    OpenAIRE

    Velleman, Shelley L.; Mervis, Carolyn B.

    2011-01-01

    7q11.23 duplication syndrome is a recently-documented genetic disorder associated with severe speech delay, language delay, a characteristic facies, hypotonia, developmental delay, and social anxiety. Developmentally appropriate nonverbal pragmatic abilities are demonstrated in socially comfortable situations. Motor speech disorder (Childhood Apraxia of Speech and/or dysarthria), oral apraxia, and/or phonological disorder or symptoms of these disorders are common as are characteristics consis...

  5. Direct conversion of human pluripotent stem cells into cranial motor neurons using a piggyBac vector

    Directory of Open Access Journals (Sweden)

    Riccardo De Santis

    2018-05-01

    Full Text Available Human pluripotent stem cells (PSCs) are widely used for in vitro disease modeling. One of the challenges in the field is represented by the ability of converting human PSCs into specific disease-relevant cell types. The nervous system is composed of a wide variety of neuronal types with selective vulnerability in neurodegenerative diseases. This is particularly relevant for motor neuron diseases, in which different motor neurons populations show a different susceptibility to degeneration. Here we developed a fast and efficient method to convert human induced Pluripotent Stem Cells into cranial motor neurons of the branchiomotor and visceral motor subtype. These populations represent the motor neuron subgroup that is primarily affected by a severe form of amyotrophic lateral sclerosis with bulbar onset and worst prognosis. This goal was achieved by stable integration of an inducible vector, based on the piggyBac transposon, allowing controlled activation of Ngn2, Isl1 and Phox2a (NIP). The NIP module effectively produced electrophysiologically active cranial motor neurons. Our method can be easily extended to PSCs carrying disease-associated mutations, thus providing a useful tool to shed light on the cellular and molecular bases of selective motor neuron vulnerability in pathological conditions. Keywords: Spinal motor neuron, Cranial motor neuron, Induced pluripotent stem cells, Amyotrophic lateral sclerosis, Phox2a, piggyBac

  6. Abnormal Brain Dynamics Underlie Speech Production in Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Pang, Elizabeth W; Valica, Tatiana; MacDonald, Matt J; Taylor, Margot J; Brian, Jessica; Lerch, Jason P; Anagnostou, Evdokia

    2016-02-01

    A large proportion of children with autism spectrum disorder (ASD) have speech and/or language difficulties. While a number of structural and functional neuroimaging methods have been used to explore the brain differences in ASD with regards to speech and language comprehension and production, the neurobiology of basic speech function in ASD has not been examined. Magnetoencephalography (MEG) is a neuroimaging modality with high spatial and temporal resolution that can be applied to the examination of brain dynamics underlying speech as it can capture the fast responses fundamental to this function. We acquired MEG from 21 children with high-functioning autism (mean age: 11.43 years) and 21 age- and sex-matched controls as they performed a simple oromotor task, a phoneme production task and a phonemic sequencing task. Results showed significant differences in activation magnitude and peak latencies in primary motor cortex (Brodmann Area 4), motor planning areas (BA 6), temporal sequencing and sensorimotor integration areas (BA 22/13) and executive control areas (BA 9). Our findings of significant functional brain differences between these two groups on these simple oromotor and phonemic tasks suggest that these deficits may be foundational and could underlie the language deficits seen in ASD. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.

  7. Bringing transcranial mapping into shape: Sulcus-aligned mapping captures motor somatotopy in human primary motor hand area

    DEFF Research Database (Denmark)

    Raffin, Estelle; Pellegrino, Giovanni; Di Lazzaro, Vincenzo

    2015-01-01

    Motor representations express some degree of somatotopy in human primary motor hand area (M1HAND), but within-M1HAND corticomotor somatotopy has been difficult to study with transcranial magnetic stimulation (TMS). Here we introduce a “linear” TMS mapping approach based on the individual shape...... of the central sulcus to obtain mediolateral corticomotor excitability profiles of the abductor digiti minimi (ADM) and first dorsal interosseus (FDI) muscles. In thirteen young volunteers, we used stereotactic neuronavigation to stimulate the right M1HAND with a small figure-of-eight coil at 120% of FDI resting...

  8. Post-stroke pure apraxia of speech - A rare experience.

    Science.gov (United States)

    Polanowska, Katarzyna Ewa; Pietrzyk-Krawczyk, Iwona

    Apraxia of speech (AOS) is a motor speech disorder, most typically caused by stroke, which in its "pure" form (without other speech-language deficits) is very rare in clinical practice. Because some observable characteristics of AOS overlap with more common verbal communication neurologic syndromes (i.e. aphasia, dysarthria) distinguishing them may be difficult. The present study describes AOS in a 49-year-old right-handed male after left-hemispheric stroke. Analysis of his articulatory and prosodic abnormalities in the context of intact communicative abilities as well as description of symptoms dynamics over time provides valuable information for clinical diagnosis of this specific disorder and prognosis for its recovery. This in turn is the basis for the selection of appropriate rehabilitative interventions. Copyright © 2016 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  9. Speech, communication and use of augmentative communication in young people with cerebral palsy: the SH&PE population study.

    Science.gov (United States)

    Cockerill, H; Elbourne, D; Allen, E; Scrutton, D; Will, E; McNee, A; Fairhurst, C; Baird, G

    2014-03-01

    Communication is frequently impaired in young people (YP) with bilateral cerebral palsy (CP). Important factors include motoric speech problems (dysarthria) and intellectual disability. Augmentative and Alternative Communication (AAC) techniques are often employed. The aim was to describe the speech problems in bilateral CP, factors associated with speech problems, current AAC provision and use, and to explore the views of both the parent/carer and young person about communication. A total population of children with bilateral CP (n = 346) from four consecutive years of births (1989-1992 inclusive) with onset of CP before 15 months were reassessed at age 16-18 years. Motor skills and speech were directly assessed and both parent/carer and the young person asked about communication and satisfaction with it. Sixty had died, eight had other conditions, 243 consented and speech was assessed in 224 of whom 141 (63%) had impaired speech. Fifty-two (23% of total YP) were mainly intelligible to unfamiliar people, 22 (10%) were mostly unintelligible to unfamiliar people, 67 (30%) were mostly or wholly unintelligible even to familiar adults. However, 89% of parent/carers said that they could communicate 1:1 with their young person. Of the 128 YP who could independently complete the questions, 107 (83.6%) were happy with their communication, nine (7%) neither happy nor unhappy and 12 (9.4%) unhappy. A total of 72 of 224 (32%) were provided with one or more types of AAC but in a significant number (75% of 52 recorded) AAC was not used at home, only in school. Factors associated with speech impairment were severity of physical impairment, as measured by Gross Motor Function Scale level and manipulation in the best hand, intellectual disability and current epilepsy. In a population representative group of YP, aged 16-18 years, with bilateral CP, 63% had impaired speech of varying severity, most had been provided with AAC but few used it at home for communication. © 2013 John

  10. Diagnosis and neurologopedic therapy in a child with sensory-motor alalia

    Directory of Open Access Journals (Sweden)

    Marta Wawrzynów

    2018-01-01

    Full Text Available Introduction: Sensory-motor alalia is a disorder of speech comprehension and production: of auditory perception, which is built on intact physical hearing, and of the mechanisms that plan speech movements and ensure their accuracy. It is a dysfunction whose difficulties reveal themselves by 2 years of age. The usual cause is damage to structures of the cerebral cortex, which may occur during fetal life or the perinatal period. Sensory-motor alalia is most often confused with autism spectrum disorders, as the two can present similarly. Objective: The aim of the study was to develop and apply individual neurologopedic therapy for a child with sensory-motor alalia and to answer the question of whether such therapy can improve the child's speech perception and abilities. Material and methods: The research method was an individual case study. Diagnostic data were obtained from an interview, observation, orientational speech testing and neurologopedic examination, and were supplemented with the child's medical records. Results: The neurologopedic therapy brought the desired results. Gains were achieved in manual and motor skills and in eye-hand coordination. Auditory-visual memory and perception improved, and attention span lengthened. Vocabulary was significantly enriched. The child developed the ability to play and a desire to imitate. Independent eating and the function of the organs of the oral-facial area improved. The patient became less sensitive to stimuli and more stable, and central muscle tone was strengthened.

  11. Behavioral and neurobiological correlates of childhood apraxia of speech in Italian children.

    Science.gov (United States)

    Chilosi, Anna Maria; Lorenzini, Irene; Fiori, Simona; Graziosi, Valentina; Rossi, Giuseppe; Pasquariello, Rosa; Cipriani, Paola; Cioni, Giovanni

    2015-11-01

    Childhood apraxia of speech (CAS) is a neurogenic Speech Sound Disorder whose etiology and neurobiological correlates are still unclear. In the present study, 32 Italian children with idiopathic CAS underwent a comprehensive speech and language, genetic and neuroradiological investigation aimed at gathering information on the possible behavioral and neurobiological markers of the disorder. The results revealed four main aggregations of behavioral symptoms that indicate a multi-deficit disorder involving both motor-speech and language competence. Six children presented with chromosomal alterations. The familial aggregation rate for speech and language difficulties and the male to female ratio were both very high in the whole sample, supporting the hypothesis that genetic factors make a substantial contribution to the risk of CAS. As expected in accordance with the diagnosis of idiopathic CAS, conventional MRI did not reveal macrostructural pathogenic neuroanatomical abnormalities, suggesting that CAS may be due to brain microstructural alterations. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Survey on Chatbot Design Techniques in Speech Conversation Systems

    OpenAIRE

    Sameera A. Abdul-Kader; Dr. John Woods

    2015-01-01

    Human-Computer Speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech based search engines and assistants such as Siri, Google Chrome and Cortana. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine to provide appropriate human like responses. This type of programme is called a Chatbot, which is the focus of this study. This pap...
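    The chatbot design the survey describes, an engine mapping user input to appropriate responses, can be reduced to a minimal rule-based sketch. The patterns and canned replies below are invented for illustration (NLTK itself is not used here); production systems layer tokenization, intent classification, and dialogue state on top of this idea.

    ```python
    import re

    # Ordered list of (pattern, response) rules: first match wins.
    # Rules and responses are illustrative, not from any real system.
    RULES = [
        (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
        (re.compile(r"\bname\b", re.I), "I'm a demo chatbot."),
        (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
    ]
    FALLBACK = "Sorry, I don't understand. Could you rephrase?"

    def respond(utterance: str) -> str:
        # Scan rules in order; fall back when nothing matches.
        for pattern, reply in RULES:
            if pattern.search(utterance):
                return reply
        return FALLBACK

    print(respond("Hi there"))           # → Hello! How can I help you?
    print(respond("What's your name?"))  # → I'm a demo chatbot.
    ```

    A speech interface would wrap this loop with a recognizer on input and a synthesizer on output, which is the pipeline the surveyed assistants share.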

  13. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles, Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high-stress, high-workload environment when responding to the alert. Furthermore, crew receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth, where ground operators can assist as needed. On long-duration missions, however, crews will need to handle off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09), funded by the Human Research Program, included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts with and without a speech component - either at the beginning or at the end of the tone - can be identified. Even though crew were familiar with the tone alerts from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA). 
In the laboratory study, the alerts were presented by software and participants were

  14. Level of action of cathodal DC polarisation induced inhibition of the human motor cortex.

    Science.gov (United States)

    Nitsche, Michael A; Nitsche, Maren S; Klein, Cornelia C; Tergau, Frithjof; Rothwell, John C; Paulus, Walter

    2003-04-01

    To induce prolonged motor cortical excitability reductions by transcranial direct current stimulation in the human. Cathodal direct current stimulation was applied transcranially to the hand area of the human primary motor cortex from 5 to 9 min in separate sessions in twelve healthy subjects. Cortico-spinal excitability was tested by single pulse transcranial magnetic stimulation. Transcranial electrical stimulation and H-reflexes were used to learn about the origin of the excitability changes. Neurone specific enolase was measured before and after the stimulation to prove the safety of the stimulation protocol. Five and 7 min direct current stimulation resulted in motor cortical excitability reductions, which lasted for minutes after the end of stimulation, 9 min stimulation induced after-effects for up to an hour after the end of stimulation, as revealed by transcranial magnetic stimulation. Muscle evoked potentials elicited by transcranial electric stimulation and H-reflexes did not change. Neurone specific enolase concentrations remained stable throughout the experiments. Cathodal transcranial direct current stimulation is capable of inducing prolonged excitability reductions in the human motor cortex non-invasively. These changes are most probably localised intracortically.

  15. Neural networks engaged in short-term memory rehearsal are disrupted by irrelevant speech in human subjects.

    Science.gov (United States)

    Kopp, Franziska; Schröger, Erich; Lipka, Sigrid

    2004-01-02

    Rehearsal mechanisms in human short-term memory are increasingly understood in the light of both behavioural and neuroanatomical findings. However, little is known about the cooperation of participating brain structures and how such cooperations are affected when memory performance is disrupted. In this paper we use EEG coherence as a measure of synchronization to investigate rehearsal processes and their disruption by irrelevant speech in a delayed serial recall paradigm. Fronto-central and fronto-parietal theta (4-7.5 Hz), beta (13-20 Hz), and gamma (35-47 Hz) synchronizations are shown to be involved in our short-term memory task. Moreover, the impairment in serial recall due to irrelevant speech was preceded by a reduction of gamma band coherence. Results suggest that the irrelevant speech effect has its neural basis in the disruption of left-lateralized fronto-central networks. This stresses the importance of gamma band activity for short-term memory operations.
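    EEG coherence of the kind used in this study, a band-limited measure of synchronization between two channels, is conventionally computed as magnitude-squared coherence from Welch-averaged cross-spectra. The sketch below is illustrative only: the two "channels" are synthetic signals sharing a 6 Hz theta component, and the sampling rate, segment length, and band edges are assumptions, not parameters from the study.

    ```python
    import numpy as np

    def msc(x, y, fs, nperseg=256):
        """Magnitude-squared coherence via Welch-averaged cross-spectra."""
        step = nperseg // 2                      # 50% segment overlap
        win = np.hanning(nperseg)
        sxx = syy = sxy = 0
        for start in range(0, len(x) - nperseg + 1, step):
            X = np.fft.rfft(win * x[start:start + nperseg])
            Y = np.fft.rfft(win * y[start:start + nperseg])
            sxx = sxx + (X * X.conj()).real      # averaged auto-spectra
            syy = syy + (Y * Y.conj()).real
            sxy = sxy + X * Y.conj()             # averaged cross-spectrum
        f = np.fft.rfftfreq(nperseg, 1 / fs)
        cxy = np.abs(sxy) ** 2 / (sxx * syy)     # 0 <= coherence <= 1
        return f, cxy

    # Two channels sharing a 6 Hz (theta-band) component plus independent noise
    rng = np.random.default_rng(1)
    fs = 250
    t = np.arange(0, 20, 1 / fs)
    theta = np.sin(2 * np.pi * 6 * t)
    ch1 = theta + rng.normal(0, 1, t.size)
    ch2 = theta + rng.normal(0, 1, t.size)

    f, cxy = msc(ch1, ch2, fs)
    band = (f >= 4) & (f <= 7.5)                 # theta band as in the study
    print(f"mean theta-band coherence: {cxy[band].mean():.2f}")
    ```

    Coherence is high only where the channels share phase-locked activity, so a drop in gamma-band coherence of the kind the study reports would appear as a selective decrease in this measure for the 35-47 Hz band.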

  16. Orangutan call communication and the puzzle of speech evolution

    NARCIS (Netherlands)

    Reis E Lameira, A.

    2013-01-01

    Speech is a human hallmark. However, its evolution is little understood. It remains largely unknown which features of the call communication of our closest relatives – great apes – may have constituted speech evolutionary feedstock. In this study, I investigate the extent to which speech building

  17. Direct Lineage Reprogramming Reveals Disease-Specific Phenotypes of Motor Neurons from Human ALS Patients

    Directory of Open Access Journals (Sweden)

    Meng-Lu Liu

    2016-01-01

    Full Text Available Subtype-specific neurons obtained from adult humans will be critical to modeling neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS). Here, we show that adult human skin fibroblasts can be directly and efficiently converted into highly pure motor neurons without passing through an induced pluripotent stem cell stage. These adult human induced motor neurons (hiMNs) exhibit the cytological and electrophysiological features of spinal motor neurons and form functional neuromuscular junctions (NMJs) with skeletal muscles. Importantly, hiMNs converted from ALS patient fibroblasts show disease-specific degeneration manifested through poor survival, soma shrinkage, hypoactivity, and an inability to form NMJs. A chemical screen revealed that the degenerative features of ALS hiMNs can be remarkably rescued by the small molecule kenpaullone. Taken together, our results define a direct and efficient strategy to obtain disease-relevant neuronal subtypes from adult human patients and reveal their promising value in disease modeling and drug identification.

  18. Chronic 'speech catatonia' with constant logorrhea, verbigeration and echolalia successfully treated with lorazepam: a case report.

    Science.gov (United States)

    Lee, Joseph W Y

    2004-12-01

    Logorrhea, verbigeration and echolalia persisted unremittingly for 3 years, with occasional short periods of motoric excitement, in a patient with mild intellectual handicap suffering from chronic schizophrenia. The speech catatonic symptoms, previously refractory to various antipsychotics, responded promptly to lorazepam, a benzodiazepine with documented efficacy in the treatment of acute catatonia but not chronic catatonia. It is suggested that pathways in speech production were selectively involved in the genesis of the chronic speech catatonic syndrome, possibly a rare form of chronic catatonia not previously described.

  19. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  20. Nonverbal oral apraxia in primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Botha, Hugo; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Whitwell, Jennifer L; Josephs, Keith A

    2014-05-13

    The goal of this study was to explore the prevalence of nonverbal oral apraxia (NVOA), its association with other forms of apraxia, and associated imaging findings in patients with primary progressive aphasia (PPA) and progressive apraxia of speech (PAOS). Patients with a degenerative speech or language disorder were prospectively recruited and diagnosed with a subtype of PPA or with PAOS. All patients had comprehensive speech and language examinations. Voxel-based morphometry was performed to determine whether atrophy of a specific region correlated with the presence of NVOA. Eighty-nine patients were identified, of which 34 had PAOS, 9 had agrammatic PPA, 41 had logopenic aphasia, and 5 had semantic dementia. NVOA was very common among patients with PAOS but was found in patients with PPA as well. Several patients exhibited only one of NVOA or apraxia of speech. Among patients with apraxia of speech, the severity of the apraxia of speech was predictive of NVOA, whereas ideomotor apraxia severity was predictive of the presence of NVOA in those without apraxia of speech. Bilateral atrophy of the prefrontal cortex anterior to the premotor area and supplementary motor area was associated with NVOA. Apraxia of speech, NVOA, and ideomotor apraxia are at least partially separable disorders. The association of NVOA and apraxia of speech likely results from the proximity of the area reported here and the premotor area, which has been implicated in apraxia of speech. The association of ideomotor apraxia and NVOA among patients without apraxia of speech could represent disruption of modules shared by nonverbal oral movements and limb movements.

  1. Functional MRI (fMRI) on lesions in and around the motor and the eloquent cortices

    International Nuclear Information System (INIS)

    Hara, Yoshie; Nakamura, Mitsugu; Tamura, Shogo; Tamaki, Norihiko; Kitamura, Junji

    1999-01-01

From the neurosurgical standpoint, with the aims of preoperative localization of the motor and eloquent cortices and postoperative preservation of neurological function, fMRI was carried out for patients with lesions in and around these cortices. Even in cases of mechanical compression or brain edema, the motor and eloquent cortices remain localized on the cerebral gyri, and their identification and preservation during the perioperative period are important for maintaining brain function. Twenty-six preoperative cases and 3 normal healthy subjects were studied: exercise-enhanced fMRI was performed on the 3 healthy subjects, fMRI with motor stimulation in 24 cases, and fMRI with speech stimulation in 4 cases. Signal intensity increased in all cases in response to both stimulations, but decreased in some regions in 8 cases with motor stimulation and in 1 case with speech stimulation. This decrease in signal intensity appears to be a clinically important finding, and its significance will need to be examined in future work. (K.H.)

  2. From prosodic structure to acoustic saliency: A fMRI investigation of speech rate, clarity, and emphasis

    Science.gov (United States)

    Golfinopoulos, Elisa

    Acoustic variability in fluent speech can arise at many stages in speech production planning and execution. For example, at the phonological encoding stage, the grouping of phonemes into syllables determines which segments are coarticulated and, by consequence, segment-level acoustic variation. Likewise phonetic encoding, which determines the spatiotemporal extent of articulatory gestures, will affect the acoustic detail of segments. Functional magnetic resonance imaging (fMRI) was used to measure brain activity of fluent adult speakers in four speaking conditions: fast, normal, clear, and emphatic (or stressed) speech. These speech manner changes typically result in acoustic variations that do not change the lexical or semantic identity of productions but do affect the acoustic saliency of phonemes, syllables and/or words. Acoustic responses recorded inside the scanner were assessed quantitatively using eight acoustic measures and sentence duration was used as a covariate of non-interest in the neuroimaging analysis. Compared to normal speech, emphatic speech was characterized acoustically by a greater difference between stressed and unstressed vowels in intensity, duration, and fundamental frequency, and neurally by increased activity in right middle premotor cortex and supplementary motor area, and bilateral primary sensorimotor cortex. These findings are consistent with right-lateralized motor planning of prosodic variation in emphatic speech. Clear speech involved an increase in average vowel and sentence durations and average vowel spacing, along with increased activity in left middle premotor cortex and bilateral primary sensorimotor cortex. These findings are consistent with an increased reliance on feedforward control, resulting in hyper-articulation, under clear as compared to normal speech. Fast speech was characterized acoustically by reduced sentence duration and average vowel spacing, and neurally by increased activity in left anterior frontal

  3. LSVT LOUD and LSVT BIG: Behavioral Treatment Programs for Speech and Body Movement in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Cynthia Fox

    2012-01-01

Recent advances in neuroscience have suggested that exercise-based behavioral treatments may improve function and possibly slow progression of motor symptoms in individuals with Parkinson disease (PD). The LSVT (Lee Silverman Voice Treatment) Programs for individuals with PD have been developed and researched over the past 20 years, beginning with a focus on the speech motor system (LSVT LOUD) and more recently extended to address limb motor systems (LSVT BIG). The unique aspects of the LSVT Programs include the combination of (a) an exclusive target on increasing amplitude (loudness in the speech motor system; bigger movements in the limb motor system), (b) a focus on sensory recalibration to help patients recognize that movements with increased amplitude are within normal limits, even if they feel "too loud" or "too big," and (c) training self-cueing and attention to action to facilitate long-term maintenance of treatment outcomes. In addition, the intensive mode of delivery is consistent with principles that drive activity-dependent neuroplasticity and motor learning. The purpose of this paper is to provide an integrative discussion of the LSVT Programs, including the rationale for their fundamentals, a summary of efficacy data, and a discussion of limitations and future directions for research.

  4. Probing the corticospinal link between the motor cortex and motoneurones: some neglected aspects of human motor cortical function

    DEFF Research Database (Denmark)

    Petersen, Nicolas Caesar; Butler, Jane E.; Taylor, Janet L.

    2010-01-01

Studies of the discharge of motor units have revealed that the rapidly conducting corticospinal axons (stimulated at higher intensities) contribute to the drive of motoneurones in normal voluntary contractions. There are also major non-linearities generated at a spinal level in the relation between corticospinal output … Studies using magnetic stimulation of the human motor cortex have highlighted the capacity of the cortex to modify its apparent excitability in response to altered afferent inputs, training and various pathologies. Studies using cortical stimulation at 'very low' intensities, which elicit only short-latency suppression …

  5. Speech emotion recognition methods: A literature review

    Science.gov (United States)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

Recently, interest in emotional speech signal research for human-machine interfaces has grown, driven by the availability of high computational capability. Many systems have been proposed in the literature to identify emotional states through speech. Selecting suitable feature sets, designing proper classification methods, and preparing an appropriate dataset are the key issues in speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition with respect to three evaluation parameters (feature set, classification of features, and accuracy). In addition, it evaluates the performance and limitations of available methods, and highlights promising directions for improving speech emotion recognition systems.

  6. Altered resting-state network connectivity in stroke patients with and without apraxia of speech

    OpenAIRE

    New, Anneliese B.; Robin, Donald A.; Parkinson, Amy L.; Duffy, Joseph R.; McNeil, Malcom R.; Piguet, Olivier; Hornberger, Michael; Price, Cathy J.; Eickhoff, Simon B.; Ballard, Kirrie J.

    2015-01-01

    Motor speech disorders, including apraxia of speech (AOS), account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS), inferior frontal gyrus (IFG), and ventral premotor cortex (PM)) in a group of 32 left hemisphere ...

  7. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    Science.gov (United States)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

Measuring the severity of dysarthria by manually evaluating a speaker's speech with standard perception-based assessment methods is a tedious and subjective task. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. Considering two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of a speaker's ability to produce a consistent speech signal for a given word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before the exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a predictor of recognition rate is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square difference, comparing its predicted recognition rates with those predicted by the standard articulatory and intelligibility tests on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were performed on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
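The abstract does not give the formula for Ψ, so the following is only an illustrative sketch of the distance-based idea it describes: within-word consistency (repetitions of the same word should be close in feature space) versus between-word distinction (different words should be far apart). The ratio form, the use of Euclidean distance, and the word-centroid comparison are all assumptions for illustration.

```python
import numpy as np

def clarity_index(word_feats):
    """Toy distance-based clarity score.

    word_feats: dict mapping word -> list of feature vectors (one per
    repetition). Consistency = mean pairwise distance among repetitions
    of the same word (lower is better); distinction = mean distance
    between centroids of different words (higher is better). The ratio
    distinction / (consistency + eps) rises with clearer speech.
    """
    eps = 1e-9
    # consistency: average within-word pairwise distance
    within = []
    for feats in word_feats.values():
        f = np.asarray(feats, dtype=float)
        for i in range(len(f)):
            for j in range(i + 1, len(f)):
                within.append(np.linalg.norm(f[i] - f[j]))
    consistency = float(np.mean(within)) if within else 0.0
    # distinction: average distance between word centroids
    centroids = [np.asarray(f, dtype=float).mean(axis=0)
                 for f in word_feats.values()]
    between = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            between.append(np.linalg.norm(centroids[i] - centroids[j]))
    distinction = float(np.mean(between)) if between else 0.0
    return distinction / (consistency + eps)
```

A speaker whose repetitions cluster tightly and whose words are well separated scores higher than one whose productions overlap across words.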

  9. Innovative Speech Reconstructive Surgery

    OpenAIRE

    Hashem Shemshadi

    2003-01-01

Proper speech functioning in human beings depends on precise coordination and timing balances in a series of complex neuromuscular movements and actions: starting from the prime energy source of expelled air from the respiratory system; delivering that air to trigger the vocal cords; swiftly shaping this phonatory episode into a comprehensible sound through resonance; and finally coordinating all head and neck structures to elicit final speech in ...

  10. Centre-surround organization of fast sensorimotor integration in human motor hand area

    DEFF Research Database (Denmark)

    Dubbioso, Raffaele; Raffin, Estelle; Karabanov, Anke

    2017-01-01

Using the short-latency afferent inhibition (SAI) paradigm, transcranial magnetic stimulation (TMS) of the primary motor hand area (M1HAND) can probe how sensory input from limbs modulates corticomotor output in humans. Here we applied a novel TMS mapping approach to chart the spatial representation … in M1HAND. Like homotopic SAI, heterotopic SAF was somatotopically expressed in M1HAND. Together, the results provide first-time evidence that fast sensorimotor integration involves centre-inhibition and surround-facilitation in human M1HAND.

  11. Interaction of Language Processing and Motor Skill in Children with Specific Language Impairment

    Science.gov (United States)

    DiDonato Brumbach, Andrea C.; Goffman, Lisa

    2014-01-01

    Purpose: To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Method: Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for…

  12. Emotion recognition from speech: tools and challenges

    Science.gov (United States)

    Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.

    2015-05-01

Human emotion recognition from speech is studied frequently because of its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where emotion-related information lies in the speech signal on the other. These diversities motivate our investigation into extracting meta-features using a PCA approach, or a non-adaptive random projection (RP), which significantly reduces the high-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts a score-based LDC classifier. We demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the large gap between the accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be treated as a pure classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets, where subjects attempt to suppress all but one emotion.
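The two dimensionality-reduction routes mentioned in this record, PCA-derived meta-features and a non-adaptive random projection, can be sketched as below. This is a generic illustration only: the paper's actual feature dimensions, meta-feature fusion, and score-based LDC classifier are not reproduced here.

```python
import numpy as np

def pca_meta_features(X, k):
    """Project (n_samples, n_dims) speech feature vectors onto the
    top-k principal components ("meta-features")."""
    Xc = X - X.mean(axis=0)                    # centre each dimension
    # SVD of centred data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # (n_samples, k)

def rp_meta_features(X, k, seed=0):
    """Non-adaptive random projection to k dimensions: a fixed Gaussian
    matrix, scaled so pairwise distances are roughly preserved."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R
```

Unlike PCA, the random projection needs no training data to construct, which is the sense in which it is "non-adaptive".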

  13. The evolution of primary progressive apraxia of speech.

    Science.gov (United States)

    Josephs, Keith A; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Senjem, Matthew L; Gunter, Jeffrey L; Schwarz, Christopher G; Reid, Robert I; Spychalla, Anthony J; Lowe, Val J; Jack, Clifford R; Whitwell, Jennifer L

    2014-10-01

    , compared to controls. Increased rates of brain atrophy over time were observed throughout the premotor cortex, as well as prefrontal cortex, motor cortex, basal ganglia and midbrain, while white matter tract degeneration spread into the splenium of the corpus callosum and motor cortex white matter. Hypometabolism progressed over time in almost all subjects. These findings demonstrate that some subjects with primary progressive apraxia of speech will rapidly evolve and develop a devastating progressive supranuclear palsy-like syndrome ∼ 5 years after onset, perhaps related to progressive involvement of neocortex, basal ganglia and midbrain. These findings help improve our understanding of primary progressive apraxia of speech and provide some important prognostic guidelines. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Source Separation via Spectral Masking for Speech Recognition Systems

    Directory of Open Access Journals (Sweden)

    Gustavo Fernandes Rodrigues

    2012-12-01

In this paper we present an insight into the use of spectral masking techniques in the time-frequency domain as a preprocessing step for speech recognition. Speech recognition systems have their performance negatively affected in noisy environments or in the presence of competing speech signals. The limits of these masking techniques for different signal-to-noise ratios are discussed. We show the robustness of spectral masking against four types of noise: white, pink, brown, and human speech (babble) noise. The main contribution of this work is to analyze the performance limits of recognition systems using spectral masking. We obtain an increase of 18% in the speech hit rate when the speech signals were corrupted by other speech signals or babble noise, at signal-to-noise ratios of approximately 1, 10, and 20 dB. On the other hand, applying ideal binary masks to mixtures corrupted by white, pink, and brown noise results in an average increase of 9% in the speech hit rate at the same signal-to-noise ratios. The experimental results suggest that spectral masking techniques are more suitable for babble noise, which is produced by human speech, than for white, pink, or brown noise.
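An ideal binary mask of the kind evaluated in this record keeps a time-frequency cell when the local speech-to-noise ratio exceeds a threshold and zeroes it otherwise. A minimal sketch follows; the 0 dB local criterion and the use of magnitude spectrograms are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def ideal_binary_mask(speech_spec, noise_spec, lc_db=0.0):
    """Ideal binary mask over magnitude spectrograms: 1.0 where the
    local SNR (in dB) exceeds lc_db, else 0.0. Requires access to the
    clean speech and noise separately, hence "ideal"."""
    eps = 1e-12
    snr_db = 20.0 * np.log10((np.abs(speech_spec) + eps)
                             / (np.abs(noise_spec) + eps))
    return (snr_db > lc_db).astype(float)

def apply_mask(mixture_spec, mask):
    """Element-wise masking of the mixture spectrogram before
    resynthesis or recognition."""
    return mixture_spec * mask
```

In practice the ideal mask gives an upper bound on what a real mask estimator could achieve, which is how such studies use it to probe recognition limits.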

  15. Transcranial static magnetic field stimulation of the human motor cortex

    Science.gov (United States)

    Oliviero, Antonio; Mordillo-Mateos, Laura; Arias, Pablo; Panyavin, Ivan; Foffani, Guglielmo; Aguilar, Juan

    2011-01-01

The aim of the present study was to investigate in healthy humans the possibility of a non-invasive modulation of motor cortex excitability by the application of static magnetic fields through the scalp. Static magnetic fields were obtained by using cylindrical NdFeB magnets. We performed four sets of experiments. In Experiment 1, we recorded motor potentials evoked by single-pulse transcranial magnetic stimulation (TMS) of the motor cortex before and after 10 min of transcranial static magnetic field stimulation (tSMS) in conscious subjects. We observed an average reduction of motor cortex excitability of up to 25%, as revealed by TMS, which lasted for several minutes after the end of tSMS, and was dose dependent (intensity of the magnetic field) but not polarity dependent. In Experiment 2, we confirmed the reduction of motor cortex excitability induced by tSMS using a double-blind sham-controlled design. In Experiment 3, we investigated the duration of tSMS that was necessary to modulate motor cortex excitability. We found that 10 min of tSMS (compared to 1 min and 5 min) were necessary to induce significant effects. In Experiment 4, we used transcranial electric stimulation (TES) to establish that the tSMS-induced reduction of motor cortex excitability was not due to corticospinal axon and/or spinal excitability, but specifically involved intracortical networks. These results suggest that tSMS using small static magnets may be a promising tool to modulate cerebral excitability in a non-invasive, painless, and reversible way. PMID:21807616

  16. Pseudobulbar dysarthria in the initial stage of motor neuron disease with dementia: a clinicopathological report of two autopsied cases.

    Science.gov (United States)

    Ishihara, Kenji; Araki, Shigeo; Ihori, Nami; Suzuki, Yoshio; Shiota, Jun-ichi; Arai, Nobutaka; Nakano, Imaharu; Kawamura, Mitsuru

    2013-01-01

    We retrospectively analyzed the clinical features of two cases of neurodegenerative disease, whose initial symptoms were motor speech disorder and dementia, brought to autopsy. We compared the distributions of pathological findings with the clinical features. The main symptom of speech disorder was dysarthria, involving low pitch, slow rate, hypernasality and hoarseness. Other than these findings, effortful speech, sound prolongation and initial difficulty were observed. Moreover, repetition of multisyllables was severely impaired compared to monosyllables. Repetition and comprehension of words and sentences were not impaired. Neither atrophy nor fasciculation of the tongue was observed. Both cases showed rapid progression to mutism within a few years. Neuropathologically, frontal lobe degeneration including the precentral gyrus was observed. The bilateral pyramidal tracts also showed severe degeneration. However, the nucleus of the hypoglossal nerve showed only mild degeneration. These findings suggest upper motor neuron dominant motor neuron disease with dementia. We believe the results indicate a subgroup of motor neuron disease with dementia whose initial symptoms involve pseudobulbar palsy and dementia, and which shows rapid progression to mutism. Copyright © 2013 S. Karger AG, Basel.

  17. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  18. Apraxia of Speech and Phonological Errors in the Diagnosis of Nonfluent/Agrammatic and Logopenic Variants of Primary Progressive Aphasia

    Science.gov (United States)

    Croot, Karen; Ballard, Kirrie; Leyton, Cristian E.; Hodges, John R.

    2012-01-01

    Purpose: The International Consensus Criteria for the diagnosis of primary progressive aphasia (PPA; Gorno-Tempini et al., 2011) propose apraxia of speech (AOS) as 1 of 2 core features of nonfluent/agrammatic PPA and propose phonological errors or absence of motor speech disorder as features of logopenic PPA. We investigated the sensitivity and…

  19. Social interaction enhances motor resonance for observed human actions.

    Science.gov (United States)

    Hogeveen, Jeremy; Obhi, Sukhvinder S

    2012-04-25

    Understanding the neural basis of social behavior has become an important goal for cognitive neuroscience and a key aim is to link neural processes observed in the laboratory to more naturalistic social behaviors in real-world contexts. Although it is accepted that mirror mechanisms contribute to the occurrence of motor resonance (MR) and are common to action execution, observation, and imitation, questions remain about mirror (and MR) involvement in real social behavior and in processing nonhuman actions. To determine whether social interaction primes the MR system, groups of participants engaged or did not engage in a social interaction before observing human or robotic actions. During observation, MR was assessed via motor-evoked potentials elicited with transcranial magnetic stimulation. Compared with participants who did not engage in a prior social interaction, participants who engaged in the social interaction showed a significant increase in MR for human actions. In contrast, social interaction did not increase MR for robot actions. Thus, naturalistic social interaction and laboratory action observation tasks appear to involve common MR mechanisms, and recent experience tunes the system to particular agent types.

  20. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  1. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  2. IEP goals for school-age children with speech sound disorders.

    Science.gov (United States)

    Farquharson, Kelly; Tambyraja, Sherine R; Justice, Laura M; Redle, Erin E

    2014-01-01

    The purpose of the current study was to describe the current state of practice for writing Individualized Education Program (IEP) goals for children with speech sound disorders (SSDs). IEP goals for 146 children receiving services for SSDs within public school systems across two states were coded for their dominant theoretical framework and overall quality. A dichotomous scheme was used for theoretical framework coding: cognitive-linguistic or sensory-motor. Goal quality was determined by examining 7 specific indicators outlined by an empirically tested rating tool. In total, 147 long-term and 490 short-term goals were coded. The results revealed no dominant theoretical framework for long-term goals, whereas short-term goals largely reflected a sensory-motor framework. In terms of quality, the majority of speech production goals were functional and generalizable in nature, but were not able to be easily targeted during common daily tasks or by other members of the IEP team. Short-term goals were consistently rated higher in quality domains when compared to long-term goals. The current state of practice for writing IEP goals for children with SSDs indicates that theoretical framework may be eclectic in nature and likely written to support the individual needs of children with speech sound disorders. Further investigation is warranted to determine the relations between goal quality and child outcomes. (1) Identify two predominant theoretical frameworks and discuss how they apply to IEP goal writing. (2) Discuss quality indicators as they relate to IEP goals for children with speech sound disorders. (3) Discuss the relationship between long-term goals level of quality and related theoretical frameworks. (4) Identify the areas in which business-as-usual IEP goals exhibit strong quality.

  3. On the nature and evolution of the neural bases of human language

    Science.gov (United States)

    Lieberman, Philip

    2002-01-01

The traditional theory equating the brain bases of language with Broca's and Wernicke's neocortical areas is wrong. Neural circuits linking activity in anatomically segregated populations of neurons in subcortical structures and the neocortex throughout the human brain regulate complex behaviors such as walking, talking, and comprehending the meaning of sentences. When we hear or read a word, neural structures involved in the perception or real-world associations of the word are activated as well as posterior cortical regions adjacent to Wernicke's area. Many areas of the neocortex and subcortical structures support the cortical-striatal-cortical circuits that confer complex syntactic ability, speech production, and a large vocabulary. However, many of these structures also form part of the neural circuits regulating other aspects of behavior. For example, the basal ganglia, which regulate motor control, are also crucial elements in the circuits that confer human linguistic ability and abstract reasoning. The cerebellum, traditionally associated with motor control, is active in motor learning. The basal ganglia are also key elements in reward-based learning. Data from studies of Broca's aphasia, Parkinson's disease, hypoxia, focal brain damage, and a genetically transmitted brain anomaly (the putative "language gene," family KE), and from comparative studies of the brains and behavior of other species, demonstrate that the basal ganglia sequence the discrete elements that constitute a complete motor act, syntactic process, or thought process. Imaging studies of intact human subjects and electrophysiologic and tracer studies of the brains and behavior of other species confirm these findings. As Dobzhansky put it, "Nothing in biology makes sense except in the light of evolution" (cited in Mayr, 1982). That applies with as much force to the human brain and the neural bases of language as it does to the human foot or jaw.
The converse follows: the mark of evolution on

  4. Relations between segmental and motor variability in prosodically complex nonword sequences.

    Science.gov (United States)

    Goffman, Lisa; Gerken, Louann; Lucchesi, Julie

    2007-04-01

    To assess how prosodic prominence and hierarchical foot structure influence segmental and articulatory aspects of speech production, specifically segmental accuracy and variability, and oral movement trajectory variability. Thirty individuals participated: 10 young adults, 10 children who are normally developing, and 10 children diagnosed with specific language impairment. Segmental error and segmental variability and movement trajectory variability were compared in low and high prosodic prominence conditions (i.e., strong and weak syllables) and in different prosodic foot structures. Between-participants findings were that both groups of children showed more segmental error and segmental variability and more movement trajectory variability than did adults. A similar within-participant pattern of results was observed for all 3 groups. Prosodic prominence influenced both segmental and motor levels of analysis, with weak syllables produced less accurately and with more lip and jaw movement trajectory variability than strong syllables. However, hierarchical foot structure affected segmental but not motor measures of speech production accuracy and variability. Motor and segmental variables were not consistently aligned. This pattern of results has clinical implications because inferences about motor variability may not directly follow from observations of segmental variability.

  5. A Nationwide Survey of Nonspeech Oral Motor Exercise Use: Implications for Evidence-Based Practice

    Science.gov (United States)

    Lof, Gregory L.; Watson, Maggie M.

    2008-01-01

    Purpose: A nationwide survey was conducted to determine if speech-language pathologists (SLPs) use nonspeech oral motor exercises (NSOMEs) to address children's speech sound problems. For those SLPs who used NSOMEs, the survey also identified (a) the types of NSOMEs used by the SLPs, (b) the SLPs' underlying beliefs about why they use NSOMEs, (c)…

  6. Distributed Speech Enhancement in Wireless Acoustic Sensor Networks

    NARCIS (Netherlands)

    Zeng, Y.

    2015-01-01

    In digital speech communication applications like hands-free mobile telephony, hearing aids and human-to-computer communication systems, the recorded speech signals are typically corrupted by background noise. As a result, their quality and intelligibility can get severely degraded. Traditional

  7. Electronic Control System Of Home Appliances Using Speech Command Words

    Directory of Open Access Journals (Sweden)

    Aye Min Soe

    2015-06-01

    Full Text Available The main idea of this paper is to develop a speech recognition system. By using this system, smart home appliances are controlled by spoken words. The spoken words chosen for recognition are "Fan On", "Fan Off", "Light On", "Light Off", "TV On" and "TV Off". The input of the system takes speech signals to control home appliances. The proposed system has two main parts: speech recognition and an electronic control system for the smart home appliances. Speech recognition is implemented in the MATLAB environment and contains two main modules: feature extraction and feature matching. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction. A Vector Quantization (VQ) approach using a clustering algorithm is applied for feature matching. In the electrical home appliance control system, an RF module carries the command signal from the PC to the microcontroller wirelessly. The microcontroller is connected to a driver circuit for the relay and motor. The input commands are recognized very well, and the system performs well in controlling home appliances by spoken words.
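
    The MFCC-plus-vector-quantization pipeline described in this record is a classic template-matching design. The sketch below is a minimal illustration only, assuming MFCC frames have already been extracted; the function names (`train_codebook`, `distortion`, `recognize`) are ours, not the paper's MATLAB code. It builds one VQ codebook per command word and recognizes an utterance by the codebook with the lowest average quantization distortion:

```python
import numpy as np

def train_codebook(frames, k=8, iters=20, seed=0):
    """Build a VQ codebook (k centroids) from MFCC frames via Lloyd's k-means."""
    rng = np.random.default_rng(seed)
    centroids = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid, then re-estimate centroids.
        d = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = frames[labels == j].mean(axis=0)
    return centroids

def distortion(frames, codebook):
    """Average distance from each frame to its nearest codeword."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def recognize(frames, codebooks):
    """Return the command word whose codebook best matches the utterance."""
    return min(codebooks, key=lambda word: distortion(frames, codebooks[word]))
```

    In use, one codebook would be trained per command ("Fan On", "Fan Off", and so on), and an incoming utterance's MFCC frames would be scored against each codebook before the winning command is relayed to the microcontroller.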

  8. Inconsistency of speech in children with childhood apraxia of speech, phonological disorders, and typical speech

    Science.gov (United States)

    Iuzzini, Jenya

    There is a lack of agreement on the features used to differentiate Childhood Apraxia of Speech (CAS) from Phonological Disorders (PD). One criterion which has gained consensus is lexical inconsistency of speech (ASHA, 2007); however, no accepted measure of this feature has been defined. Although lexical assessment provides information about consistency of an item across repeated trials, it may not capture the magnitude of inconsistency within an item. In contrast, segmental analysis provides more extensive information about consistency of phoneme usage across multiple contexts and word-positions. The current research compared segmental and lexical inconsistency metrics in preschool-aged children with PD, CAS, and typical development (TD) to determine how inconsistency varies with age in typical and disordered speakers, and whether CAS and PD were differentiated equally well by both assessment levels. Whereas lexical and segmental analyses may be influenced by listener characteristics or speaker intelligibility, the acoustic signal is less vulnerable to these factors. In addition, the acoustic signal may reveal information which is not evident in the perceptual signal. A second focus of the current research was motivated by Blumstein et al.'s (1980) classic study on voice onset time (VOT) in adults with acquired apraxia of speech (AOS), which demonstrated a motor impairment underlying AOS. In the current study, VOT analyses were conducted to determine how age and group relate to the voicing distribution for bilabial and alveolar plosives. Findings revealed that 3-year-olds evidenced significantly higher inconsistency than 5-year-olds; segmental inconsistency approached 0% in 5-year-olds with TD, whereas it persisted in children with PD and CAS, suggesting that for children in this age range, inconsistency is a feature of speech disorder rather than typical development (Holm et al., 2007). Likewise, whereas segmental and lexical inconsistency were

  9. Status report on speech research. A report on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications

    Science.gov (United States)

    Liberman, A. M.

    1985-10-01

    This interim status report on speech research discusses the following topics: On Vagueness and Fictions as Cornerstones of a Theory of Perceiving and Acting: A Comment on Walter (1983); The Informational Support for Upright Stance; Determining the Extent of Coarticulation-effects of Experimental Design; The Roles of Phoneme Frequency, Similarity, and Availability in the Experimental Elicitation of Speech Errors; On Learning to Speak; The Motor Theory of Speech Perception Revised; Linguistic and Acoustic Correlates of the Perceptual Structure Found in an Individual Differences Scaling Study of Vowels; Perceptual Coherence of Speech: Stability of Silence-cued Stop Consonants; Development of the Speech Perceptuomotor System; Dependence of Reading on Orthography-Investigations in Serbo-Croatian; The Relationship between Knowledge of Derivational Morphology and Spelling Ability in Fourth, Sixth, and Eighth Graders; Relations among Regular and Irregular, Morphologically-Related Words in the Lexicon as Revealed by Repetition Priming; Grammatical Priming of Inflected Nouns by the Gender of Possessive Adjectives; Grammatical Priming of Inflected Nouns by Inflected Adjectives; Deaf Signers and Serial Recall in the Visual Modality-Memory for Signs, Fingerspelling, and Print; Did Orthographies Evolve?; The Development of Children's Sensitivity to Factors Influencing Vowel Reading.

  10. Recapitulation of spinal motor neuron-specific disease phenotypes in a human cell model of spinal muscular atrophy

    Institute of Scientific and Technical Information of China (English)

    Zhi-Bo Wang; Xiaoqing Zhang; Xue-Jun Li

    2013-01-01

    Establishing human cell models of spinal muscular atrophy (SMA) to mimic motor neuron-specific phenotypes holds the key to understanding the pathogenesis of this devastating disease. Here, we developed a closely representative cell model of SMA by knocking down the disease-determining gene, survival motor neuron (SMN), in human embryonic stem cells (hESCs). Our study with this cell model demonstrated that knocking down of SMN does not interfere with neural induction or the initial specification of spinal motor neurons. Notably, the axonal outgrowth of spinal motor neurons was significantly impaired and these disease-mimicking neurons subsequently degenerated. Furthermore, these disease phenotypes were caused by SMN-full length (SMN-FL) but not SMN-Δ7 (lacking exon 7) knockdown, and were specific to spinal motor neurons. Restoring the expression of SMN-FL completely ameliorated all of the disease phenotypes, including specific axonal defects and motor neuron loss. Finally, knockdown of SMN-FL led to excessive mitochondrial oxidative stress in human motor neuron progenitors. The involvement of oxidative stress in the degeneration of spinal motor neurons in the SMA cell model was further confirmed by the administration of N-acetylcysteine, a potent antioxidant, which prevented disease-related apoptosis and subsequent motor neuron death. Thus, we report here the successful establishment of an hESC-based SMA model, which exhibits disease gene isoform specificity, cell type specificity, and phenotype reversibility. Our model provides a unique paradigm for studying how motor neurons specifically degenerate and highlights the potential importance of antioxidants for the treatment of SMA.

  11. Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.

    Science.gov (United States)

    Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc

    2017-09-01

    Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.

  12. The functional anatomy of speech perception: Dorsal and ventral processing pathways

    Science.gov (United States)

    Hickok, Gregory

    2003-04-01

    Drawing on recent developments in the cortical organization of vision, and on data from a variety of sources, Hickok and Poeppel (2000) have proposed a new model of the functional anatomy of speech perception. The model posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, involved in mapping sound onto meaning, and a dorsal stream, involved in mapping sound onto articulatory-based representations. The ventral stream projects ventrolaterally toward inferior posterior temporal cortex which serves as an interface between sound and meaning. The dorsal stream projects dorsoposteriorly toward the parietal lobe and ultimately to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the dorsal stream represents a tight connection between speech perception and speech production, it is not a critical component of the speech perception process under ecologically natural listening conditions. Some degree of bi-directionality in both the dorsal and ventral pathways is also proposed. A variety of recent empirical tests of this model have provided further support for the proposal.

  13. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  14. THE ROLE OF THE SPEECH THERAPIST AND HIS INFLUENCE IN SPEECH DEVELOPMENT OF CHILDREN WITH CENTRAL DEFECTS AND INSTRUCTIVE AND ADVISORY WORK OF THE PARENT

    Directory of Open Access Journals (Sweden)

    Violeta TORTEVSKA

    1997-06-01

    Full Text Available The modern way of living, in which communication becomes a basic upbringing factor and a regulator of relations, isolates children with severe individual, family, educational and social problems. Speech and language disorders are the most prominent symptoms pointing to a complex of deficits in communicative activity, reduced cognitive functions and cerebral dysfunctions. The modern conception in the field of rehabilitation calls for the full engagement of the child's closest environment, and especially the parents. The study covers the work of the speech therapist with children diagnosed with delayed speech development (alalia and developmental dysphasia) at the hearing, speech and voice rehabilitation institute in Skopje, and the therapist's role in introducing parents to the right approach and the systematic conduct of the rehabilitation procedures, especially those stimulating motor and speech development. The speech therapist's task is to find ways and apply means by which children with central damage can build their speech and language system, and to help parents, through instructive and advisory work, to understand the phases and stages of that system. The conclusion is that early treatment procedures for children with central damage naturally follow from the differences in their early development. Suggestions about what should be supplemented, how much and how, define the framework of early therapeutic access.

  15. Recognizing Stress Using Semantics and Modulation of Speech and Gestures

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2016-01-01

    This paper investigates how speech and gestures convey stress, and how they can be used for automatic stress recognition. As a first step, we look into how humans use speech and gestures to convey stress. In particular, for both speech and gestures, we distinguish between stress conveyed by the

  16. Detection of target phonemes in spontaneous and read speech.

    Science.gov (United States)

    Mehta, G; Cutler, A

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalise to the recognition of spontaneous speech. In the present study listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognising speech.

  17. Peripheral facial palsy: Speech, communication and oral motor function.

    Science.gov (United States)

    Movérare, T; Lohmander, A; Hultcrantz, M; Sjögreen, L

    2017-02-01

    The aim of the present study was to examine the effect of acquired unilateral peripheral facial palsy on speech, communication and oral functions and to study the relationship between the degree of facial palsy and articulation, saliva control, eating ability and lip force. In this descriptive study, 27 patients (15 men and 12 women, mean age 48 years) with unilateral peripheral facial palsy were included if they were graded under 70 on the Sunnybrook Facial Grading System. The assessment was carried out in connection with customary visits to the ENT Clinic and comprised lip force, articulation and intelligibility, together with perceived ability to communicate and ability to eat and control saliva conducted through self-response questionnaires. The patients with unilateral facial palsy had significantly lower lip force, poorer articulation and ability to eat and control saliva compared with reference data in healthy populations. The degree of facial palsy correlated significantly with lip force but not with articulation, intelligibility, perceived communication ability or reported ability to eat and control saliva. Acquired peripheral facial palsy may affect communication and the ability to eat and control saliva. Physicians should be aware that there is no direct correlation between the degree of facial palsy and the possible effect on communication, eating ability and saliva control. Physicians are therefore recommended to ask specific questions relating to problems with these functions during customary medical visits and offer possible intervention by a speech-language pathologist or a physiotherapist. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  18. Cortical activity patterns predict robust speech discrimination ability in noise

    Science.gov (United States)

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
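
    The "novel neural classifier" is described only at a high level, but its core, comparing a single trial's spatiotemporal activity pattern against the average pattern evoked by each known sound, can be sketched as a nearest-template classifier. This is a simplified reading under our own assumptions (the function names are ours, and unlike the paper's classifier, which is not given the stimulus onset time, this sketch assumes the patterns are already time-aligned):

```python
import numpy as np

def build_templates(trials_by_sound):
    """Average spatiotemporal (channels x time) activity across trials per sound."""
    return {sound: np.mean(trials, axis=0) for sound, trials in trials_by_sound.items()}

def classify(trial, templates):
    """Assign a single-trial pattern to the sound whose average template it most
    resembles (here: smallest Euclidean distance between the full patterns)."""
    return min(templates, key=lambda sound: np.linalg.norm(trial - templates[sound]))
```

    The same comparison could be run on noise-degraded trials to ask, as the study does, whether template matching on relative spike timing still tracks behavioral discrimination.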

  19. The effects of Thalamic Deep Brain Stimulation on speech dynamics in patients with Essential Tremor: An articulographic study.

    Directory of Open Access Journals (Sweden)

    Doris Mücke

    Full Text Available Acoustic studies have revealed that patients with Essential Tremor treated with thalamic Deep Brain Stimulation (DBS) may suffer from speech deterioration in terms of imprecise oral articulation and reduced voicing control. Based on the acoustic signal one cannot infer, however, whether this deterioration is due to a general slowing down of the speech motor system (e.g., a target undershoot of a desired articulatory goal resulting from being too slow) or to disturbed coordination (e.g., a target undershoot caused by problems with the relative phasing of articulatory movements). To elucidate this issue further, we here investigated both acoustics and articulatory patterns of the labial and lingual system using Electromagnetic Articulography (EMA) in twelve Essential Tremor patients treated with thalamic DBS and twelve age- and sex-matched controls. By comparing patients with activated (DBS-ON) and inactivated stimulation (DBS-OFF) with control speakers, we show that critical changes in speech dynamics occur on two levels: with inactivated stimulation (DBS-OFF), patients showed coordination problems of the labial and lingual system in terms of articulatory imprecision and slowness. These effects of articulatory discoordination worsened under activated stimulation, accompanied by an additional overall slowing down of the speech motor system. This leads to a poor performance of syllables on the acoustic surface, reflecting an aggravation of pre-existing cerebellar deficits and/or the affection of the upper motor fibers of the internal capsule.

  20. Auditory cortex processes variation in our own speech.

    Directory of Open Access Journals (Sweden)

    Kevin R Sitek

    Full Text Available As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered "ah" and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production.

  1. Auditory Cortex Processes Variation in Our Own Speech

    Science.gov (United States)

    Sitek, Kevin R.; Mathalon, Daniel H.; Roach, Brian J.; Houde, John F.; Niziolek, Caroline A.; Ford, Judith M.

    2013-01-01

    As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production. PMID:24349399

  2. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  3. Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology

    Science.gov (United States)

    2015-01-01

    Inner speech—also known as covert speech or verbal thinking—has been implicated in theories of cognitive development, speech monitoring, executive function, and psychopathology. Despite a growing body of knowledge on its phenomenology, development, and function, approaches to the scientific study of inner speech have remained diffuse and largely unintegrated. This review examines prominent theoretical approaches to inner speech and methodological challenges in its study, before reviewing current evidence on inner speech in children and adults from both typical and atypical populations. We conclude by considering prospects for an integrated cognitive science of inner speech, and present a multicomponent model of the phenomenon informed by developmental, cognitive, and psycholinguistic considerations. Despite its variability among individuals and across the life span, inner speech appears to perform significant functions in human cognition, which in some cases reflect its developmental origins and its sharing of resources with other cognitive processes. PMID:26011789

  4. Abnormal laughter-like vocalisations replacing speech in primary progressive aphasia

    Science.gov (United States)

    Rohrer, Jonathan D.; Warren, Jason D.; Rossor, Martin N.

    2009-01-01

    We describe ten patients with a clinical diagnosis of primary progressive aphasia (PPA) (pathologically confirmed in three cases) who developed abnormal laughter-like vocalisations in the context of progressive speech output impairment leading to mutism. Failure of speech output was accompanied by increasing frequency of the abnormal vocalisations until ultimately they constituted the patient's only extended utterance. The laughter-like vocalisations did not show contextual sensitivity but occurred as an automatic vocal output that replaced speech. Acoustic analysis of the vocalisations in two patients revealed abnormal motor features including variable note duration and inter-note interval, loss of temporal symmetry of laugh notes and loss of the normal decrescendo. Abnormal laughter-like vocalisations may be a hallmark of a subgroup in the PPA spectrum with impaired control and production of nonverbal vocal behaviour due to disruption of fronto-temporal networks mediating vocalisation. PMID:19435636

  5. Performance Assessment of Dynaspeak Speech Recognition System on Inflight Databases

    National Research Council Canada - National Science Library

    Barry, Timothy

    2004-01-01

    .... To aid in the assessment of various commercially available speech recognition systems, several aircraft speech databases have been developed at the Air Force Research Laboratory's Human Effectiveness Directorate...

  6. Logopenic and nonfluent variants of primary progressive aphasia are differentiated by acoustic measures of speech production.

    Directory of Open Access Journals (Sweden)

    Kirrie J Ballard

    Full Text Available Differentiation of logopenic (lvPPA) and nonfluent/agrammatic (nfvPPA) variants of Primary Progressive Aphasia is important yet remains challenging since it hinges on expert-based evaluation of speech and language production. In this study acoustic measures of speech in conjunction with voxel-based morphometry were used to determine the success of the measures as an adjunct to diagnosis and to explore the neural basis of apraxia of speech in nfvPPA. Forty-one patients (21 lvPPA, 20 nfvPPA) were recruited from a consecutive sample with suspected frontotemporal dementia. Patients were diagnosed using the current gold standard of expert perceptual judgment, based on presence/absence of particular speech features during speaking tasks. Seventeen healthy age-matched adults served as controls. MRI scans were available for 11 control and 37 PPA cases; 23 of the PPA cases underwent amyloid ligand PET imaging. Measures, corresponding to perceptual features of apraxia of speech, were periods of silence during reading and relative vowel duration and intensity in polysyllable word repetition. Discriminant function analyses revealed that a measure of relative vowel duration differentiated nfvPPA cases from both control and lvPPA cases (r² = 0.47), with 88% agreement with expert judgment of presence of apraxia of speech in nfvPPA cases. VBM analysis showed that relative vowel duration covaried with grey matter intensity in areas critical for speech motor planning and programming: precentral gyrus, supplementary motor area and inferior frontal gyrus bilaterally, only affected in the nfvPPA group. This bilateral involvement of frontal speech networks in nfvPPA potentially affects access to compensatory mechanisms involving right hemisphere homologues. Measures of silences during reading also discriminated the PPA and control groups, but did not increase predictive accuracy. Findings suggest that a measure of relative vowel duration from a polysyllable word
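
    As a rough illustration of the kind of acoustic measure involved, relative vowel duration can be computed from annotated segment durations. This is a hedged sketch under our own assumptions (a simple vowel-to-total duration ratio over one word's segments), which may differ from the study's exact operational definition:

```python
def relative_vowel_duration(segments):
    """Ratio of total vowel duration to total word duration, given a list of
    (label, duration_seconds, is_vowel) annotations for one spoken word."""
    total = sum(dur for _, dur, _ in segments)
    vowel = sum(dur for _, dur, is_vowel in segments if is_vowel)
    return vowel / total
```

    A discriminant analysis would then take such per-word measures, pooled over polysyllable repetitions, as predictors of group membership.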

  7. Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data.

    Science.gov (United States)

    Gow, David W; Segawa, Jennifer A

    2009-02-01

    The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causality analyses of high spatiotemporal resolution neural activation data, derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
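
    Granger causality itself reduces to comparing nested autoregressive models: past values of a signal x "Granger-cause" y if adding them improves the prediction of y beyond y's own past. The minimal sketch below illustrates only that comparison (it is not the study's actual spectral/ROI pipeline, and the function name is ours):

```python
import numpy as np

def granger_improvement(x, y, lag=2):
    """Fraction by which adding lagged values of x reduces the residual sum of
    squares of an AR(lag) model of y -- a minimal Granger-style comparison."""
    n = len(y)
    Y = y[lag:]
    # Restricted model: y's own past; full model: y's past plus x's past.
    restricted = np.column_stack([y[lag - i - 1:n - i - 1] for i in range(lag)])
    full = np.column_stack([restricted] + [x[lag - i - 1:n - i - 1] for i in range(lag)])

    def rss(X):
        design = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        resid = Y - design @ beta
        return resid @ resid

    r_rest, r_full = rss(restricted), rss(full)
    return (r_rest - r_full) / r_rest
```

    A large improvement in one direction and a negligible one in the other is the signature of a directed (causal, in Granger's sense) influence between two regions' time series.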

  8. Non-right handed primary progressive apraxia of speech.

    Science.gov (United States)

    Botha, Hugo; Duffy, Joseph R; Whitwell, Jennifer L; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Tosakulwong, Nirubol; Senjem, Matthew L; Knopman, David S; Petersen, Ronald C; Jack, Clifford R; Lowe, Val J; Josephs, Keith A

    2018-07-15

    In recent years a large and growing body of research has greatly advanced our understanding of primary progressive apraxia of speech. Handedness has emerged as one potential marker of selective vulnerability in degenerative diseases. This study evaluated the clinical and imaging findings in non-right handed compared to right handed participants in a prospective cohort diagnosed with primary progressive apraxia of speech. A total of 30 participants were included. Compared to the expected rate in the population, there was a higher prevalence of non-right handedness among those with primary progressive apraxia of speech (6/30, 20%). Small group numbers meant that these results did not reach statistical significance, although the effect sizes were moderate-to-large. There were no clinical differences between right handed and non-right handed participants. Bilateral hypometabolism was seen in primary progressive apraxia of speech compared to controls, with non-right handed participants showing more right hemispheric involvement. This is the first report of a higher rate of non-right handedness in participants with isolated apraxia of speech, which may point to an increased vulnerability for developing this disorder among non-right handed participants. This challenges prior hypotheses about a relative protective effect of non-right handedness for tau-related neurodegeneration. We discuss potential avenues for future research to investigate the relationship between handedness and motor disorders more generally. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve eCorbeil

    2013-06-01

    Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  10. Paving the Way for Speech: Voice-Training-Induced Plasticity in Chronic Aphasia and Apraxia of Speech—Three Single Cases

    Directory of Open Access Journals (Sweden)

    Monika Jungblut

    2014-01-01

    Difficulties with temporal coordination or sequencing of speech movements are frequently reported in aphasia patients with concomitant apraxia of speech (AOS). Our major objective was to investigate the effects of specific rhythmic-melodic voice training on the brain activation of such patients. Three patients with severe chronic nonfluent aphasia and AOS were included in this study. Before and after therapy, patients underwent the same fMRI procedure as 30 healthy control subjects in our prestudy, which investigated the neural substrates of sung vowel changes in untrained rhythm sequences. A main finding was that post- minus pretreatment imaging data yielded significant perilesional activations in all patients, for example in the left superior temporal gyrus, whereas the reverse subtraction revealed either no significant activation or right-hemisphere activation. Likewise, pre- and posttreatment assessments of patients' vocal rhythm production, language, and speech motor performance yielded significant improvements for all patients. Our results suggest that changes in brain activation due to the applied training might indicate specific processes of reorganization, for example improved temporal sequencing of sublexical speech components. In this context, a training that focuses on rhythmic singing with complexity levels of differing motor and cognitive demands seems to help pave the way for speech.

  11. A virtual trainer concept for robot-assisted human motor learning in rowing

    Directory of Open Access Journals (Sweden)

    Baumgartner L.

    2011-12-01

    Maintaining a high attention level while observing multiple physiological and biomechanical variables simultaneously and with high precision is very challenging for human trainers. Concurrent augmented feedback, which is suggested to enhance motor learning in complex motor tasks, can also hardly be provided by a human trainer. Thus, in this paper, a concept for a virtual trainer is presented that may overcome the limits of a human trainer. The intended virtual trainer will be implemented in a CAVE providing auditory, visual and haptic cues. As a first application, the virtual trainer will be used in a realistic scenario for sweep rowing. To provide individual feedback to each rower, the virtual trainer quantifies errors and provides concurrent auditory, visual, and haptic feedback. The concurrent feedback will be adapted according to the actual performance, individual maximal rowing velocity, and the athlete's individual perception.
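
    One way to realize the error-driven adaptation described above is to scale feedback intensity by the deviation from a reference movement, normalised by the rower's individual maximum. The linear mapping and every name below are assumptions for illustration, not the paper's actual design.

```python
def feedback_gain(measured, reference, athlete_max, floor=0.0, ceiling=1.0):
    """Feedback intensity as clamped, normalised deviation from the reference."""
    error = abs(measured - reference) / athlete_max
    return max(floor, min(ceiling, error))

# e.g. boat velocity 3.2 m/s against a 4.0 m/s target, athlete maximum 5.0 m/s
gain = feedback_gain(3.2, 4.0, 5.0)
```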

  12. Human rights or security? Positions on asylum in European Parliament speeches

    DEFF Research Database (Denmark)

    Frid-Nielsen, Snorre Sylvester

    2018-01-01

    This study examines speeches in the European Parliament relating to asylum. Conceptually, it tests hypotheses concerning the relation between national parties and Members of European Parliament (MEPs). The computer-based content analysis method Wordfish is used to examine 876 speeches from 2004-2...

  13. Respiration-related discharge of hyoglossus muscle motor units in the rat.

    Science.gov (United States)

    Powell, Gregory L; Rice, Amber; Bennett-Cross, Seres J; Fregosi, Ralph F

    2014-01-01

    Although respiratory muscle motor units have been studied during natural breathing, simultaneous measures of muscle force have never been obtained. Tongue retractor muscles, such as the hyoglossus (HG), play an important role in swallowing, licking, chewing, breathing, and, in humans, speech. The HG is phasically recruited during the inspiratory phase of the respiratory cycle. Moreover, in urethane anesthetized rats the drive to the HG waxes and wanes spontaneously, providing a unique opportunity to study motor unit firing patterns as the muscle is driven naturally by the central pattern generator for breathing. We recorded tongue retraction force, the whole HG muscle EMG and the activity of 38 HG motor units in spontaneously breathing anesthetized rats under low-force and high-force conditions. Activity in all cases was confined to the inspiratory phase of the respiratory cycle. Changes in the EMG were correlated significantly with corresponding changes in force, with the change in EMG able to predict 53-68% of the force variation. Mean and peak motor unit firing rates were greater under high-force conditions, although the magnitude of discharge rate modulation varied widely across the population. Changes in mean and peak firing rates were significantly correlated with the corresponding changes in force, but the correlations were weak (r² = 0.27 and 0.25, respectively). These data indicate that, during spontaneous breathing, recruitment of HG motor units plays a critical role in the control of muscle force, with firing rate modulation playing an important but lesser role.
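
    The EMG-force relationship reported above is a squared Pearson correlation (the fraction of force variance predicted by the EMG). A minimal sketch on synthetic data; the values are illustrative only, not the rat recordings:

```python
import math, random

def pearson_r(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Synthetic EMG/force pairs: force tracks EMG plus measurement noise.
random.seed(1)
emg = [random.uniform(0.2, 1.0) for _ in range(200)]
force = [0.9 * e + random.gauss(0, 0.15) for e in emg]
r2 = pearson_r(emg, force) ** 2   # fraction of force variance predicted by EMG
```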

  14. A study of speech interfaces for the vehicle environment.

    Science.gov (United States)

    2013-05-01

    Over the past few years, there has been a shift in automotive human-machine interfaces from visual-manual interactions (pushing buttons and rotating knobs) to speech interaction. In terms of distraction, the industry views speech interaction as a...

  15. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    Directory of Open Access Journals (Sweden)

    Heracleous Panikos

    2007-01-01

    We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise, and they might be used in special systems (speech recognition, speech transformation, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved for a 20 k dictation task a word accuracy for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone, with very promising results.

  16. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95), while correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  17. Neuroanatomical correlates of childhood apraxia of speech: A connectomic approach.

    Science.gov (United States)

    Fiori, Simona; Guzzetta, Andrea; Mitra, Jhimli; Pannek, Kerstin; Pasquariello, Rosa; Cipriani, Paola; Tosetti, Michela; Cioni, Giovanni; Rose, Stephen E; Chilosi, Anna

    2016-01-01

    Childhood apraxia of speech (CAS) is a paediatric speech sound disorder in which precision and consistency of speech movements are impaired. Most children with idiopathic CAS have normal structural brain MRI. We hypothesize that children with CAS have altered structural connectivity in speech/language networks compared to controls and that these altered connections are related to functional speech/language measures. Whole brain probabilistic tractography, using constrained spherical deconvolution, was performed for connectome generation in 17 children with CAS and 10 age-matched controls. Fractional anisotropy (FA) was used as a measure of connectivity and the connections with altered FA between CAS and controls were identified. Further, the relationship between altered FA and speech/language scores was determined. Three intra-hemispheric/interhemispheric subnetworks showed reduction of FA in CAS compared to controls, including left inferior (opercular part) and superior (dorsolateral, medial and orbital part) frontal gyrus, left superior and middle temporal gyrus and left post-central gyrus (subnetwork 1); right supplementary motor area, left middle and inferior (orbital part) frontal gyrus, left precuneus and cuneus, right superior occipital gyrus and right cerebellum (subnetwork 2); right angular gyrus, right superior temporal gyrus and right inferior occipital gyrus (subnetwork 3). Reduced FA of some connections correlated with diadochokinesis, oromotor skills, expressive grammar and poor lexical production in CAS. These findings provide evidence of structural connectivity anomalies in children with CAS across specific brain regions involved in speech/language function. We propose altered connectivity as a possible epiphenomenon of complex pathogenic mechanisms in CAS which need further investigation.
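
    At its core, such a connectome comparison tests, per connection, whether FA differs between groups. A minimal stand-in is a permutation test on the group-mean difference; the test itself, the FA values, and the absence of multiple-comparison correction below are simplifying assumptions, not the study's statistics:

```python
import random

def perm_pvalue(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    k = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Illustrative FA values for one connection (the study had 17 CAS vs 10 controls):
fa_cas      = [0.38, 0.40, 0.37, 0.39, 0.36, 0.41, 0.38, 0.37]
fa_controls = [0.45, 0.47, 0.44, 0.46, 0.48, 0.45]
p = perm_pvalue(fa_cas, fa_controls)
```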

  18. Spatial resolution dependence on spectral frequency in human speech cortex electrocorticography

    Science.gov (United States)

    Muller, Leah; Hamilton, Liberty S.; Edwards, Erik; Bouchard, Kristofer E.; Chang, Edward F.

    2016-10-01

    Objective. Electrocorticography (ECoG) has become an important tool in human neuroscience and has tremendous potential for emerging applications in neural interface technology. Electrode array design parameters are outstanding issues for both research and clinical applications, and these parameters depend critically on the nature of the neural signals to be recorded. Here, we investigate the functional spatial resolution of neural signals recorded at the human cortical surface. We empirically derive spatial spread functions to quantify the shared neural activity for each frequency band of the electrocorticogram. Approach. Five subjects with high-density (4 mm center-to-center spacing) ECoG grid implants participated in speech perception and production tasks while neural activity was recorded from the speech cortex, including superior temporal gyrus, precentral gyrus, and postcentral gyrus. The cortical surface field potential was decomposed into traditional EEG frequency bands. Signal similarity between electrode pairs for each frequency band was quantified using a Pearson correlation coefficient. Main results. The correlation of neural activity between electrode pairs was inversely related to the distance between the electrodes; this relationship was used to quantify spatial falloff functions for cortical subdomains. As expected, lower frequencies remained correlated over larger distances than higher frequencies. However, both the envelope and phase of gamma and high gamma frequencies (30-150 Hz) are largely uncorrelated (<90%) at 4 mm, the smallest spacing of the high-density arrays. Thus, ECoG arrays smaller than 4 mm have significant promise for increasing signal resolution at high frequencies, whereas less additional gain is achieved for lower frequencies. Significance. Our findings quantitatively demonstrate the dependence of ECoG spatial resolution on the neural frequency of interest. We demonstrate that this relationship is consistent across patients and
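
    The spatial-falloff measurement described here (pairwise Pearson correlation as a function of electrode distance) can be sketched on synthetic signals whose true correlation decays with distance. The spatial AR construction below is an assumption for illustration, not the ECoG data:

```python
import math, random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Eight "electrodes" in a line; each signal shares a fraction `a` of its
# neighbour's signal, so the true correlation decays as a**distance.
random.seed(2)
n_t, n_e, a = 1000, 8, 0.7
sig = [[random.gauss(0, 1) for _ in range(n_t)]]
for _ in range(1, n_e):
    sig.append([a * p + math.sqrt(1 - a * a) * random.gauss(0, 1) for p in sig[-1]])

def mean_corr(dist):
    pairs = [(i, i + dist) for i in range(n_e - dist)]
    return sum(pearson(sig[i], sig[j]) for i, j in pairs) / len(pairs)

near, far = mean_corr(1), mean_corr(4)   # falloff: near pairs correlate more
```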

  19. Movement goals and feedback and feedforward control mechanisms in speech production.

    Science.gov (United States)

    Perkell, Joseph S

    2012-09-01

    Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences.

  20. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
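
    Phase tracking of this kind is commonly quantified as an inter-trial phase-locking value (PLV): extract the phase at the frequency of interest on each trial and measure how consistently it aligns across trials. The single-bin DFT phase estimate and the synthetic 5 Hz trials below are simplifying assumptions, not the paper's MEG analysis:

```python
import cmath, math, random

def phase_at(signal, freq, srate):
    """Phase of the DFT component at `freq` (assumes freq is a bin frequency)."""
    return cmath.phase(sum(s * cmath.exp(-2j * math.pi * freq * t / srate)
                           for t, s in enumerate(signal)))

def plv(trials, freq, srate):
    """Inter-trial phase-locking value: 1 = perfectly consistent, ~0 = random."""
    phases = [phase_at(tr, freq, srate) for tr in trials]
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

random.seed(3)
srate, freq, n_samp = 200, 5.0, 200          # 1 s trials, 5 Hz "theta" component
locked = [[math.sin(2 * math.pi * freq * t / srate) + random.gauss(0, 0.5)
           for t in range(n_samp)] for _ in range(30)]
jittered = [[math.sin(2 * math.pi * freq * t / srate + random.uniform(0, 2 * math.pi))
             + random.gauss(0, 0.5) for t in range(n_samp)] for _ in range(30)]

locked_plv = plv(locked, freq, srate)       # phase tracks the stimulus
jittered_plv = plv(jittered, freq, srate)   # no consistent phase
```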

  1. Integrating speech technology to meet crew station design requirements

    Science.gov (United States)

    Simpson, Carol A.; Ruth, John C.; Moore, Carolyn A.

    The last two years have seen improvements in speech generation and speech recognition technology that make speech I/O for crew station controls and displays viable for operational systems. These improvements include increased robustness of algorithm performance in high levels of background noise, increased vocabulary size, improved performance in the connected speech mode, and less speaker dependence. This improved capability makes possible far more sophisticated user interface design than was possible with earlier technology. Engineering, linguistic, and human factors design issues are discussed in the context of current voice I/O technology performance.

  2. Methods for eliciting, annotating, and analyzing databases for child speech development.

    Science.gov (United States)

    Beckman, Mary E; Plummer, Andrew R; Munson, Benjamin; Reidy, Patrick F

    2017-09-01

    Methods from automatic speech recognition (ASR), such as segmentation and forced alignment, have facilitated the rapid annotation and analysis of very large adult speech databases and databases of caregiver-infant interaction, enabling advances in speech science that were unimaginable just a few decades ago. This paper centers on two main problems that must be addressed in order to have analogous resources for developing and exploiting databases of young children's speech. The first problem is to understand and appreciate the differences between adult and child speech that cause ASR models developed for adult speech to fail when applied to child speech. These differences include the fact that children's vocal tracts are smaller than those of adult males and also changing rapidly in size and shape over the course of development, leading to between-talker variability across age groups that dwarfs the between-talker differences between adult men and women. Moreover, children do not achieve fully adult-like speech motor control until they are young adults, and their vocabularies and phonological proficiency are developing as well, leading to considerably more within-talker variability as well as more between-talker variability. The second problem then is to determine what annotation schemas and analysis techniques can most usefully capture relevant aspects of this variability. Indeed, standard acoustic characterizations applied to child speech reveal that adult-centered annotation schemas fail to capture phenomena such as the emergence of covert contrasts in children's developing phonological systems, while also revealing children's nonuniform progression toward community speech norms as they acquire the phonological systems of their native languages. 
Both problems point to the need for more basic research into the growth and development of the articulatory system (as well as of the lexicon and phonological system) that is oriented explicitly toward the construction of

  3. Psychoacoustic cues to emotion in speech prosody and music.

    Science.gov (United States)

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
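
    Several of the psychoacoustic features named above are short definitions over a magnitude spectrum; for example, the spectral centroid is the amplitude-weighted mean frequency of a frame. A minimal sketch using a plain DFT (the frame length and sampling rate are illustrative; the study's feature extractor would be more elaborate):

```python
import cmath, math

def magnitude_spectrum(frame):
    """Magnitudes of the first half of the DFT bins (O(n^2) DFT, fine for a demo)."""
    n = len(frame)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(frame)))
            for k in range(n // 2)]

def spectral_centroid(frame, srate):
    """Amplitude-weighted mean frequency of the frame, in Hz."""
    mags = magnitude_spectrum(frame)
    freqs = [k * srate / len(frame) for k in range(len(mags))]
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A pure 1000 Hz tone should have a centroid right at 1000 Hz.
srate = 8000
tone = [math.sin(2 * math.pi * 1000 * t / srate) for t in range(256)]
centroid = spectral_centroid(tone, srate)
```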

  4. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  5. Human duodenal motor activity in response to acid and different nutrients

    NARCIS (Netherlands)

    Schwartz, M. P.; Samsom, M.; Smout, A. J.

    2001-01-01

    Duodenal motor activity in response to intraduodenal infusion of small volumes of acid and nutrients of different chemical composition was studied in 10 healthy humans, using a water-perfused catheter incorporating 20 antropyloroduodenal sideholes. Saline and dextrose did not affect motility. Acid

  6. Private speech of learning disabled and normally achieving children in classroom academic and laboratory contexts.

    Science.gov (United States)

    Berk, L E; Landau, S

    1993-04-01

    Learning disabled (LD) children are often targets for cognitive-behavioral interventions designed to train them in effective use of self-directed speech. The purpose of this study was to determine if, indeed, these children display immature private speech in the naturalistic classroom setting. Comparisons were made of the private speech, motor accompaniment to task, and attention of LD and normally achieving classmates during academic seatwork. Setting effects were examined by comparing classroom data with observations during academic seatwork and puzzle solving in the laboratory. Finally, a subgroup of LD children symptomatic of attention-deficit hyperactivity disorder (ADHD) was compared with pure LD and normally achieving controls to determine if the presumed immature private speech is a function of a learning disability or externalizing behavior problems. Results indicated that LD children used more task-relevant private speech than controls, an effect that was especially pronounced for the LD/ADHD subgroup. Use of private speech was setting- and task-specific. Implications for intervention and future research methodology are discussed.

  7. Addition of Kinesio Taping of the orbicularis oris muscles to speech therapy rapidly improves drooling in children with neurological disorders.

    Science.gov (United States)

    Mikami, Denise Lica Yoshimura; Furia, Cristina Lemos Barbosa; Welker, Alexis Fonseca

    2017-09-21

    To evaluate the effects of Kinesio Taping (KT) of the orbicularis oris muscles as an adjunct to standard therapy for drooling. Fifteen children with neurological disorders and drooling received speech therapy and twice-weekly KT of the orbicularis muscles over a 30-day period. Drooling was assessed by six parameters: impact on the life of the child and caregiver; severity of drooling; frequency of drooling; drooling volume (estimated by number of bibs used); salivary leak; and interlabial gap. Seven markers of oral motor skills were also assessed. KT of the orbicularis oris region reduced the interlabial gap. All oral motor skills and almost all markers of drooling improved after 15 days of treatment. In this sample of children with neurological disorders, adding KT of the orbicularis oris muscles to speech therapy caused rapid improvement in oral motor skills and drooling.

  8. An Integrated Evaluation of Nonspeech Oral Motor Treatments

    Science.gov (United States)

    Powell, Thomas W.

    2008-01-01

    Purpose: This article functions as an epilogue to the clinical forum examining the use of nonspeech oral motor treatments (NSOMTs) to remediate speech sound disorders in children. Method: Conclusions to eight clinical questions are formed based on the findings that were reported in the clinical forum. Theoretical and clinical challenges are also…

  9. Motor neuron disease: the impact of decreased speech intelligibility ...

    African Journals Online (AJOL)

    Background: The onset of motor neuron disease (MND), a neurodegenerative disease, results in physical and communication disabilities that impinge on an individual's ability to remain functionally independent. Multiple aspects of the marital relationship are affected by the continuously changing roles and responsibilities.

  10. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300-some peace prizes awarded worldwide, "none is in any way as well known and as highly respected as the Nobel Peace Prize" (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars' interest in this rhetorical genre has increased in the past decade. Yet the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric's role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  11. Human myosin VIIa is a very slow processive motor protein on various cellular actin structures.

    Science.gov (United States)

    Sato, Osamu; Komatsu, Satoshi; Sakai, Tsuyoshi; Tsukasaki, Yoshikazu; Tanaka, Ryosuke; Mizutani, Takeomi; Watanabe, Tomonobu M; Ikebe, Reiko; Ikebe, Mitsuo

    2017-06-30

    Human myosin VIIa (MYO7A) is an actin-linked motor protein associated with human Usher syndrome (USH) type 1B, which causes human congenital hearing and visual loss. Although it has been thought that the role of human myosin VIIa is critical for USH1 protein tethering with actin and transportation along actin bundles in inner-ear hair cells, myosin VIIa's motor function remains unclear. Here, we studied the motor function of the tail-truncated human myosin VIIa dimer (HM7AΔTail/LZ) at the single-molecule level. We found that the HM7AΔTail/LZ moves processively on single actin filaments with a step size of 35 nm. Dwell-time distribution analysis indicated an average waiting time of 3.4 s, yielding ∼0.3 s⁻¹ for the mechanical turnover rate; hence, the velocity of HM7AΔTail/LZ was extremely slow, at 11 nm·s⁻¹. We also examined HM7AΔTail/LZ movement on various actin structures in demembranated cells. HM7AΔTail/LZ showed unidirectional movement on actin structures at cell edges, such as lamellipodia and filopodia. However, HM7AΔTail/LZ frequently missed steps on actin tracks and exhibited bidirectional movement at stress fibers, which was not observed with tail-truncated myosin Va. These results suggest that the movement of the human myosin VIIa motor protein is more efficient on lamellipodial and filopodial actin tracks than on stress fibers, which are composed of actin filaments with different polarity, and that the actin structures influence the characteristics of cargo transportation by human myosin VIIa. In conclusion, myosin VIIa movement appears to be suitable for translocating USH1 proteins on stereocilia actin bundles in inner-ear hair cells. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
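
    The kinematic numbers above are internally consistent: a 35 nm step taken on average every 3.4 s gives a turnover rate of about 0.3 s⁻¹ and a velocity of roughly 10-11 nm/s. A quick check:

```python
step_nm = 35.0                  # observed step size along actin
dwell_s = 3.4                   # mean dwell time between steps
turnover = 1 / dwell_s          # mechanical turnover rate, ~0.29 /s
velocity = step_nm * turnover   # ~10.3 nm/s, matching the reported ~11 nm/s
```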

  12. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper describes an interface between the machine translation and speech synthesis components of an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation, and text-to-speech synthesis. Many approaches to integrating speech recognition and machine translation have been proposed, but the speech synthesis component has not yet received the same attention. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis component, the machine translation component, and their integration. We implement a hybrid machine translation system (combining rule-based and statistical machine translation) and a concatenative, syllable-based speech synthesis technique. To retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
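
    The three-module architecture described above can be sketched as a simple function composition. Everything here is a hypothetical stand-in: the function names are invented, and a toy word lookup replaces the paper's hybrid (rule-based plus statistical) MT and syllable-based synthesis:

```python
# Minimal sketch of the three-stage speech-to-speech pipeline:
# ASR -> machine translation -> TTS. All component functions are
# hypothetical stand-ins, not the paper's actual implementation.

def recognize(audio):
    """ASR stand-in: returns the English transcript of the input audio."""
    return audio["transcript"]  # toy: pretend recognition is perfect

def translate(english_text):
    """MT stand-in: toy word-for-word lookup in place of hybrid MT."""
    lexicon = {"hello": "vanakkam", "world": "ulagam"}
    return " ".join(lexicon.get(w, w) for w in english_text.lower().split())

def synthesize(tamil_text):
    """TTS stand-in: returns a label for the synthesized waveform."""
    return f"<audio:{tamil_text}>"

def speech_to_speech(audio):
    return synthesize(translate(recognize(audio)))

print(speech_to_speech({"transcript": "hello world"}))  # <audio:vanakkam ulagam>
```

    The point of the sketch is the interface: the MT output feeds the synthesizer directly, which is why translation fluency dominates the perceived quality of the synthesized speech.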

  13. Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.

    Science.gov (United States)

    Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F

    2017-08-16

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. 
Our results show that low-level speech features propagate throughout the

  14. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad

    2015-12-01

    Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigated hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign-naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left-lateralized in the phonological fluency task, but there was no consistent pattern of lateralization for non-sign repetition in these hearing non-signers. The current data demonstrate stronger left-hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
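
    fTCD studies of this kind summarize hemispheric dominance with a laterality index; a minimal sketch, assuming the common normalized-difference convention (the study's actual epoch selection and blood-flow-velocity processing are not reproduced here):

```python
# Toy laterality index (LI) of the kind fTCD studies report, using the
# common normalized-difference convention (L - R) / (L + R). Inputs here
# are illustrative activation values, not real blood-flow data.

def laterality_index(left, right):
    """Positive -> left-lateralized, negative -> right-lateralized."""
    return (left - right) / (left + right)

li_bsl = laterality_index(left=6.0, right=2.0)      # toy BSL production epoch
li_english = laterality_index(left=5.0, right=3.0)  # toy English production epoch
print(li_bsl, li_english)  # 0.5 0.25
```

    Under this convention, the larger index for the toy BSL epoch corresponds to the stronger left lateralization the study reports for sign production.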

  15. The Neural Basis of Vocal Pitch Imitation in Humans.

    Science.gov (United States)

    Belyk, Michel; Pfordresher, Peter Q; Liotti, Mario; Brown, Steven

    2016-04-01

    Vocal imitation is a phenotype that is unique to humans among all primate species, and so an understanding of its neural basis is critical in explaining the emergence of both speech and song in human evolution. Two principal neural models of vocal imitation have emerged from a consideration of nonhuman animals. One hypothesis suggests that putative mirror neurons in the inferior frontal gyrus pars opercularis of Broca's area may be important for imitation. An alternative hypothesis derived from the study of songbirds suggests that the corticostriate motor pathway performs sensorimotor processes that are specific to vocal imitation. Using fMRI with a sparse event-related sampling design, we investigated the neural basis of vocal imitation in humans by comparing imitative vocal production of pitch sequences with both nonimitative vocal production and pitch discrimination. The strongest difference between these tasks was found in the putamen bilaterally, providing a striking parallel to the role of the analogous region in songbirds. Other areas preferentially activated during imitation included the orofacial motor cortex, Rolandic operculum, and SMA, which together outline the corticostriate motor loop. No differences were seen in the inferior frontal gyrus. The corticostriate system thus appears to be the central pathway for vocal imitation in humans, as predicted from an analogy with songbirds.

  16. Discharge patterns of human genioglossus motor units during arousal from sleep.

    Science.gov (United States)

    Wilkinson, Vanessa; Malhotra, Atul; Nicholas, Christian L; Worsnop, Christopher; Jordan, Amy S; Butler, Jane E; Saboisky, Julian P; Gandevia, Simon C; White, David P; Trinder, John

    2010-03-01

    Single motor unit recordings of the human genioglossus muscle reveal motor units with a variety of discharge patterns. Integrated multiunit electromyographic recordings of genioglossus have demonstrated an abrupt increase in the muscle's activity at arousal from sleep. The aim of the present study was to determine the effect of arousal from sleep on the activity of individual motor units as a function of their particular discharge pattern. Genioglossus activity was measured using intramuscular fine-wire electrodes inserted via a percutaneous approach. Arousals from sleep were identified using the ASDA criterion and the genioglossus electromyogram recordings analyzed for single motor unit activity. Sleep research laboratory. Sleep and respiratory data were collected in 8 healthy subjects (6 men). 138 motor units were identified during prearousal sleep: 25% inspiratory phasic, 33% inspiratory tonic, 4% expiratory phasic, 3% expiratory tonic, and 35% tonic. At arousal from sleep, inspiratory phasic units significantly increased the proportion of a breath over which they were active, but did not appreciably increase their rate of firing. 80 new units were identified at arousals; 75% were inspiratory, many of which were active for only 1 or 2 breaths. 22% of units active before arousal, particularly expiratory and tonic units, stopped at the arousal. Increased genioglossus muscle activity at arousal from sleep is primarily due to recruitment of inspiratory phasic motor units. Further, activity within the genioglossus motoneuron pool is reorganized at arousal as, in addition to recruitment, approximately 20% of units active before arousals stopped firing.

  17. Automatic Human Movement Assessment With Switching Linear Dynamic System: Motion Segmentation and Motor Performance.

    Science.gov (United States)

    de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro

    2017-06-01

    Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects to aid in the neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements from functional tests and rehabilitation exercise sessions such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
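
    The Switching Linear Dynamic System idea behind this framework can be illustrated in miniature: each movement phase follows its own linear dynamics, and segmentation amounts to labeling each time step with the regime that best predicts it. This 1-D, noise-free sketch is a deliberate simplification of the authors' model:

```python
# Toy illustration of the SLDS building block: two linear regimes
# x[t+1] = a*x[t] + b (e.g. "rising" vs. "settling" movement phases),
# and segmentation by per-step prediction error. Regime parameters
# are illustrative assumptions.
regimes = {"rise": (1.0, 0.5), "settle": (0.8, 0.0)}

def generate(schedule, x0=0.0):
    """Generate a trajectory from a list of (regime, n_steps) segments."""
    xs, x = [x0], x0
    for name, steps in schedule:
        a, b = regimes[name]
        for _ in range(steps):
            x = a * x + b
            xs.append(x)
    return xs

def segment(xs):
    """Label each transition with the regime minimizing prediction error."""
    labels = []
    for x_prev, x_next in zip(xs, xs[1:]):
        best = min(regimes,
                   key=lambda n: abs(regimes[n][0] * x_prev + regimes[n][1] - x_next))
        labels.append(best)
    return labels

xs = generate([("rise", 3), ("settle", 3)])
print(segment(xs))  # ['rise', 'rise', 'rise', 'settle', 'settle', 'settle']
```

    A real SLDS adds noise models and probabilistic inference over the switch variable, but the segmentation objective is the same: explain each stretch of the time series with the dynamics that fit it best.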

  18. Potentials of speech disorders correction in 4-6 yrs children by means of ergo and art therapy

    Directory of Open Access Journals (Sweden)

    N. B. Petrenko

    2017-04-01

    Full Text Available Purpose: to develop a method for correcting speech disorders in children aged 4-6 years by means of ergo- and art therapy. Material: over one academic year, three groups of children (n=97) were observed: two groups with speech disorders (control and main) and one group of healthy children. Psycho-motor and cognitive functions were assessed with tests of motor coordination (speed of completion) and verbal thinking. Results: a characteristic feature of these children was a critical attitude toward their own speech insufficiency and conscious avoidance of oral answers. Cluster analysis showed increased homogeneity of the positive changes in psycho-physical condition, cognitive functions, and dance abilities that resulted from the dance-correction training program. Conclusions: the dance-correction choreographic training developed here helps to build a sense of rhythm; strengthen the skeleton and muscles; and stimulate memory, attention, thinking, and imagination. This experience will help a child to go on to succeed in various artistic-creative and sports activities, and to master choreography and gymnastics as well as various musical instruments.

  19. Segregating polymorphisms of FOXP2 are associated with measures of inner speech, speech fluency and strength of handedness in a healthy population.

    Science.gov (United States)

    Crespi, Bernard; Read, Silven; Hurd, Peter

    2017-10-01

    We genotyped a healthy population for three haplotype-tagging FOXP2 SNPs, and tested for associations of these SNPs with strength of handedness and questionnaire-based metrics of inner speech characteristics (ISP) and speech fluency (FLU), as derived from the Schizotypal Personality Questionnaire-BR. Levels of mixed-handedness were positively correlated with ISP and FLU, supporting prior work on these two domains. Genotype for rs7799109, a SNP previously linked with lateralization of left frontal regions underlying language, was associated with degree of mixed handedness and with scores for ISP and FLU phenotypes. Genotype of rs1456031, which has previously been linked with auditory hallucinations, was also associated with ISP phenotypes. These results provide evidence that FOXP2 SNPs influence aspects of human inner speech and fluency that are related to lateralized phenotypes, and suggest that the evolution of human language, as mediated by the adaptive evolution of FOXP2, involved features of inner speech. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Using Web Speech Technology with Language Learning Applications

    Science.gov (United States)

    Daniels, Paul

    2015-01-01

    In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allow anyone with an Internet connection and the Chrome browser to take advantage of…

  1. The feasibility of miniaturizing the versatile portable speech prosthesis: A market survey of commercial products

    Science.gov (United States)

    Walklet, T.

    1981-01-01

    The feasibility of a miniature versatile portable speech prosthesis (VPSP) was analyzed, and information on its potential users and on similar devices was collected. The VPSP is a device that incorporates speech synthesis technology. The objective is to provide sufficient information to decide whether there is valuable technology to contribute to the miniaturization of the VPSP. The needs of potential users are identified, and the development status of technologies similar or related to those used in the VPSP is evaluated. The VPSP, a computer-based speech synthesis system, fits on a wheelchair. The purpose was to produce a device that provides communication assistance in educational, vocational, and social situations to speech-impaired individuals. The VPSP is expected to be a valuable aid for persons who are also motor impaired, which explains the placement of the system on a wheelchair.

  2. Changes in speech production in a child with a cochlear implant: acoustic and kinematic evidence.

    Science.gov (United States)

    Goffman, Lisa; Ertmer, David J; Erdle, Christa

    2002-10-01

    A method is presented for examining change in motor patterns used to produce linguistic contrasts. In this case study, the method is applied to a child receiving new auditory input following cochlear implantation. This child experienced hearing loss at age 3 years and received a multichannel cochlear implant at age 7 years. Data collection points occurred both pre- and postimplant and included acoustic and kinematic analyses. Overall, this child's speech output was transcribed as accurate across the pre- and postimplant periods. Postimplant, with the onset of new auditory experience, acoustic durations showed a predictable maturational change, usually decreasing in duration. Conversely, the spatiotemporal stability of speech movements initially became more variable postimplantation. The auditory perturbations experienced by this child during development led to changes in the physiological underpinnings of speech production, even when speech output was perceived as accurate.

  3. Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    Science.gov (United States)

    Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng

    It is said that technology springs from humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers could perceive and respond to human emotion, human-computer interaction would be more natural. Several classifiers are adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compared several popular classification methods and evaluated their performance by applying them to a Mandarin speech corpus consisting of five basic emotions, including anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for the 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results still show that the proposed WD-MKNN outperforms others.
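
    The baseline classifiers compared above can be illustrated with a minimal k-nearest-neighbor sketch over toy 2-D feature vectors standing in for MFCC/LPCC frames; the authors' proposed WD-MKNN adds weighting and modification schemes not reproduced here:

```python
# Plain KNN emotion classification over toy 2-D feature vectors
# (stand-ins for MFCC/LPCC features). Labels and points are invented.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns majority label of k nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.10, 0.20), "anger"), ((0.20, 0.10), "anger"), ((0.15, 0.25), "anger"),
         ((0.80, 0.90), "sadness"), ((0.90, 0.80), "sadness"), ((0.85, 0.95), "sadness")]

print(knn_predict(train, (0.2, 0.2)))  # anger
print(knn_predict(train, (0.9, 0.9)))  # sadness
```

    Variants like DW-KNN and WD-MKNN replace the simple majority vote with distance-weighted voting, which is one way to improve robustness near class boundaries.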

  4. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time-varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
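
    The first statistical feature reported, the correlation between mouth-opening area and the acoustic envelope, is a plain Pearson correlation between two time series; a sketch on toy traces rather than real audiovisual data:

```python
# Pearson correlation between a toy mouth-opening-area trace and a toy
# acoustic-envelope trace. The traces are invented stand-ins for the
# real audiovisual measurements.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy traces: the envelope roughly tracks mouth area with small distortions.
mouth_area = [0.0, 0.4, 0.9, 1.0, 0.6, 0.2, 0.0]
envelope   = [0.1, 0.5, 0.8, 1.0, 0.7, 0.3, 0.1]

print(round(pearson(mouth_area, envelope), 3))
```

    The study's analysis further examines the temporal structure of such signals (e.g., 2-7 Hz modulation and 100-300 ms audio-visual lags), which would require spectral and cross-correlation analysis beyond this sketch.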

  5. Language for action: Motor resonance during the processing of human and robotic voices.

    Science.gov (United States)

    Di Cesare, G; Errante, A; Marchi, M; Cuccio, V

    2017-11-01

    In this fMRI study we evaluated whether the auditory processing of action verbs pronounced by a human or a robotic voice in the imperative mood differently modulates the activation of the mirror neuron system (MNS). The study produced three results. First, the activation pattern found during listening to action verbs was very similar in both the robot and human conditions. Second, the processing of action verbs compared to abstract verbs determined the activation of the fronto-parietal circuit classically involved in action goal understanding. Third, and most importantly, listening to action verbs compared to abstract verbs produced activation of the anterior part of the supramarginal gyrus (aSMG) regardless of the condition (human and robot) and in the absence of any object name. The supramarginal gyrus is a region considered to underpin hand-object interaction and associated with the processing of affordances. These results suggest that listening to action verbs may trigger the recruitment of motor representations characterizing affordances and action execution, consistent with the predictive nature of motor simulation that not only allows us to re-enact motor knowledge to understand others' actions but also prepares us for the actions we might need to carry out. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Vocal Tract Images Reveal Neural Representations of Sensorimotor Transformation During Speech Imitation

    Science.gov (United States)

    Carey, Daniel; Miquel, Marc E.; Evans, Bronwen G.; Adank, Patti; McGettigan, Carolyn

    2017-01-01

    Abstract Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using representational similarity analysis (RSA) models constructed from participants' vocal tract images and from stimulus formant distances, we found that RSA searchlight analyses of fMRI data showed either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters, during prearticulatory ST. PMID:28334401
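
    The RSA logic used in this study can be sketched compactly: build a representational dissimilarity matrix (RDM) from model features (e.g., formant distances) and another from neural patterns, then correlate their entries. The vectors below are toy data, not the study's:

```python
# Toy representational similarity analysis (RSA): compare a model RDM
# (e.g. built from formant distances) with a neural RDM (e.g. built
# from voxel patterns) by correlating their pairwise distances.
import math

def rdm(items):
    """Upper-triangle pairwise Euclidean distances between condition vectors."""
    out = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            out.append(math.dist(items[i], items[j]))
    return out

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

model_features  = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]  # toy "formants"
neural_patterns = [(2.0, 0.1), (1.8, 0.3), (0.2, 2.1), (0.3, 1.9)]  # toy "voxels"

print(round(pearson(rdm(model_features), rdm(neural_patterns)), 2))
```

    A searchlight analysis repeats this comparison at every brain location, using the local voxel patterns to build the neural RDM; a high correlation means that region's activity geometry mirrors the model's.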

  7. RELATIONSHIP BETWEEN LINGUISTIC UNITS AND MOTOR COMMANDS.

    Science.gov (United States)

    FROMKIN, VICTORIA A.

    Assuming that speech is the result of a number of discrete neuromuscular events and that the brain can store only a limited number of motor commands with which to control these events, the research reported in this paper was directed to a determination of the size and nature of the stored items and an explanation of how speakers encode a sequence…

  8. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    Directory of Open Access Journals (Sweden)

    Hiroshi Saruwatari

    2007-01-01

    Full Text Available We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved a word accuracy of 93.9% on a 20k dictation task for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone, with very promising results.

  9. Human θ burst stimulation enhances subsequent motor learning and increases performance variability.

    Science.gov (United States)

    Teo, James T H; Swayne, Orlando B C; Cheeran, Binith; Greenwood, Richard J; Rothwell, John C

    2011-07-01

    Intermittent theta burst stimulation (iTBS) transiently increases motor cortex excitability in healthy humans by a process thought to involve synaptic long-term potentiation (LTP), and this is enhanced by nicotine. Acquisition of a ballistic motor task is likewise accompanied by increased excitability and presumed intracortical LTP. Here, we test how iTBS and nicotine influence subsequent motor learning. Ten healthy subjects participated in a double-blinded placebo-controlled trial testing the effects of iTBS and nicotine. iTBS alone increased the rate of learning but this increase was blocked by nicotine. We then investigated factors other than synaptic strengthening that may play a role. Behavioral analysis and modeling suggested that iTBS increased performance variability, which correlated with learning outcome. A control experiment confirmed the increase in motor output variability by showing that iTBS increased the dispersion of involuntary transcranial magnetic stimulation-evoked thumb movements. We suggest that in addition to the effect on synaptic plasticity, iTBS may have facilitated performance by increasing motor output variability; nicotine negated this effect on variability perhaps via increasing the signal-to-noise ratio in cerebral cortex.

  10. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop
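
    The plasticity rule at the heart of this model, reward-modulated (dopamine-gated) STDP, can be sketched with an eligibility trace that decays until a reward converts it into a weight change. The constants and trace dynamics below are illustrative assumptions, not the model's actual parameters:

```python
# Toy reward-modulated plasticity: a pre-before-post spike pairing leaves
# a decaying eligibility trace on the synapse, and the weight grows only
# when a later reward raises the dopamine level. All constants are
# illustrative assumptions.

def run(pairings, reward_steps, steps=20, tau=0.8, lr=0.5):
    w, trace = 1.0, 0.0
    for t in range(steps):
        trace *= tau                  # eligibility trace decays each step
        if t in pairings:             # causal pre->post spike pairing
            trace += 1.0
        dopamine = 1.0 if t in reward_steps else 0.0
        w += lr * dopamine * trace    # plasticity gated by reward signal
    return w

# Reward shortly after the pairing: the still-large trace becomes weight gain.
w_rewarded = run(pairings={3}, reward_steps={5})
# Same pairing with no reward: the weight never changes.
w_unrewarded = run(pairings={3}, reward_steps=set())
print(round(w_rewarded, 2), w_unrewarded)  # 1.32 1.0
```

    This captures the credit-assignment trick the babbling model relies on: a salient (rewarded) vocalization reinforces whichever recent motor-cortex spike patterns produced it, without the network needing to know in advance which patterns those were.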

  11. Structural analysis of a speech disorder of children with a mild mental retardation

    Directory of Open Access Journals (Sweden)

    Franc Smole

    2004-05-01

    Full Text Available The aim of this research was to define the structure of the speech disorder of children with a mild mental retardation. 100 subjects were chosen among pupils from the 1st to the 4th grade of elementary school who were under logopaedic treatment. To assess speech comprehension, Reynell's developmental scales were used, and speech articulation was evaluated with the Three-position test for articulation evaluation. With the Bender test we determined each child's mental age and identified signs of psychological dysfunction of organic nature. Phonological awareness was assessed with a test of reading and writing disturbances. Speech fluency was evaluated by the Riley test. Evaluation scales were adapted for determining speech-language levels and the motor skills of the speech organs and hands. Data on psychological test results and on each family were drawn from the diagnostic treatment guidance documents. Social behaviour in school was evaluated by the children's teachers. Factor analysis identified six factors that hierarchically define the structure of the speech disorder. We found that signs of brain lesion are the factor with the greatest influence on a child's mental age. The results of this research might be helpful to logopaedists in determining a logopaedic treatment for children with a mild mental retardation.

  12. Dysarthria of Motor Neuron Disease: Clinician Judgments of Severity.

    Science.gov (United States)

    Seikel, J. Anthony; And Others

    1990-01-01

    This study investigated the relationship between the temporal-acoustic parameters of the speech of 15 adults with motor neuron disease and clinician judgments of severity. Differences in predictions of the progression of the disease and clinician judgments of dysarthria severity were found to relate to the linguistic systems of both speaker and judge. (Author/JDD)

  13. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral 'what' and dorsal 'where' pathways.

  14. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    Full Text Available The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable-initial stop consonants, and whether this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.
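
    A cue-trading relation like the one tested here is often modeled as a logistic identification function over a weighted sum of cues, so that a change along one cue (F1 onset frequency) shifts the category boundary along the other (VOT). The weights below are illustrative, not fitted to the budgerigar or human data:

```python
# Toy cue-trading model: /d/ vs /t/ identification as a logistic function
# of a weighted sum of VOT and F1 onset frequency. All weights are
# illustrative assumptions, not fitted parameters.
import math

def p_t(vot_ms, f1_onset_hz, w_vot=0.2, w_f1=-0.01, bias=-4.0):
    """Probability of a 't' response from a weighted cue combination."""
    z = w_vot * vot_ms + w_f1 * f1_onset_hz + bias
    return 1.0 / (1.0 + math.exp(-z))

def boundary_vot(f1_onset_hz, w_vot=0.2, w_f1=-0.01, bias=-4.0):
    """VOT at which p_t = 0.5, i.e. the category boundary for a given F1 onset."""
    return -(w_f1 * f1_onset_hz + bias) / w_vot

# A higher F1 onset (a /d/-like cue) pushes the /d/-/t/ boundary to longer VOTs:
print(boundary_vot(200.0), boundary_vot(600.0))  # 30.0 50.0
```

    With these toy weights, raising the F1 onset from 200 Hz to 600 Hz moves the boundary from 30 ms to 50 ms of VOT, which is the trading relation: extra voiced-sounding F1 evidence must be offset by extra VOT to keep the percept at "t".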

  15. Generating Expressive Speech for Storytelling Applications

    OpenAIRE

    Bailly, G.; Theune, Mariet; Meijs, Koen; Campbell, N.; Hamza, W.; Heylen, Dirk K.J.; Ordelman, Roeland J.F.; Hoge, H.; Jianhua, T.

    2006-01-01

    Work on expressive speech synthesis has long focused on the expression of basic emotions. In recent years, however, interest in other expressive styles has been increasing. The research presented in this paper aims at the generation of a storytelling speaking style, which is suitable for storytelling applications and, more generally, for applications aimed at children. Based on an analysis of human storytellers' speech, we designed and implemented a set of prosodic rules for converting "neutr...

  16. Common cues to emotion in the dynamic facial expressions of speech and song.

    Science.gov (United States)

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but equivalently in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.

  17. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds, and how hearing impairment affects such processing, is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal processing and perception. The model was initially designed to simulate the normal-hearing auditory system, with particular focus on the nonlinear processing [...]. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners in a diagnostic rhyme test.

  18. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using Self-Organizing Maps. Many people exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire services account for the largest share of occupations that typically involve an increased number of stressful situations. Personnel in action are directed by a control center, and control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, which in turn means that stress affects speech. A system for recognizing stress in speech is therefore needed in the security forces. One possible classifier, popular for its flexibility, is the self-organizing map (SOM), a type of artificial neural network. Here, flexibility means that the classifier is independent of the character of the input data, a property well suited to speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients (MFCCs), LPC coefficients, and prosodic features were selected as input data for their sensitivity to emotional changes. The parameters were computed on speech recordings divided into two classes: stressed-state recordings and normal-state recordings. The contribution of the experiment is a method using a SOM classifier for stressed-speech detection. Results demonstrated the advantage of this method, namely its flexibility with respect to the input data.
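    The pipeline described in this abstract (per-utterance feature vectors such as MFCCs, a trained SOM, and class labels assigned to map nodes) can be illustrated with a minimal sketch. This is not the authors' implementation: the SOM below is a small from-scratch NumPy version, and the 13-dimensional "feature vectors" are synthetic stand-ins for real MFCC/LPC/prosody features, with the two Gaussian clusters playing the role of normal-state and stressed-state recordings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for 13-dim MFCC-style feature vectors:
    # class 0 = "normal" speech, class 1 = "stressed" speech.
    normal = rng.normal(0.0, 1.0, (100, 13))
    stressed = rng.normal(1.5, 1.0, (100, 13))
    X = np.vstack([normal, stressed])
    y = np.array([0] * 100 + [1] * 100)

    class SOM:
        """Minimal Self-Organizing Map with a Gaussian neighbourhood."""

        def __init__(self, rows, cols, dim, seed=0):
            r = np.random.default_rng(seed)
            self.w = r.normal(0.0, 1.0, (rows * cols, dim))  # node weights
            # Grid coordinates of each node, for neighbourhood distances.
            self.coords = np.array(
                [(i, j) for i in range(rows) for j in range(cols)], dtype=float
            )

        def bmu(self, x):
            # Best-matching unit: node whose weights are closest to x.
            return int(np.argmin(((self.w - x) ** 2).sum(axis=1)))

        def train(self, X, epochs=20, lr0=0.5, sigma0=2.0):
            for t in range(epochs):
                lr = lr0 * (1 - t / epochs)            # decaying learning rate
                sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighbourhood
                for x in X:
                    b = self.bmu(x)
                    d2 = ((self.coords - self.coords[b]) ** 2).sum(axis=1)
                    h = np.exp(-d2 / (2 * sigma ** 2))   # neighbourhood weights
                    self.w += lr * h[:, None] * (x - self.w)

    som = SOM(6, 6, 13)
    som.train(X)

    # Label each map node by majority vote of the training samples it wins,
    # then classify every sample via its best-matching unit's label.
    votes = np.zeros((36, 2))
    for x, label in zip(X, y):
        votes[som.bmu(x), label] += 1
    node_label = votes.argmax(axis=1)

    pred = np.array([node_label[som.bmu(x)] for x in X])
    accuracy = (pred == y).mean()
    ```

    With real recordings, the synthetic clusters would be replaced by features extracted per utterance (e.g. averaged MFCC frames), but the map-training and node-labelling steps stay the same.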

  19. Speech-like rhythm in a voiced and voiceless orangutan call.

    Directory of Open Access Journals (Sweden)

    Adriano R Lameira

    Full Text Available The evolutionary origins of speech remain obscure. Recently, it was proposed that speech derived from monkey facial signals which exhibit a speech-like rhythm of ∼5 open-close lip cycles per second. In monkeys, these signals may also be vocalized, offering a plausible evolutionary stepping stone towards speech. Three essential predictions remain, however, to be tested to assess this hypothesis' validity: (i) great apes, our closest relatives, should likewise produce 5 Hz-rhythm signals; (ii) speech-like rhythm should involve calls articulatorily similar to consonants and vowels, given that speech rhythm is the direct product of stringing together these two basic elements; and (iii) speech-like rhythm should be experience-based. Via cinematic analyses we demonstrate that an ex-entertainment orangutan produces two calls at a speech-like rhythm, coined "clicks" and "faux-speech." Like voiceless consonants, clicks required no vocal fold action, but did involve independent manoeuvring over lips and tongue. In parallel to vowels, faux-speech showed harmonic and formant modulations, implying vocal fold and supralaryngeal action. This rhythm was several times faster than orangutan chewing rates, as observed in monkeys and humans. Critically, this rhythm was seven-fold faster, and contextually distinct, than any other known rhythmic calls described to date in the largest database of the orangutan repertoire ever assembled. The first two predictions advanced by this study are validated and, based on parsimony and exclusion of potential alternative explanations, initial support is given to the third prediction. Irrespective of the putative origins of these calls and underlying mechanisms, our findings demonstrate irrevocably that great apes are not respiratorily, articulatorily, or neurologically constrained for the production of consonant- and vowel-like calls at speech rhythm. Orangutan clicks and faux-speech confirm the importance of rhythmic speech

  20. Emotion, affect and personality in speech the bias of language and paralanguage

    CERN Document Server

    Johar, Swati

    2016-01-01

    This book explores the various categories of speech variation and works to draw a line between linguistic and paralinguistic phenomena of speech. Paralinguistic contrast is crucial to human speech but has proven to be one of the most difficult aspects to handle in speech systems. In the quest for solutions in speech technology and the speech sciences, this book narrows the gap between speech technologists and phoneticians and emphasizes the efforts required to accomplish the goal of paralinguistic control in speech technology applications, as well as the acute need for a multidisciplinary categorization system. This interdisciplinary work on paralanguage will serve not only as a source of information but also as a theoretical model for linguists, sociologists, psychologists, phoneticians and speech researchers.