WorldWideScience

Sample records for delayed speech development

  1. Delayed speech development in children: Introduction to terminology

    Directory of Open Access Journals (Sweden)

    M. Yu. Bobylova

    2017-01-01

    There has recently been an increase in the number of children diagnosed with delayed speech development. The delay is often compensated with age, but a mild deficiency frequently remains for life. Delayed speech development is more common in boys than in girls. Its etiology is unknown in most cases, so a child should be followed up to make an accurate diagnosis. Genetic predisposition or environmental factors frequently influence speech development. The course of speech delay varies. In a number of disorders (childhood disintegrative disorder, Landau–Kleffner syndrome), there is evidence of normal speech development up to a certain age, after which speech stops developing or even regresses. By comparison, in autism speech development is generally altered even at the preverbal stage (the revival (animation) complex fails to form; babbling is sparse, emotionally flat, and unintelligible; at the same time, the baby reproduces whole phrases without using them to communicate). These speech disorders are considered not only a delay but also a developmental abnormality. Speech disorders in children should be diagnosed as early as possible in order to initiate corrective measures in time. In this case, a physician makes the diagnosis and a special education teacher carries out the corrective work. The successful collaboration and mutual understanding of the specialists in these areas will determine the child's future quality of life. This paper focuses on the terminology and classification of delays, which are necessary for physicians and teachers to speak the same language.

  2. Speech and Language Delay

    Science.gov (United States)

    What is a speech and language delay? A speech and language delay ...

  3. Speech and language development in cognitively delayed children with cochlear implants.

    Science.gov (United States)

    Holt, Rachael Frush; Kirk, Karen Iler

    2005-04-01

    The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed from blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. However, it may

  4. Effects of music therapy in the treatment of children with delayed speech development - results of a pilot study

    OpenAIRE

    Linden, Ulrike; Groß, Wibke; Ostermann, Thomas

    2010-01-01

    Background: Language development is one of the most significant processes of early childhood development. Children with delayed speech development are more at risk of acquiring other cognitive, social-emotional, and school-related problems. Music therapy appears to facilitate speech development in children, even within a short period of time. The aim of this pilot study is to explore the effects of music therapy in children with delayed speech development. Methods: A total of 18 childr...

  5. Effects of music therapy in the treatment of children with delayed speech development - results of a pilot study

    Science.gov (United States)

    2010-01-01

    Background: Language development is one of the most significant processes of early childhood development. Children with delayed speech development are more at risk of acquiring other cognitive, social-emotional, and school-related problems. Music therapy appears to facilitate speech development in children, even within a short period of time. The aim of this pilot study is to explore the effects of music therapy in children with delayed speech development. Methods: A total of 18 children aged 3.5 to 6 years with delayed speech development took part in this observational study in which music therapy and no treatment were compared to demonstrate effectiveness. Individual music therapy was provided on an outpatient basis. An ABAB reversal design with alternations between music therapy and no treatment with an interval of approximately eight weeks between the blocks was chosen. Before and after each study period, a speech development test, a non-verbal intelligence test for children, and music therapy assessment scales were used to evaluate the speech development of the children. Results: Compared to the baseline, we found a positive development in the study group after receiving music therapy. Both phonological capacity and the children's understanding of speech increased under treatment, as well as their cognitive structures, action patterns, and level of intelligence. Throughout the study period, developmental age converged with their biological age. Ratings according to the Nordoff-Robbins scales showed clinically significant changes in the children, namely in the areas of client-therapist relationship and communication. Conclusions: This study suggests that music therapy may have a measurable effect on the speech development of children through the treatment's interactions with fundamental aspects of speech development, including the ability to form and maintain relationships and prosodic abilities. Thus, music therapy may provide a basic and supportive therapy for

  6. Effects of music therapy in the treatment of children with delayed speech development - results of a pilot study

    Directory of Open Access Journals (Sweden)

    Linden, Ulrike

    2010-07-01

    Background: Language development is one of the most significant processes of early childhood development. Children with delayed speech development are more at risk of acquiring other cognitive, social-emotional, and school-related problems. Music therapy appears to facilitate speech development in children, even within a short period of time. The aim of this pilot study is to explore the effects of music therapy in children with delayed speech development. Methods: A total of 18 children aged 3.5 to 6 years with delayed speech development took part in this observational study in which music therapy and no treatment were compared to demonstrate effectiveness. Individual music therapy was provided on an outpatient basis. An ABAB reversal design with alternations between music therapy and no treatment with an interval of approximately eight weeks between the blocks was chosen. Before and after each study period, a speech development test, a non-verbal intelligence test for children, and music therapy assessment scales were used to evaluate the speech development of the children. Results: Compared to the baseline, we found a positive development in the study group after receiving music therapy. Both phonological capacity and the children's understanding of speech increased under treatment, as well as their cognitive structures, action patterns, and level of intelligence. Throughout the study period, developmental age converged with their biological age. Ratings according to the Nordoff-Robbins scales showed clinically significant changes in the children, namely in the areas of client-therapist relationship and communication. Conclusions: This study suggests that music therapy may have a measurable effect on the speech development of children through the treatment's interactions with fundamental aspects of speech development, including the ability to form and maintain relationships and prosodic abilities. Thus, music therapy may provide a basic

  7. Effects of music therapy in the treatment of children with delayed speech development - results of a pilot study.

    Science.gov (United States)

    Gross, Wibke; Linden, Ulrike; Ostermann, Thomas

    2010-07-21

    Language development is one of the most significant processes of early childhood development. Children with delayed speech development are more at risk of acquiring other cognitive, social-emotional, and school-related problems. Music therapy appears to facilitate speech development in children, even within a short period of time. The aim of this pilot study is to explore the effects of music therapy in children with delayed speech development. A total of 18 children aged 3.5 to 6 years with delayed speech development took part in this observational study in which music therapy and no treatment were compared to demonstrate effectiveness. Individual music therapy was provided on an outpatient basis. An ABAB reversal design with alternations between music therapy and no treatment with an interval of approximately eight weeks between the blocks was chosen. Before and after each study period, a speech development test, a non-verbal intelligence test for children, and music therapy assessment scales were used to evaluate the speech development of the children. Compared to the baseline, we found a positive development in the study group after receiving music therapy. Both phonological capacity and the children's understanding of speech increased under treatment, as well as their cognitive structures, action patterns, and level of intelligence. Throughout the study period, developmental age converged with their biological age. Ratings according to the Nordoff-Robbins scales showed clinically significant changes in the children, namely in the areas of client-therapist relationship and communication. This study suggests that music therapy may have a measurable effect on the speech development of children through the treatment's interactions with fundamental aspects of speech development, including the ability to form and maintain relationships and prosodic abilities. Thus, music therapy may provide a basic and supportive therapy for children with delayed speech development

  8. SPEECH DELAY IN THE PRACTICE OF A PAEDIATRICIAN AND CHILD’S NEUROLOGIST

    Directory of Open Access Journals (Sweden)

    N. N. Zavadenko

    2015-01-01

    The article describes the main clinical forms and causes of speech delay in children. It presents modern data on the role of neurobiological factors in the pathogenesis of speech delay, including early organic damage to the central nervous system due to pregnancy and childbirth pathology, as well as genetic mechanisms. Early and accurate diagnosis of speech disorders in children requires knowledge of the normal patterns of speech development. The article presents indicators of pre-speech and speech development in children and describes a screening method for detecting speech delay. The main areas of complex correction are speech therapy, psycho-pedagogical and psychotherapeutic assistance, as well as pharmaceutical treatment. The capabilities of drug therapy for dysphasia (alalia) are shown.

  9. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: I. Development and Description of the Pause Marker

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article (PM I) is to describe the rationale for and development of the Pause Marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech from speech delay. Method: The authors describe and prioritize 7 criteria with which to evaluate the research and clinical…

  10. Evaluation and management of the child with speech delay.

    Science.gov (United States)

    Leung, A K; Kao, C P

    1999-06-01

    A delay in speech development may be a symptom of many disorders, including mental retardation, hearing loss, an expressive language disorder, psychosocial deprivation, autism, elective mutism, receptive aphasia and cerebral palsy. Speech delay may be secondary to maturation delay or bilingualism. Being familiar with the factors to look for when taking the history and performing the physical examination allows physicians to make a prompt diagnosis. Timely detection and early intervention may mitigate the emotional, social and cognitive deficits of this disability and improve the outcome.

  11. Speech and language support: How physicians can identify and treat speech and language delays in the office setting.

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society's Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children's speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation.

  12. Speech and language delay in two children: an unusual presentation of hyperthyroidism.

    Science.gov (United States)

    Sohal, Aman P S; Dasarathi, Madhuri; Lodh, Rajib; Cheetham, Tim; Devlin, Anita M

    2013-01-01

    Hyperthyroidism is rare in pre-school children. Untreated, it can have a profound effect on normal growth and development, particularly in the first 2 years of life. Although neurological manifestations of dysthyroid states are well known, specific expressive speech and language disorder as a presentation of hyperthyroidism is rarely documented. Case reports of two children with hyperthyroidism presenting with speech and language delay. We report two pre-school children with hyperthyroidism, who presented with expressive speech and language delay, and demonstrated a significant improvement in their language skills following treatment with anti-thyroid medication. Hyperthyroidism must be considered in all children presenting with speech and language difficulties, particularly expressive speech delay. Prompt recognition and early treatment are likely to improve outcome.

  13. Speech and language support: How physicians can identify and treat speech and language delays in the office setting

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society’s Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children’s speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation. PMID:24627648

  14. Typical versus delayed speech onset influences verbal reporting of autistic interests.

    Science.gov (United States)

    Chiodo, Liliane; Majerus, Steve; Mottron, Laurent

    2017-01-01

    The distinction between autism and Asperger syndrome has been abandoned in the DSM-5. However, this clinical categorization largely overlaps with the presence or absence of a speech onset delay, which is associated with clinical, cognitive, and neural differences. It is unknown whether these different speech development pathways and associated cognitive differences are involved in the heterogeneity of the restricted interests that characterize autistic adults. This study tested the hypothesis that speech onset delay, or conversely, early mastery of speech, orients the nature and verbal reporting of adult autistic interests. The occurrence of a priori defined descriptors for perceptual and thematic dimensions was determined, as well as the perceived function and benefits, in the responses of autistic people to a semi-structured interview about their intense interests. The number of words, grammatical categories, and proportion of perceptual/thematic descriptors were computed and compared between groups by analyses of variance. The participants comprised 40 autistic adults grouped according to the presence (N = 20) or absence (N = 20) of speech onset delay, as well as 20 non-autistic adults, also with intense interests, matched for non-verbal intelligence using Raven's Progressive Matrices. The overall nature, function, and benefit of intense interests were similar across autistic subgroups, and between autistic and non-autistic groups. However, autistic participants with a history of speech onset delay used more perceptual than thematic descriptors when talking about their interests, whereas the opposite was true for autistic individuals without speech onset delay. This finding remained significant after controlling for linguistic differences observed between the two groups. Verbal reporting, but not the nature or positive function, of intense interests differed between adult autistic individuals depending on their speech acquisition history: oral reporting of

  15. Parents' and speech and language therapists' explanatory models of language development, language delay and intervention.

    Science.gov (United States)

    Marshall, Julie; Goldbart, Juliet; Phillips, Julie

    2007-01-01

    Parental and speech and language therapist (SLT) explanatory models may affect engagement with speech and language therapy, but there has been a dearth of research in this area. This study investigated parents' and SLTs' views about language development, delay and intervention in pre-school children with language delay. The aims were to describe, explore and explain the thoughts, understandings, perceptions, beliefs, knowledge and feelings held by: a group of parents from East Manchester, UK, whose pre-school children had been referred with suspected language delay; and SLTs working in the same area, in relation to language development, language delay and language intervention. A total of 24 unstructured interviews were carried out: 15 with parents whose children had been referred for speech and language therapy and nine with SLTs who worked with pre-school children. The interviews were transcribed verbatim and coded using Atlas/ti. The data were analysed, subjected to respondent validation, and grounded theories and principled descriptions were developed to explain and describe parents' and SLTs' beliefs and views. Parent and SLT data are presented separately. There are commonalities and differences between the parents and the SLTs. Both groups believe that language development and delay are influenced by both external and internal factors. Parents give more weight to the role of gender, imitation and personality and value television and videos, whereas the SLTs value the 'right environment' and listening skills and consider that health/disability and socio-economic factors are important. Parents see themselves as experts on their child and have varied ideas about the role of SLTs, which do not always accord with SLTs' views. The parents and SLTs differ in their views of the roles of imitation and play in intervention. Parents typically try strategies before seeing an SLT. These data suggest that parents' ideas vary and that, although parents and SLTs may share some

  16. Delayed speech development, facial asymmetry, strabismus, and transverse ear lobe creases: a new syndrome?

    OpenAIRE

    Méhes, K

    1993-01-01

    A 4 year 9 month old boy and his 3 year 5 month old sister presented with delayed speech development, facial asymmetry, strabismus, and transverse ear lobe creases. The same features were found in their mother, but the father had no such anomalies. To our knowledge this familial association has not been described before and may represent an autosomal dominant syndrome.

  17. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: Introduction

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

  18. Hearing assessment in pre-school children with speech delay.

    Science.gov (United States)

    Psillas, George; Psifidis, Anestis; Antoniadou-Hitoglou, Magda; Kouloulas, Athanasios

    2006-09-01

    ...entities, especially of a psychiatric nature. The children with profound sensorineural hearing loss exhibited more severe speech delay than those with moderate to severe hearing loss. Regardless of etiology, early identification and intervention contribute to a positive outcome in this critical period of childhood for language development.

  19. Speech and language delay in children: A review and the role of a pediatric dentist

    Directory of Open Access Journals (Sweden)

    P Shetty

    2012-01-01

    Speech and language development is a useful indicator of a child's overall development and cognitive ability. Identification of children at risk for developmental delay or related problems may lead to intervention and assistance at a young age, when the chances for improvement are the best. This rationale supports screening of preschool children for speech and language delay or primary language impairment or disorder, which needs to be integrated into routine developmental surveillance practices of clinicians caring for children.

  20. Comparison of anxiety and child-care education characteristics of mothers who have children with or without speech delays.

    Science.gov (United States)

    Özdaş, Talih; Şahlı, Ayşe Sanem; Özdemir, Behiye Sarıkaya; Belgin, Erol

    2018-01-05

    Speech delay in a child can be the cause and/or the result of an emotional disorder. The child-rearing attitudes that parents adopt can have both positive and negative effects on the child's personality. The current study aimed to investigate the sociodemographic features and the anxiety of mothers of children with speech delay. One hundred five mothers of children aged between 3 and 6 years with speech delays were included in the patient group, and 105 mothers of children aged between 3 and 6 years with normal speech and language development were included in the control group. An information form covering demographic characteristics, the Family Life and Childrearing Attitude Scale (PARI, Parental Attitude Research Instrument), and the Beck Anxiety Scale were completed by all mothers in the patient and control groups. In the current study, there was a significant difference between the groups in terms of gender (p=0.001). According to the Parental Attitude Research Instrument, the mean score of mothers of children with speech delays was higher than that of mothers of typically developing children on the overprotective motherhood dimension; however, the differences on the dimensions of democratic attitude and provision of equality, refusal to be a housewife, husband-wife conflict, and suppression and discipline were not statistically significant. On the Beck Anxiety Scale, a significant difference was detected between the two groups (p<0.01). It was found that the mothers of children with speech delays had more severe levels of anxiety. The social structure of the family, the attitudes and behaviors of the mother, and the anxiety levels of mothers have important effects on child development. Thus, it is necessary to perform further studies related to speech delays, in which many factors play a role in the etiology. Copyright © 2017 Associação Brasileira de

  1. Speech development delay in a child with foetal alcohol syndrome

    Directory of Open Access Journals (Sweden)

    Jacek Wilczyński

    2016-09-01

    A female foetus was exposed in her mother's womb to high concentrations of alcohol at each stage of pregnancy on a long-term basis, which resulted in permanent disability. In addition to a number of deficiencies in the overall functioning of the child's body, there are serious problems pertaining to verbal communication. This thesis aims to describe foetal alcohol syndrome (FAS) and present the basic problems with communication functions in a child caused by damage to the brain structures responsible for speech development. The thesis includes a speech diagnosis and therapy program adapted to the presented case. In the discussion section we present the characteristics of communication disorders in children with FAS and describe the developmental malformations, neurobehavioral disorders, and environmental factors affecting the development of the child's speech.

  2. Imitation of contrastive lexical stress in children with speech delay

    Science.gov (United States)

    Vick, Jennell C.; Moore, Christopher A.

    2005-09-01

    This study examined the relationship between acoustic correlates of stress in trochaic (strong-weak), spondaic (strong-strong), and iambic (weak-strong) nonword bisyllables produced by children (30-50) with normal speech acquisition and children with speech delay. Ratios comparing the acoustic measures (vowel duration, rms, and f0) of the first syllable to the second syllable were calculated to evaluate the extent to which each phonetic parameter was used to mark stress. In addition, a calculation of the variability of jaw movement in each bisyllable was made. Finally, perceptual judgments of accuracy of stress production were made. Analysis of perceptual judgments indicated a robust difference between groups: While both groups of children produced errors in imitating the contrastive lexical stress models (~40%), the children with normal speech acquisition tended to produce trochaic forms in substitution for other stress types, whereas children with speech delay showed no preference for trochees. The relationship between segmental acoustic parameters, kinematic variability, and the ratings of stress by trained listeners will be presented.
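
    The stress ratios described above reduce to simple per-syllable arithmetic once vowel duration, RMS amplitude, and mean f0 have been measured for each syllable. A minimal sketch follows; the function name, input format, and example values are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def stress_ratios(syll1, syll2):
    """First-to-second-syllable ratios for duration, RMS amplitude, and f0.

    syll1, syll2: dicts with the vowel samples, duration, and mean f0 of each
    syllable. Ratios > 1 mean the first syllable is more prominent on that
    parameter (consistent with trochaic stress); values near 1 suggest spondees.
    """
    def rms(x):
        x = np.asarray(x, dtype=float)
        return np.sqrt(np.mean(x ** 2))

    return {
        "duration_ratio": syll1["duration_s"] / syll2["duration_s"],
        "rms_ratio": rms(syll1["samples"]) / rms(syll2["samples"]),
        "f0_ratio": syll1["mean_f0_hz"] / syll2["mean_f0_hz"],
    }

# Hypothetical measurements for one bisyllable production:
first = {"duration_s": 0.21, "samples": 0.3 * np.random.randn(2000),
         "mean_f0_hz": 260.0}
second = {"duration_s": 0.18, "samples": 0.2 * np.random.randn(2000),
          "mean_f0_hz": 240.0}
print(stress_ratios(first, second))
```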

  3. Auditory Brainstem Response Wave Amplitude Characteristics as a Diagnostic Tool in Children with Speech Delay with Unknown Causes

    Directory of Open Access Journals (Sweden)

    Susan Abadi

    2016-09-01

    Speech delay with an unknown cause is a problem among children. This diagnosis is the last differential diagnosis after observing normal findings in routine hearing tests. The present study was undertaken to determine whether auditory brainstem responses to click stimuli differ between normally developing children and children suffering from delayed speech with unknown causes. In this cross-sectional study, we compared click auditory brainstem responses between 261 children who were clinically diagnosed with delayed speech with unknown causes, based on normal routine auditory test findings and neurological examinations, and had >12 months of speech delay (case group) and 261 age- and sex-matched normally developing children (control group). Our results indicated that the case group exhibited significantly higher wave amplitude responses to click stimuli (waves I, III, and V) than did the control group (P=0.001). These amplitudes were significantly reduced after 1 year (P=0.001); however, they were still significantly higher than those of the control group (P=0.001). The significant differences were seen regardless of the age and the sex of the participants. There were no statistically significant differences between the 2 groups considering the latency of waves I, III, and V. In conclusion, the higher amplitudes of waves I, III, and V, which were observed in the auditory brainstem responses to click stimuli among the patients with speech delay with unknown causes, might be used as a diagnostic tool to track patients' improvement after treatment.

  4. Group delay functions and its applications in speech technology

    Indian Academy of Sciences (India)

    High resolution property: the resonance and anti-resonance peaks of the spectrum are sharply resolved in the group delay spectrum. Applications include segmenting the speech signal into syllable-like units without knowledge of the phonetic transcription.

  5. Significance of Joint Features Derived from the Modified Group Delay Function in Speech Processing

    Directory of Open Access Journals (Sweden)

    Murthy, Hema A.

    2007-01-01

    This paper investigates the significance of combining cepstral features derived from the modified group delay function with features derived from the short-time spectral magnitude, such as the MFCC. The conventional group delay function fails to capture the resonant structure and the dynamic range of the speech spectrum, primarily due to pitch periodicity effects. The group delay function is modified to suppress these spikes and to restore the dynamic range of the speech spectrum. Cepstral features are derived from the modified group delay function and are called the modified group delay feature (MODGDF). The complementarity and robustness of the MODGDF when compared with the MFCC are also analyzed using spectral reconstruction techniques. Combination of several spectral magnitude-based features and the MODGDF using feature fusion and likelihood combination is described. These features are then used for three speech processing tasks, namely syllable, speaker, and language recognition. Results indicate that combining the MODGDF with the MFCC at the feature level gives significant improvements for speech recognition tasks in noise. Combining the MODGDF and the spectral magnitude-based features gives a significant increase in recognition performance of 11% at best, while combining any two features derived from the spectral magnitude does not give any significant improvement.
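
    As a concrete illustration of the feature described above, the sketch below computes a modified group delay spectrum for one windowed speech frame and decorrelates it with a DCT: group delay is obtained from the spectra of x[n] and n·x[n], the denominator is replaced by a cepstrally smoothed spectrum, and the exponents alpha and gamma temper the dynamic range. The function name and parameter values are illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np
from scipy.fft import dct

def modgdf(frame, n_fft=512, alpha=0.4, gamma=0.9, lifter=8, n_ceps=13):
    """Minimal sketch of a modified group delay feature (MODGDF) vector.

    `frame` is one pre-emphasized, windowed speech frame; alpha, gamma, and
    the lifter length are illustrative values, not tuned settings.
    """
    x = np.asarray(frame, dtype=float)
    n = np.arange(len(x))
    X = np.fft.rfft(x, n_fft)         # spectrum of x[n]
    Y = np.fft.rfft(n * x, n_fft)     # spectrum of n * x[n]

    # Cepstrally smoothed magnitude spectrum: it replaces |X|^2 in the
    # group delay denominator to suppress pitch-related spikes.
    mag = np.abs(X) + 1e-10
    ceps = np.fft.irfft(np.log(mag), n_fft)
    win = np.zeros(n_fft)
    win[:lifter] = 1.0
    win[-(lifter - 1):] = 1.0         # keep the symmetric high-quefrency part
    smooth = np.exp(np.fft.rfft(ceps * win).real)

    # Modified group delay spectrum.
    tau = (X.real * Y.real + X.imag * Y.imag) / (smooth ** (2.0 * gamma))
    tau_m = np.sign(tau) * (np.abs(tau) ** alpha)

    # DCT decorrelation, analogous to the final step of MFCC extraction.
    return dct(tau_m, type=2, norm='ortho')[:n_ceps]

# Toy usage with a random "frame" just to show the call and output shape:
print(modgdf(np.hamming(400) * np.random.randn(400)).shape)  # (13,)
```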

  6. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  7. Screening for Speech and Language Delay in Children 5 Years Old and Younger: A Systematic Review.

    Science.gov (United States)

    Wallace, Ina F; Berkman, Nancy D; Watson, Linda R; Coyne-Beasley, Tamera; Wood, Charles T; Cullen, Katherine; Lohr, Kathleen N

    2015-08-01

    No recommendation exists for or against routine use of brief, formal screening instruments in primary care to detect speech and language delay in children through 5 years of age. This review aimed to update the evidence on screening and treating children for speech and language delay since the 2006 US Preventive Services Task Force systematic review. Medline, the Cochrane Library, PsycInfo, Cumulative Index to Nursing and Allied Health Literature, ClinicalTrials.gov, and reference lists were searched. We included studies reporting diagnostic accuracy of screening tools and randomized controlled trials reporting benefits and harms of treatment of speech and language delay. Two independent reviewers extracted data, checked accuracy, and assigned quality ratings using predefined criteria. We found no evidence for the impact of screening on speech and language outcomes. In 23 studies evaluating the accuracy of screening tools, sensitivity ranged between 50% and 94%, and specificity ranged between 45% and 96%. Twelve treatment studies reported improved outcomes in language, articulation, and stuttering; little evidence emerged for interventions improving other outcomes or for adverse effects of treatment. Risk factors associated with speech and language delay were male gender, family history, and low parental education. A limitation of this review is the lack of well-designed, well-conducted studies addressing whether screening for speech and language delay or disorders improves outcomes. Several screening tools can accurately identify children for diagnostic evaluations and interventions, but evidence is inadequate regarding applicability in primary care settings. Some treatments for young children identified with speech and language delays and disorders may be effective. Copyright © 2015 by the American Academy of Pediatrics.
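
    For reference, the sensitivity and specificity ranges quoted above come from comparing each screening tool's pass/fail calls with a reference diagnostic evaluation. A short worked example (with invented counts, not figures from the review) shows how the two quantities are derived from a 2x2 table:

```python
def screening_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity of a screening tool versus a reference
    diagnosis; tp, fp, fn, tn are the four cells of the 2x2 table."""
    sensitivity = tp / (tp + fn)   # delayed children correctly flagged
    specificity = tn / (tn + fp)   # typically developing children correctly passed
    return sensitivity, specificity

# Invented counts for 200 screened children:
sens, spec = screening_accuracy(tp=27, fp=22, fn=8, tn=143)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
# -> sensitivity = 77%, specificity = 87%
```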

  8. Delayed Referral in Children with Speech and Language Disorders for Rehabilitation Services

    Directory of Open Access Journals (Sweden)

    Roshanak Vameghi

    2015-03-01

    Objectives: Speech and language development is one of the main aspects of human development, and speech and language are among the most complex brain functions, ranking with the highest cortical functions such as thinking, reading and writing. Speech and language disorders are considered a major public health problem because they cause many secondary complications in childhood and adulthood that affect a person's overall socioeconomic status. Methods: This study was conducted in two phases. The first phase identified all potential factors influencing delay in the referral of children with speech and language disorders for rehabilitation services, based on the literature as well as the families' and experts' points of view. In the second phase, designed as a case-control study, the actual factors influencing the time of referral were compared between two groups of participants. Results: Parental knowledge of their children's speech- and language-related problems had no significant impact on on-time referral for treatment of children with speech and language disorders. After a definite diagnosis of a speech and language disorder in the child, parents' information about the consequences of speech and language disorders had a significant influence on early referral for speech and language pathology services. Discussion: In this study, family structure played an important role in the early identification of children with developmental disorders. Two-parent families had access to more resources than single-parent families. In addition, single-parent families may be more occupied with work and the business of life.

  9. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    We ordinarily perceive the sound of our own voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique. Participants read sentences under specific DAF delays (0, 30, 75, 120 ms) for three minutes to induce 'lag adaptation'. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound fed back while producing a simple vocalization rather than speech. We found that speech production under lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by a temporal recalibration mechanism that acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  10. The Influence of Socio-Economic Status and Ethnicity on Speech and Language Development

    Science.gov (United States)

    Basit, Tehmina N.; Hughes, Amanda; Iqbal, Zafar; Cooper, Janet

    2015-01-01

    A number of factors influence the speech and language development of young children. Delays in the development of speech and language can have repercussions for school attainment and life chances. This paper is based on a survey of 3- to 4-year-old children in the city of Stoke-on-Trent in the UK. It analyses the data collected from 255 children…

  11. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    Science.gov (United States)

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
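
    The spatiotemporal index referred to above is conventionally computed by amplitude-normalizing and time-normalizing each repeated movement trajectory and then summing the across-repetition standard deviations at a fixed number of time points. The sketch below is a minimal version of that calculation under those assumptions; the function name, the 50-point normalization, and the input format are illustrative, not taken from this study's analysis code.

```python
import numpy as np

def spatiotemporal_index(trajectories, n_points=50):
    """Spatiotemporal index (STI) over repeated productions of one utterance.

    trajectories: list of 1-D arrays (e.g., lower-lip displacement records),
    possibly of unequal length.
    """
    normalized = []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        traj = (traj - traj.mean()) / traj.std()          # amplitude-normalize
        old_t = np.linspace(0.0, 1.0, len(traj))
        new_t = np.linspace(0.0, 1.0, n_points)
        normalized.append(np.interp(new_t, old_t, traj))  # time-normalize
    stacked = np.vstack(normalized)
    # Sum of across-repetition standard deviations at each normalized time
    # point; larger values indicate more variable (less stable) articulation.
    return float(np.sum(stacked.std(axis=0)))

# Toy usage with synthetic repetitions of a movement pattern:
reps = [np.sin(np.linspace(0, 2 * np.pi, n)) + 0.05 * np.random.randn(n)
        for n in (180, 200, 220)]
print(spatiotemporal_index(reps))
```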

  12. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech

  13. Otitis Media and Speech/Language Development in Late-Talkers.

    Science.gov (United States)

    Paul, Rhea; And Others

    This study examines otitis media as a possible factor associated with increased risk for communicative handicap in a group of children with a possible vulnerability for language delay: "late-talkers." Speech and language outcomes at ages 3 and 4 were examined in 28 late talkers and 24 children with normal language development. Late…

  14. Prelinguistic communication development in children with childhood apraxia of speech: a retrospective analysis.

    Science.gov (United States)

    Highman, Chantelle; Leitão, Suze; Hennessey, Neville; Piek, Jan

    2012-02-01

    In a retrospective study of prelinguistic communication development, clinically referred preschool children (n = 9) aged 3-4 years, who as infants had failed a community-based screening program, were evaluated for features of childhood apraxia of speech (CAS). Four children showed no CAS features and had either delayed or normal language; five had three to seven CAS features, and all of these exhibited delayed language. These children were matched by age with 21 children with typically-developing (TD) speech and language skills. Case-control comparisons of retrospective data from 9 months of age for two participants with more severe features of CAS at preschool age showed a dissociated pattern with low expressive quotients on the Receptive-Expressive Emergent Language Assessment-Second Edition (REEL-2) and records of infrequent babbling, but normal receptive quotients. However, other profiles were observed. Two children with milder CAS features showed poor receptive and expressive development similar to other clinically referred children with no CAS features, and one child with severe CAS features showed poor receptive but normal expressive developmental milestones at 9 months and records of frequent babbling. Results suggest some but not all children with features of suspected CAS have a selective deficit originating within speech motor development.

  15. The WNT2 Gene Polymorphism Associated with Speech Delay Inherent to Autism

    Science.gov (United States)

    Lin, Ping-I; Chien, Yi-Ling; Wu, Yu-Yu; Chen, Chia-Hsiang; Gau, Susan Shur-Fen; Huang, Yu-Shu; Liu, Shih-Kai; Tsai, Wen-Che; Chiu, Yen-Nan

    2012-01-01

    Previous evidence suggests that language function is modulated by genetic variants on chromosome 7q31-36. However, it is unclear whether this region harbors loci that contribute to speech delay in autism. We previously reported that the WNT2 gene located on 7q31 was associated with the risk of autism. Additionally, two other genes on 7q31-36,…

  16. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose: Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between

  17. Auditory-motor interactions in pediatric motor speech disorders: Neurocomputational modeling of disordered development

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.; Guenther, F. H.; Brumberg, J.

    2014-01-01

    BACKGROUND/PURPOSE: Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between

  18. Development and validation of a screening procedure to identify speech-language delay in toddlers with cleft palate

    DEFF Research Database (Denmark)

    Jørgensen, Line Dahl; Willadsen, Elisabeth

    2017-01-01

    The purpose of this study was to develop and validate a clinically useful speech-language screening procedure for young children with cleft palate +/- cleft lip (CP) to identify those in need of speech-language intervention. Twenty-two children with CP were assigned to a +/- need for intervention condition based on assessment of consonant inventory using a real-time listening procedure in combination with parent-reported expressive vocabulary. These measures allowed evaluation of early speech-language skills found to correlate significantly with later speech-language difficulties in longitudinal studies of children with CP. The external validity of this screening procedure was evaluated by comparing the +/- need for intervention assignment determined by the screening procedure to experienced speech-language pathologists’ (SLPs’) clinical judgment of whether or not a child needed early...

  19. The Impact of Tympanostomy Tubes on Speech and Language Development in Children with Cleft Palate.

    Science.gov (United States)

    Shaffer, Amber D; Ford, Matthew D; Choi, Sukgi S; Jabbour, Noel

    2017-09-01

    Objective: Describe the impact of hearing loss, tympanostomy tube placement before palatoplasty, and number of tubes received on speech outcomes in children with cleft palate. Study Design: Case series with chart review. Setting: Tertiary care children's hospital. Subjects and Methods: Records from 737 children born between April 2005 and April 2015 who underwent palatoplasty at a tertiary children's hospital were reviewed. Exclusion criteria were cleft repair at an outside hospital, intact secondary palate, absence of postpalatoplasty speech evaluation, sensorineural or mixed hearing loss, no tubes, first tubes after palatoplasty, or first clinic visit after 12 months of age. Data from 152 patients with isolated cleft palate and 166 patients with cleft lip and palate were analyzed using Wilcoxon rank-sum, χ², and Fisher exact tests and logistic regression. Results: Most patients (242, 76.1%) received tubes before palatoplasty. Hearing loss after tubes, but not before, was associated with speech/language delays at 24 months (P = .005) and with language delays (P = .048) and speech sound production disorders (SSPDs, P = .040) at 5 years. Receiving tubes before palatoplasty was associated with a failed newborn hearing screen (P = .001) and younger age at first post-tubes type B tympanogram with normal canal volume (P = .015). Hearing loss after tubes (P = .021), language delays (P = .025), SSPDs (P = .003), and velopharyngeal insufficiency (P = .032) at 5 years, as well as speech surgery (P = .022), were associated with more tubes. Conclusion: Continued middle ear disease, reflected by hearing loss and multiple tubes, may impair speech and language development. Inserting tubes before palatoplasty did not mitigate these impairments better than later tube placement.

  20. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    OpenAIRE

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2011-01-01

    In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, r...

  1. Factors Affecting Delayed Referral for Speech Therapy in Iranian children with Speech and Language Disorders

    Directory of Open Access Journals (Sweden)

    Roshanak Vameghi

    2014-03-01

    Objective: Early detection of children who are at risk for speech and language impairment, and of those at early stages of delay, is crucial for the provision of early intervention services. Unfortunately, in Iran this disorder is often not identified or referred for proper treatment and rehabilitation at the early critical stages. Materials & Methods: This study was carried out in two phases. The first, qualitative phase was meant to identify all potentially influential factors through a literature review as well as by acquiring the viewpoints of experts and families on this issue. Twelve experts and 9 parents of children with speech and language disorders participated in semi-structured in-depth interviews, thereby completing the first draft of potentially influential factors compiled through the literature review. The completed list of factors led to the design of a questionnaire for identifying “factors affecting late referral in childhood speech and language impairment”. Face and content validity of the questionnaire were confirmed, and Cronbach's alpha was 0.81. Two groups of parents were asked to complete the questionnaire: as the case group, parents who had attended speech and language clinics located in the west and central regions of Tehran after their child was 3 years old, and as the control group, those who had attended before their child was 3 years old. Results: Among the seven factors that showed a significant difference between the two groups before a definite diagnosis of speech and language disorders was reached for the child, 3 factors were related to the type of guidance and consultation the family received from physicians, 2 factors were related to parents' lack of awareness and knowledge, and 2 factors were related to the screening services received. All six factors showing significant difference between the two groups after

  2. Screening for speech and language delay in preschool children: systematic evidence review for the US Preventive Services Task Force.

    Science.gov (United States)

    Nelson, Heidi D; Nygren, Peggy; Walker, Miranda; Panoscha, Rita

    2006-02-01

    Speech and language development is a useful indicator of a child's overall development and cognitive ability and is related to school success. Identification of children at risk for developmental delay or related problems may lead to intervention services and family assistance at a young age, when the chances for improvement are best. However, optimal methods for screening for speech and language delay have not been identified, and screening is practiced inconsistently in primary care. We sought to evaluate the strengths and limits of evidence about the effectiveness of screening and interventions for speech and language delay in preschool-aged children to determine the balance of benefits and adverse effects of routine screening in primary care, for the development of guidelines by the US Preventive Services Task Force. The target population includes all children up to 5 years old without previously known conditions associated with speech and language delay, such as hearing and neurologic impairments. Studies were identified from Medline, PsycINFO, and CINAHL databases (1966 to November 19, 2004), systematic reviews, reference lists, and experts. The evidence review included only English-language, published articles available through libraries. Only randomized, controlled trials were considered for examining the effectiveness of interventions. Outcome measures were considered if they were obtained at any time or age after screening and/or intervention, as long as the initial assessment occurred while the child was ... birth order, and family size. The performance characteristics of evaluation techniques that take ... Comparisons of 2 or more screening techniques in 1 population, and comparisons of a single screening technique across different populations, are lacking. Fourteen good- and fair-quality randomized, controlled trials of interventions

  3. Speech and language development in 2-year-old children with cerebral palsy.

    Science.gov (United States)

    Hustad, Katherine C; Allison, Kristen; McFadd, Emily; Riehle, Katherine

    2014-06-01

    We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment at or before 2 years of age.

  4. Oral Articulatory Control in Childhood Apraxia of Speech

    Science.gov (United States)

    Grigos, Maria I.; Moss, Aviva; Lu, Ying

    2015-01-01

    Purpose: The purpose of this research was to examine spatial and temporal aspects of articulatory control in children with childhood apraxia of speech (CAS), children with speech delay characterized by an articulation/phonological impairment (SD), and controls with typical development (TD) during speech tasks that increased in word length. Method:…

  5. Auditory-motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development.

    Science.gov (United States)

    Terband, H; Maassen, B; Guenther, F H; Brumberg, J

    2014-01-01

    Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Reliance on auditory feedback in children with childhood apraxia of speech.

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R

    2015-01-01

    Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  7. The development of co-speech gesture and its semantic integration with speech in 6- to 12-year-old children with autism spectrum disorders.

    Science.gov (United States)

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia

    2015-11-01

    Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying speech. Delay in gestural production was found in children with autism spectrum disorders through their middle to late childhood. Compared to their typically developing counterparts, children with autism spectrum disorders gestured less often and used fewer types of gestures, in particular markers, which carry culture-specific meaning. Typically developing children's gestural production was related to language and cognitive skills, but among children with autism spectrum disorders, gestural production was more strongly related to the severity of socio-communicative impairment. Gesture impairment also included the failure to integrate speech with gesture: in particular, supplementary gestures are absent in children with autism spectrum disorders. The findings extend our understanding of gestural production in school-aged children with autism spectrum disorders during spontaneous interaction. The results can help guide new therapies for gestural production for children with autism spectrum disorders in middle and late childhood. © The Author(s) 2014.

  8. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    Science.gov (United States)

    Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630

  9. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  10. DEVELOPMENT AND DISORDERS OF SPEECH IN CHILDHOOD.

    Science.gov (United States)

    KARLIN, ISAAC W.; AND OTHERS

    THE GROWTH, DEVELOPMENT, AND ABNORMALITIES OF SPEECH IN CHILDHOOD ARE DESCRIBED IN THIS TEXT DESIGNED FOR PEDIATRICIANS, PSYCHOLOGISTS, EDUCATORS, MEDICAL STUDENTS, THERAPISTS, PATHOLOGISTS, AND PARENTS. THE NORMAL DEVELOPMENT OF SPEECH AND LANGUAGE IS DISCUSSED, INCLUDING THEORIES ON THE ORIGIN OF SPEECH IN MAN AND FACTORS INFLUENCING THE NORMAL…

  11. Start/End Delays of Voiced and Unvoiced Speech Signals

    Energy Technology Data Exchange (ETDEWEB)

    Herrnstein, A

    1999-09-24

    Recent experiments using low-power EM-radar-like sensors (e.g., GEMs) have demonstrated a new method for measuring vocal fold activity and the onset times of voiced speech, as vocal fold contact begins to take place. Similarly, the end time of a voiced speech segment can be measured. Second, it appears that in most normal uses of American English speech, unvoiced-speech segments directly precede or directly follow voiced-speech segments. For many applications, it is useful to know typical duration times of these unvoiced speech segments. A corpus of spoken TIMIT words, phrases, and sentences, assembled earlier and recorded using simultaneously measured acoustic and EM-sensor glottal signals from 16 male speakers, was used for this study. By inspecting the onset (or end) of unvoiced speech using the acoustic signal, and the onset (or end) of voiced speech using the EM-sensor signal, the average duration times for unvoiced segments preceding the onset of vocalization were found to be 300 ms, and for following segments, 500 ms. An unvoiced speech period is then defined in time, first by using the onset of the EM-sensed glottal signal as the onset-time marker for the voiced speech segment and end marker for the unvoiced segment. Then, by subtracting 300 ms from the onset-time mark of voicing, the unvoiced speech segment start time is found. Similarly, the times for a following unvoiced speech segment can be found. While data of this nature have proven to be useful for work in our laboratory, a great deal of additional work remains to validate such data for use with general populations of users. These procedures have been useful for applying optimal processing algorithms over time segments of unvoiced, voiced, and non-speech acoustic signals. For example, these data appear to be of use in speaker validation, in vocoding, and in denoising algorithms.
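The timing rule described above lends itself to a simple post-processing step. The sketch below applies it under the assumption that voiced-segment boundaries (in seconds) have already been extracted from the EM-sensor glottal signal; the 300 ms and 500 ms offsets are the average durations reported in the abstract, and the function and variable names are illustrative rather than taken from the original work.

```python
# Sketch: derive nominal unvoiced-segment boundaries from EM-sensed voicing marks,
# using the average durations reported above (300 ms preceding, 500 ms following).

PRE_UNVOICED = 0.300   # average unvoiced duration before a voicing onset (s)
POST_UNVOICED = 0.500  # average unvoiced duration after a voicing end (s)

def unvoiced_segments(voiced_segments):
    """voiced_segments: list of (onset, end) times in seconds from the EM sensor.
    Returns a list of (start, end) times for the assumed unvoiced segments."""
    segments = []
    for onset, end in voiced_segments:
        segments.append((max(0.0, onset - PRE_UNVOICED), onset))  # preceding unvoiced stretch
        segments.append((end, end + POST_UNVOICED))               # following unvoiced stretch
    return segments

if __name__ == "__main__":
    # Example: two voiced stretches detected by the glottal sensor.
    print(unvoiced_segments([(1.20, 1.85), (2.40, 3.10)]))
```

Because the offsets are population averages, such boundaries are only approximate, which is consistent with the authors' caveat that further validation on general populations is needed.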

  12. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: III. Theoretical Coherence of the Pause Marker with Speech Processing Deficits in Childhood Apraxia of Speech

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method: PM and other…

  13. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  14. THE ONTOGENESIS OF SPEECH DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. E. Braudo

    2017-01-01

    Full Text Available The purpose of this article is to acquaint specialists working with children with developmental disorders with age-related norms for speech development. Many well-known linguists and psychologists have studied speech ontogenesis (logogenesis). Speech is a higher mental function which integrates many functional systems. Speech development in infants during the first months after birth is ensured by innate hearing and the emerging ability to fix the gaze on the face of an adult. Innate emotional reactions also develop during this period, turning into nonverbal forms of communication. At about 6 months a baby starts to pronounce some syllables; at 7–9 months he or she repeats various sound combinations pronounced by adults. At 10–11 months a baby begins to react to words addressed to him or her. The first words usually appear at the age of 1 year; this is the start of the stage of active speech development. At this stage it is acceptable for a child to confuse, rearrange, distort or omit sounds. By the age of 1.5 years a child begins to understand abstract explanations from adults. Significant vocabulary enlargement occurs between 2 and 3 years; grammatical structures of the language are formed during this period (a child starts to use phrases and sentences). Preschool age (3–7 years) is characterized by incorrect but steadily improving pronunciation of sounds and phonemic perception. The vocabulary increases; abstract speech and retelling skills are formed. Children over 7 years of age continue to improve their grammar, writing and reading skills. The described stages may not have strict age boundaries, since they depend not only on the environment but also on the child’s mental constitution, heredity and character.

  15. The Picture Exchange Communication System: Effects on Manding and Speech Development for School-Aged Children with Autism

    Science.gov (United States)

    Tincani, Matt; Crozier, Shannon; Alazett, Shannon

    2006-01-01

    We examined the effects of the Picture Exchange Communication System (PECS; Frost & Bondy, 2002) on the manding (requesting) and speech development of school-aged children with autism. In study 1, two participants, Damian and Bob, were taught PECS within a delayed multiple baseline design. Both participants demonstrated increased levels of manding…

  16. A novel method for assessing the development of speech motor function in toddlers with autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Katherine eSullivan

    2013-03-01

    Full Text Available There is increasing evidence to show that indicators other than socio-cognitive abilities might predict communicative function in Autism Spectrum Disorders (ASD). A potential area of research is the development of speech motor function in toddlers. Utilizing a novel measure called ‘articulatory features’, we assess the abilities of toddlers to produce sounds at different timescales as a metric of their speech motor skills. In the current study, we examined (1) whether speech motor function differed between toddlers with ASD, developmental delay, and typical development; and (2) whether differences in speech motor function are correlated with standard measures of language in toddlers with ASD. Our results revealed significant differences between a subgroup of the ASD population with poor verbal skills and the other groups for the articulatory features associated with the shortest time scale, namely place of articulation (p < 0.05). We also found significant correlations between articulatory features and language and motor ability as assessed by the Mullen and the Vineland scales for the ASD group. Our findings suggest that articulatory features may be an additional measure of speech motor function that could potentially be useful as an early risk indicator of ASD.

  17. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under the six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were the video recordings of a face of a female Japanese speaking long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delays in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplace in which a worker must extract relevant speech from all the other competing noises.

  18. Speech Delay and Its Affecting Factors (Case Study in a Child with Initial Aq)

    Science.gov (United States)

    Syamsuardi

    2015-01-01

    Every parent wishes for appropriate development in their children. One of parents' great concerns is their children's speech development; they worry that their children may be late to speak. Children's speech development is influenced by physical and environmental factors. The causes of physical factors are related to the problem but the role…

  19. Segregation of a 4p16.3 duplication with a characteristic appearance, macrocephaly, speech delay and mild intellectual disability in a 3-generation family

    DEFF Research Database (Denmark)

    Schönewolf-Greulich, Bitten; Ravn, Kirstine; Hamborg-Petersen, Bente

    2013-01-01

    delay/intellectual disability. In contrast, small duplications of 4p are rare, but with the advent of microarray techniques a few cases have been reported in recent years. Here we describe a 3 Mb duplication at 4p16.3 segregating with a characteristic phenotype, macrocephaly, speech delay and mild...

  20. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.

  1. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    on speech production characteristics, but also helps in accurate analysis of speech. … include time delay estimation, speech enhancement from single and multi- … $\log\left(\frac{E[k]}{\sum_{l=0}^{K-1} E[l]}\right)$, (7) where K is the number of samples in the …

  2. Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology

    Science.gov (United States)

    2015-01-01

    Inner speech—also known as covert speech or verbal thinking—has been implicated in theories of cognitive development, speech monitoring, executive function, and psychopathology. Despite a growing body of knowledge on its phenomenology, development, and function, approaches to the scientific study of inner speech have remained diffuse and largely unintegrated. This review examines prominent theoretical approaches to inner speech and methodological challenges in its study, before reviewing current evidence on inner speech in children and adults from both typical and atypical populations. We conclude by considering prospects for an integrated cognitive science of inner speech, and present a multicomponent model of the phenomenon informed by developmental, cognitive, and psycholinguistic considerations. Despite its variability among individuals and across the life span, inner speech appears to perform significant functions in human cognition, which in some cases reflect its developmental origins and its sharing of resources with other cognitive processes. PMID:26011789

  3. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  4. The development of speech production in children with cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Chapman, Kathy

    2012-01-01

    The purpose of this chapter is to provide an overview of speech development of children with cleft palate +/- cleft lip. The chapter will begin with a discussion of the impact of clefting on speech. Next, we will provide a brief description of those factors impacting speech development for this population of children. Finally, research examining various aspects of speech development of infants and young children with cleft palate (birth to age five) will be reviewed. This final section will be organized by typical stages of speech sound development (e.g., prespeech, the early word stage...

  5. JNDs of interaural time delay (ITD) of selected frequency bands in speech and music signals

    Science.gov (United States)

    Aliphas, Avner; Colburn, H. Steven; Ghitza, Oded

    2002-05-01

    JNDs of interaural time delay (ITD) of selected frequency bands in the presence of other frequency bands have been reported for noiseband stimuli [Zurek (1985); Trahiotis and Bernstein (1990)]. Similar measurements will be reported for speech and music signals. When stimuli are synthesized with bandpass/band-stop operations, performance with complex stimuli is similar to that with noisebands (JNDs of tens or hundreds of microseconds); however, the resulting waveforms, when viewed through a model of the auditory periphery, show distortions (irregularities in phase and level) at the boundaries of the target band of frequencies. An alternate synthesis method based upon group-delay filtering operations does not show these distortions and is being used for the current measurements. Preliminary measurements indicate that when music stimuli are created using the new techniques, JNDs of ITDs increase significantly compared to previous studies, with values on the order of milliseconds.
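As a rough illustration of the kind of stimulus manipulation described above, the sketch below imposes an ITD on a single target band by applying a linear phase ramp to one channel in the frequency domain. This is the naive bandpass-style approach; as the abstract notes, restricting the delay to hard band edges can introduce boundary irregularities, which is what motivated the group-delay synthesis used in the study. All names are hypothetical, and the code is not the authors' procedure.

```python
import numpy as np

def delay_band(x, fs, band, itd):
    """Delay only the (f_lo, f_hi) band of `x` by `itd` seconds, via a linear
    phase ramp applied to that band in the frequency domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    X[sel] *= np.exp(-2j * np.pi * f[sel] * itd)
    return np.fft.irfft(X, n=len(x))

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    left = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t)
    # Right ear: same signal, but the 1-3 kHz band lags by 200 microseconds.
    right = delay_band(left, fs, (1000.0, 3000.0), itd=200e-6)
```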

  6. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    Science.gov (United States)

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  7. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  8. Self-regulatory speech during planning and problem-solving in children with SLI and their typically developing peers.

    Science.gov (United States)

    Abdul Aziz, Safiyyah; Fletcher, Janet; Bayliss, Donna M

    2017-05-01

    Past research with children with specific language impairment (SLI) has shown them to have poorer planning and problem-solving ability, and delayed self-regulatory speech (SRS) relative to their typically developing (TD) peers. However, the studies are few in number and are restricted in terms of the number and age range of participants, which limits our understanding of the nature and extent of any delays. Moreover, no study has examined the performance of a significant subset of children with SLI, those who have hyperactive and inattentive behaviours. This cross-sectional study aimed to compare the performance of young children with SLI (aged 4-7 years) with that of their TD peers on a planning and problem-solving task and to examine the use of SRS while performing the task. Within each language group, the performance of children with and without hyperactive and inattentive behaviours was further examined. Children with SLI (n = 91) and TD children (n = 81), with and without hyperactive and inattentive behaviours across the three earliest school years (Kindergarten, Preprimary and Year 1) were video-taped while they completed the Tower of London (TOL), a planning and problem-solving task. Their recorded speech was coded and analysed to look at differences in SRS and its relation to TOL performance across the groups. Children with SLI scored lower on the TOL than TD children. Additionally, children with hyperactive and inattentive behaviours performed worse than those without hyperactive and inattentive behaviours, but only in the SLI group. This suggests that children with SLI with hyperactive and inattentive behaviours experience a double deficit. Children with SLI produced less inaudible muttering than TD children, and showed no reduction in social speech across the first three years of school. Finally, for children with SLI, a higher percentage performed better on the TOL when they used SRS than when they did not. The results point towards a significant delay

  9. Delayed Speech or Language Development

    Science.gov (United States)

    ... "dada" well before their first birthday, and most toddlers can say about 20 words by the time ...

  10. Effects of Background Noise on Cortical Encoding of Speech in Autism Spectrum Disorders

    Science.gov (United States)

    Russo, Nicole; Zecker, Steven; Trommer, Barbara; Chen, Julia; Kraus, Nina

    2009-01-01

    This study provides new evidence of deficient auditory cortical processing of speech in noise in autism spectrum disorders (ASD). Speech-evoked responses (approximately 100-300 ms) in quiet and background noise were evaluated in typically-developing (TD) children and children with ASD. ASD responses showed delayed timing (both conditions) and…

  11. Coordination of head movements and speech in first encounter dialogues

    DEFF Research Database (Denmark)

    Paggio, Patrizia

    2015-01-01

    This paper presents an analysis of the temporal alignment between head movements and associated speech segments in the NOMCO corpus of first encounter dialogues [1]. Our results show that head movements tend to start slightly before the onset of the corresponding speech sequence and to end slightly after, but also that there are delays in both directions in the range of -/+ 1 s. Various factors that may influence delay duration are investigated. Correlations are found between delay length and the duration of the speech sequences associated with the head movements. Effects due to the different

  12. Speech Motor Programming in Apraxia of Speech: Evidence from a Delayed Picture-Word Interference Task

    Science.gov (United States)

    Mailend, Marja-Liisa; Maas, Edwin

    2013-01-01

    Purpose: Apraxia of speech (AOS) is considered a speech motor programming impairment, but the specific nature of the impairment remains a matter of debate. This study investigated 2 hypotheses about the underlying impairment in AOS framed within the Directions Into Velocities of Articulators (DIVA; Guenther, Ghosh, & Tourville, 2006) model: The…

  13. Visual stimuli in intervention approaches for pre-schoolers diagnosed with phonological delay.

    Science.gov (United States)

    Pedro, Cassandra Ferreira; Lousada, Marisa; Hall, Andreia; Jesus, Luis M T

    2018-04-01

    The aim of this study was to develop and content-validate specific speech and language intervention picture cards: the Letter-Sound (L&S) cards. The study also focused on assessing the influence of these cards on letter-sound correspondences and speech sound production. An expert panel of six speech and language therapists analysed and discussed the L&S cards based on several previously established criteria. A speech and language therapist carried out a 6-week therapeutic intervention with a group of seven Portuguese phonologically delayed pre-schoolers aged 5;3 to 6;5. The modified Bland-Altman method revealed good agreement among evaluators; that is, the majority of the values fell within the agreement limits. Additional outcome measures were collected before and after the therapeutic intervention. Results indicate that the L&S cards facilitate the acquisition of letter-sound correspondences. Regarding speech sound production, some improvements were also observed at word level. The L&S cards are therefore likely to provide phonetic cues, which are crucial for the correct production of therapeutic targets. These visual cues seemed to have helped children with phonological delay to develop the above-mentioned skills.

  14. Adapting to foreign-accented speech: The role of delay in testing

    NARCIS (Netherlands)

    Witteman, M.J.; Bardhan, N.P.; Weber, A.C.; McQueen, J.M.

    2011-01-01

    Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation.

  15. Recurrent Respiratory Papillomatosis Causing Chronic Stridor and Delayed Speech in an 18-Month-Old Boy

    Directory of Open Access Journals (Sweden)

    Adel Alharbi

    2006-01-01

    Full Text Available Recurrent respiratory papillomatosis is a relatively uncommon disease that presents clinically with symptoms ranging from hoarseness to severe dyspnea. Human papilloma virus types 6 and 11 are important in the etiology of papillomas and are most probably transmitted from mother to child during birth. Although spontaneous remission is frequent, pulmonary spread and/or malignant transformation resulting in death has been reported. CO2 laser evaporation of papillomas and adjuvant drug therapy using lymphoblastoid interferon-alpha are the most common treatments. However, several other treatments have been tried, with varying success. In the present report, a case of laryngeal papillomatosis presenting with chronic stridor and delayed speech is described.

  16. The effects of mands and models on the speech of unresponsive language-delayed preschool children.

    Science.gov (United States)

    Warren, S F; McQuarter, R J; Rogers-Warren, A K

    1984-02-01

    The effects of the systematic use of mands (non-yes/no questions and instructions to verbalize), models (imitative prompts), and specific consequent events on the productive verbal behavior of three unresponsive, socially isolate, language-delayed preschool children were investigated in a multiple-baseline design within a classroom free play period. Following a lengthy intervention condition, experimental procedures were systematically faded out to check for maintenance effects. The treatment resulted in increases in total verbalizations and nonobligatory speech (initiations) by the subjects. Subjects also became more responsive in obligatory speech situations. In a second free play (generalization) setting, increased rates of total child verbalizations and nonobligatory verbalizations were observed for all three subjects, and two of the three subjects were more responsive compared to their baselines in the first free play setting. Rate of total teacher verbalizations and questions were also higher in this setting. Maintenance of the treatment effects was shown during the fading condition in the intervention setting. The subjects' MLUs (mean length of utterance) increased during the intervention condition when the teacher began prompting a minimum of two-word utterances in response to a mand or model.

  17. Sex differences in multisensory speech processing in both typically developing children and those on the autism spectrum.

    Directory of Open Access Journals (Sweden)

    Lars A. Ross

    2015-05-01

    Full Text Available Background: Previous work has revealed sizeable deficits in the abilities of children with an autism spectrum disorder (ASD) to integrate auditory and visual speech signals, with clear implications for social communication in this population. There is a strong male preponderance in ASD, with approximately four affected males for every female. The presence of sex differences in ASD symptoms suggests a sexual dimorphism in the ASD phenotype, and raises the question of whether this dimorphism extends to ASD traits in the neurotypical population. Here, we investigated possible sexual dimorphism in multisensory speech integration in both ASD and neurotypical individuals. Methods: We assessed whether males and females differed in their ability to benefit from visual speech when target words were presented under varying levels of signal-to-noise, in samples of neurotypical children and adults, and in children diagnosed with an ASD. Results: In typically developing (TD) children and children with ASD, females (n = 47 and n = 15, respectively) were significantly superior in their ability to recognize words under audiovisual listening conditions compared to males (n = 55 and n = 58, respectively). This sex difference was absent in our sample of neurotypical adults (n = 28 females; n = 28 males). Conclusions: We propose that the development of audiovisual integration is delayed in male relative to female children, a delay that is also observed in ASD. In neurotypicals, these sex differences disappear in early adulthood when females approach their performance maximum and males catch up. Our findings underline the importance of considering sex differences in the search for autism endophenotypes and strongly encourage increased efforts to study the underrepresented population of females within ASD.

  18. Swahili speech development: preliminary normative data from typically developing pre-school children in Tanzania.

    Science.gov (United States)

    Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa

    2015-01-01

    Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. To describe the speech development of 24 typically developing first language Swahili-speaking children between the ages of 3;0 and 5;11 years in Dar es Salaam, Tanzania. A cross-sectional design was used with six groups of four children in 6-month age bands. Single-word speech samples were obtained from each child using a set of culturally appropriate pictures designed to elicit all consonants and vowels of Swahili. Each child's speech was audio-recorded and phonetically transcribed using International Phonetic Alphabet (IPA) conventions. Children's speech development is described in terms of (1) phonetic inventory, (2) syllable structure inventory, (3) phonological processes and (4) percentage consonants correct (PCC) and percentage vowels correct (PVC). Results suggest a gradual progression in the acquisition of speech sounds and syllables between the ages of 3;0 and 5;11 years. Vowel acquisition was completed and most of the consonants acquired by age 3;0. Fricatives /z, s, h/ were acquired later, at 4 years, and /θ/ and /r/ were the last acquired consonants, at age 5;11. Older children were able to produce speech sounds more accurately and had fewer phonological processes in their speech than younger children. Common phonological processes included lateralization and sound preference substitutions. The study contributes a preliminary set of normative data on speech development of Swahili-speaking children. Findings are discussed in relation to theories of phonological development, and may be used as a basis for further normative studies with larger numbers of children and ultimately the development of a contextually relevant assessment of the phonology of Swahili
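PCC and PVC are conventional summary measures: the number of correctly produced consonants (or vowels) divided by the number attempted, times 100. A minimal sketch, assuming the target and produced transcriptions are already segment-aligned lists and using a placeholder vowel inventory:

```python
# Sketch: percentage consonants correct (PCC) and percentage vowels correct (PVC)
# from segment-aligned target vs. produced transcriptions (alignment assumed given).

VOWELS = set("aeiou")  # placeholder vowel inventory (Swahili has /a e i o u/)

def pcc_pvc(target, produced):
    """target, produced: equal-length lists of phone symbols."""
    c_total = c_correct = v_total = v_correct = 0
    for t, p in zip(target, produced):
        if t in VOWELS:
            v_total += 1
            v_correct += (t == p)
        else:
            c_total += 1
            c_correct += (t == p)
    pcc = 100.0 * c_correct / c_total if c_total else float("nan")
    pvc = 100.0 * v_correct / v_total if v_total else float("nan")
    return pcc, pvc

if __name__ == "__main__":
    # Target /simba/ produced as [timba]: one consonant substituted (s -> t),
    # so PCC is about 66.7 and PVC is 100.0.
    print(pcc_pvc(list("simba"), list("timba")))
```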

  19. Speech impairment in Down syndrome: a review.

    Science.gov (United States)

    Kent, Ray D; Vorperian, Houri K

    2013-02-01

    This review summarizes research on disorders of speech production in Down syndrome (DS) for the purposes of informing clinical services and guiding future research. Review of the literature was based on searches using MEDLINE, Google Scholar, PsycINFO, and HighWire Press, as well as consideration of reference lists in retrieved documents (including online sources). Search terms emphasized functions related to voice, articulation, phonology, prosody, fluency, and intelligibility. The following conclusions pertain to four major areas of review: voice, speech sounds, fluency and prosody, and intelligibility. The first major area is voice. Although a number of studies have reported on vocal abnormalities in DS, major questions remain about the nature and frequency of the phonatory disorder. Results of perceptual and acoustic studies have been mixed, making it difficult to draw firm conclusions or even to identify sensitive measures for future study. The second major area is speech sounds. Articulatory and phonological studies show that speech patterns in DS are a combination of delayed development and errors not seen in typical development. Delayed (i.e., developmental) and disordered (i.e., nondevelopmental) patterns are evident by the age of about 3 years, although DS-related abnormalities possibly appear earlier, even in infant babbling. The third major area is fluency and prosody. Stuttering and/or cluttering occur in DS at rates of 10%-45%, compared with about 1% in the general population. Research also points to significant disturbances in prosody. The fourth major area is intelligibility. Studies consistently show marked limitations in this area, but only recently has the research gone beyond simple rating scales.

  20. Music training for the development of speech segmentation.

    Science.gov (United States)

    François, Clément; Chobert, Julie; Besson, Mireille; Schön, Daniele

    2013-09-01

    The role of music training in fostering brain plasticity and developing high cognitive skills, notably linguistic abilities, is of great interest from both a scientific and a societal perspective. Here, we report results of a longitudinal study over 2 years using both behavioral and electrophysiological measures and a test-training-retest procedure to examine the influence of music training on speech segmentation in 8-year-old children. Children were pseudo-randomly assigned to either music or painting training and were tested on their ability to extract meaningless words from a continuous flow of nonsense syllables. While no between-group differences were found before training, both behavioral and electrophysiological measures showed improved speech segmentation skills across testing sessions for the music group only. These results show that music training directly causes facilitation in speech segmentation, thereby pointing to the importance of music for speech perception and more generally for children's language development. Finally these results have strong implications for promoting the development of music-based remediation strategies for children with language-based learning impairments.

  1. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Methods for eliciting, annotating, and analyzing databases for child speech development.

    Science.gov (United States)

    Beckman, Mary E; Plummer, Andrew R; Munson, Benjamin; Reidy, Patrick F

    2017-09-01

    Methods from automatic speech recognition (ASR), such as segmentation and forced alignment, have facilitated the rapid annotation and analysis of very large adult speech databases and databases of caregiver-infant interaction, enabling advances in speech science that were unimaginable just a few decades ago. This paper centers on two main problems that must be addressed in order to have analogous resources for developing and exploiting databases of young children's speech. The first problem is to understand and appreciate the differences between adult and child speech that cause ASR models developed for adult speech to fail when applied to child speech. These differences include the fact that children's vocal tracts are smaller than those of adult males and also changing rapidly in size and shape over the course of development, leading to between-talker variability across age groups that dwarfs the between-talker differences between adult men and women. Moreover, children do not achieve fully adult-like speech motor control until they are young adults, and their vocabularies and phonological proficiency are developing as well, leading to considerably more within-talker variability as well as more between-talker variability. The second problem then is to determine what annotation schemas and analysis techniques can most usefully capture relevant aspects of this variability. Indeed, standard acoustic characterizations applied to child speech reveal that adult-centered annotation schemas fail to capture phenomena such as the emergence of covert contrasts in children's developing phonological systems, while also revealing children's nonuniform progression toward community speech norms as they acquire the phonological systems of their native languages. Both problems point to the need for more basic research into the growth and development of the articulatory system (as well as of the lexicon and phonological system) that is oriented explicitly toward the construction of
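One common way ASR systems compensate for the vocal-tract-size differences discussed above is frequency warping (vocal tract length normalization). The sketch below applies a generic linear warp to a single magnitude-spectrum frame; it illustrates the general idea under assumed names and parameters, and is not a method proposed in the paper.

```python
import numpy as np

def warp_spectrum(mag, alpha):
    """VTLN-style linear warp of one magnitude-spectrum frame.
    `mag` is sampled on a uniform frequency grid; alpha > 1 reads energy from
    higher frequencies, roughly mapping a shorter (child-like) vocal tract
    onto a longer adult-like reference."""
    n = len(mag)
    src = np.clip(np.arange(n) * alpha, 0, n - 1)   # warped source bin positions
    return np.interp(src, np.arange(n), mag)        # resample along the frequency axis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = np.abs(np.fft.rfft(rng.standard_normal(512)))  # stand-in spectrum frame
    normalized = warp_spectrum(frame, alpha=1.2)            # assumed warp factor
```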

  3. Impact of speech-generating devices on the language development of a child with childhood apraxia of speech: a case study.

    Science.gov (United States)

    Lüke, Carina

    2016-01-01

    The purpose of the study was to evaluate the effectiveness of speech-generating devices (SGDs) on the communication and language development of a 2-year-old boy with severe childhood apraxia of speech (CAS). An A-B design was used over a treatment period of 1 year, followed by three additional follow-up measurements, in order to evaluate the implementation of SGDs in the speech therapy of a 2;7-year-old boy with severe CAS. In total, 53 therapy sessions were videotaped and analyzed to better understand his communicative (operationalized as means of communication) and linguistic (operationalized as intelligibility and consistency of speech-productions, lexical and grammatical development) development. The trend-lines of baseline phase A and intervention phase B were compared and percentage of non-overlapping data points were calculated to verify the value of the intervention. The use of SGDs led to an immediate increase in the communicative development of the child. An increase in all linguistic variables was observed, with a latency effect of eight to nine treatment sessions. The implementation of SGDs in speech therapy has the potential to be highly effective in regards to both communicative and linguistic competencies in young children with severe CAS. Implications for Rehabilitation Childhood apraxia of speech (CAS) is a neurological speech sound disorder which results in significant deficits in speech production and lead to a higher risk for language, reading and spelling difficulties. Speech-generating devices (SGD), as one method of augmentative and alternative communication (AAC), can effectively enhance the communicative and linguistic development of children with severe CAS.

  4. A low-delay 8 Kb/s backward-adaptive CELP coder

    Science.gov (United States)

    Neumeyer, L. G.; Leblanc, W. P.; Mahmoud, S. A.

    1990-01-01

    Code excited linear prediction coding is an efficient technique for compressing speech sequences. Communications quality of speech can be obtained at bit rates below 8 Kb/s. However, relatively large coding delays are necessary to buffer the input speech in order to perform the LPC analysis. A low delay 8 Kb/s CELP coder is introduced in which the short term predictor is based on past synthesized speech. A new distortion measure that improves the tracking of the formant filter is discussed. Formal listening tests showed that the performance of the backward adaptive coder is almost as good as the conventional CELP coder.
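The defining trick of a backward-adaptive coder is that the short-term predictor is re-estimated from the most recent synthesized (decoded) samples, which encoder and decoder already share, so no input look-ahead buffer is needed. Below is a minimal sketch of that adaptation step using autocorrelation LPC with the Levinson-Durbin recursion; the frame size, order, and function names are assumptions for illustration, not the coder described in the paper.

```python
import numpy as np

def lpc_from_signal(x, order):
    """LPC coefficients a[0..order] (a[0] = 1) via autocorrelation + Levinson-Durbin."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] += k * a[i - 1:0:-1]   # update previous coefficients
        a[i] = k                       # new reflection coefficient
        err *= 1.0 - k * k
    return a

def backward_adapt(synth_history, order=10, window=160):
    """Backward adaptation: derive the short-term predictor for the next frame
    from the most recent window of *synthesized* speech, so encoder and decoder
    stay in step without any input look-ahead."""
    return lpc_from_signal(np.asarray(synth_history, dtype=float)[-window:], order)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    past_synth = rng.standard_normal(400)   # stand-in for previously decoded speech
    print(backward_adapt(past_synth))
```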

  5. Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome.

    Science.gov (United States)

    D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette

    2016-08-01

    Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Internet Video Telephony Allows Speech Reading by Deaf Individuals and Improves Speech Perception by Cochlear Implant Users

    Science.gov (United States)

    Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal

    2013-01-01

    Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119

  7. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  8. Multiple Transcoding Impact on Speech Quality in Ideal Network Conditions

    Directory of Open Access Journals (Sweden)

    Martin Mikulec

    2015-01-01

    Full Text Available This paper deals with the impact of transcoding on speech quality. We focused mainly on transcoding between codecs without the negative influence of network parameters such as packet loss and delay, which ensured objective and repeatable results from our measurement. The measurement was performed on the Transcoding Measuring System developed especially for this purpose. The system is based on open source projects and is useful as a design tool for VoIP system administrators. The paper compares the most commonly used codecs from the transcoding perspective. Multiple transcodings between the G711, GSM and G729 codecs were performed and the speech quality of these calls was evaluated. The speech quality was measured by the Perceptual Evaluation of Speech Quality method, which provides results as a Mean Opinion Score, used to describe speech quality on a scale from 1 to 5. The obtained results indicate a speech quality degradation with every transcoding between two codecs.
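The measurement idea can be sketched as follows: pass a reference signal through a chain of encode/decode stages and score the result of each hop against the original reference. The codec stages below are crude placeholders rather than real G.711/GSM/G.729 implementations, the scoring call assumes the third-party Python `pesq` package, and the file name is hypothetical; this illustrates the procedure, not the authors' Transcoding Measuring System.

```python
import numpy as np
from pesq import pesq  # assumed third-party package (pip install pesq)

def transcode_chain(ref, fs, stages):
    """`stages`: list of (encode, decode) callables standing in for codecs.
    Returns the PESQ (MOS-like) score after each hop, always compared
    against the original reference signal."""
    scores, signal = [], ref
    for encode, decode in stages:
        signal = decode(encode(signal))
        scores.append(pesq(fs, ref, signal, 'nb'))   # narrowband mode for 8 kHz audio
    return scores

if __name__ == "__main__":
    from scipy.io import wavfile                      # assumed available
    fs, ref = wavfile.read("reference_8k.wav")        # hypothetical 8 kHz mono speech file
    ref = ref.astype(np.float32)
    # Crude stand-in "codec": 8-bit requantization (not a real speech codec).
    codec = (lambda x: np.round(x / 256.0), lambda c: (c * 256.0).astype(np.float32))
    print(transcode_chain(ref, fs, [codec, codec, codec]))
```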

  9. Low Delay Noise Reduction and Dereverberation for Hearing Aids

    Directory of Open Access Journals (Sweden)

    Heinrich W. Löllmann

    2009-01-01

    Full Text Available A new system for single-channel speech enhancement is proposed which achieves a joint suppression of late reverberant speech and background noise with a low signal delay and low computational complexity. It is based on a generalized spectral subtraction rule which depends on the variances of the late reverberant speech and background noise. The calculation of the spectral variances of the late reverberant speech requires an estimate of the reverberation time (RT), which is accomplished by a maximum likelihood (ML) approach. The enhancement with this blind RT estimation achieves almost the same speech quality as using the actual RT. In comparison to commonly used post-filters in hearing aids, which only perform a noise reduction, a significantly better objective and subjective speech quality is achieved. The proposed system performs time-domain filtering with coefficients adapted in the non-uniform (Bark-scaled) frequency domain. This allows a high speech quality to be achieved with a low signal delay, which is important for speech enhancement in hearing aids or related applications such as hands-free communication systems.
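
    For orientation, a minimal single-channel spectral-subtraction sketch is given below. It implements only a plain noise-magnitude subtraction with a spectral floor, assuming a speech-free lead-in for the noise estimate; the paper's generalized rule, its blind reverberation-time estimate and the Bark-scaled filter adaptation are not reproduced here.

```python
# Minimal spectral-subtraction sketch (noise suppression only), illustrating the
# general idea behind a subtraction rule; not the authors' full system.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_seconds=0.25, floor=0.05):
    f, t, X = stft(x, fs=fs, nperseg=256)          # hop = 128 samples by default
    mag, phase = np.abs(X), np.angle(X)
    # Estimate the noise magnitude spectrum from an assumed speech-free lead-in.
    noise_frames = max(1, int(noise_seconds * fs / 128))
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Subtract the noise estimate and apply a spectral floor to limit musical noise.
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=256)
    return y
```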

  10. Effects of Feedback Frequency and Timing on Acquisition, Retention, and Transfer of Speech Skills in Acquired Apraxia of Speech

    Science.gov (United States)

    Hula, Shannon N. Austermann; Robin, Donald A.; Maas, Edwin; Ballard, Kirrie J.; Schmidt, Richard A.

    2008-01-01

    Purpose: Two studies examined speech skill learning in persons with apraxia of speech (AOS). Motor-learning research shows that delaying or reducing the frequency of feedback promotes retention and transfer of skills. By contrast, immediate or frequent feedback promotes temporary performance enhancement but interferes with retention and transfer.…

  11. EXTERNAL SPEECH AND ITS INFLUENCE ON FORMATION OF A CHILD'S MENTALITY OF THE

    Directory of Open Access Journals (Sweden)

    E. V. Zhulina

    2017-01-01

    Full Text Available This article presents the author's understanding of approaches to studying a child's external speech and its influence on the formation of his or her mentality. The authors define the structure of this phenomenon, its components, and the levels of those parts of mental activity whose development depends directly on external speech. The authors show, in theoretical terms, that the structure of oral speech assimilation includes three main subsystems of regulation (speech, emotional, and communicative regulation), which are based on the resources of the individual and are tightly integrated, creating a specific pattern of regulation. Because of this structure of a child's mentality, in some cases a delay in external speech leads to affective and communicative disturbances that, in turn, negatively affect personal development. Without early help, the deviation in development becomes more noticeable and affects all spheres of mentality, communication, and the social and psychological adaptation of the child in general.

  12. Robust signal selection for linear prediction analysis of voiced speech

    NARCIS (Netherlands)

    Ma, C.; Kamp, Y.; Willems, L.F.

    1993-01-01

    This paper investigates a weighted LPC analysis of voiced speech. In view of the speech production model, the weighting function is either chosen to be the short-time energy function of the preemphasized speech sample sequence with certain delays or is obtained by thresholding the short-time energy
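
    As background for the weighted analysis described above, the sketch below shows conventional (unweighted) LPC of a pre-emphasized, windowed frame via the autocorrelation method and the Levinson-Durbin recursion; the energy-based weighting proposed in the paper is an additional step not shown here.

```python
# Sketch of standard LPC by the autocorrelation method (Levinson-Durbin),
# for orientation only; the paper's short-time-energy weighting is omitted.
import numpy as np

def lpc(frame, order=10, preemph=0.97):
    # Pre-emphasis and windowing, as commonly applied before LPC of voiced speech.
    x = np.append(frame[0], frame[1:] - preemph * frame[:-1])
    x = x * np.hamming(len(x))
    # Autocorrelation sequence r[0..order].
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
    # Levinson-Durbin recursion solving the normal equations.
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / (err + 1e-12)
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a, err  # a: prediction polynomial A(z), err: residual energy
```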

  13. A social feedback loop for speech development and its reduction in autism.

    Science.gov (United States)

    Warlaumont, Anne S; Richards, Jeffrey A; Gilkerson, Jill; Oller, D Kimbrough

    2014-07-01

    We analyzed the microstructure of child-adult interaction during naturalistic, daylong, automatically labeled audio recordings (13,836 hr total) of children (8- to 48-month-olds) with and without autism. We found that an adult was more likely to respond when the child's vocalization was speech related rather than not speech related. In turn, a child's vocalization was more likely to be speech related if the child's previous speech-related vocalization had received an immediate adult response rather than no response. Taken together, these results are consistent with the idea that there is a social feedback loop between child and caregiver that promotes speech development. Although this feedback loop applies in both typical development and autism, children with autism produced proportionally fewer speech-related vocalizations, and the responses they received were less contingent on whether their vocalizations were speech related. We argue that such differences will diminish the strength of the social feedback loop and have cascading effects on speech development over time. Differences related to socioeconomic status are also reported. © The Author(s) 2014.

  14. The development and validation of the speech quality instrument.

    Science.gov (United States)

    Chen, Stephanie Y; Griffin, Brianna M; Mancuso, Dean; Shiau, Stephanie; DiMattia, Michelle; Cellum, Ilana; Harvey Boyd, Kelly; Prevoteau, Charlotte; Kohlberg, Gavriel D; Spitzer, Jaclyn B; Lalwani, Anil K

    2017-12-08

    Although speech perception tests are available to evaluate hearing, there is no standardized validated tool to quantify speech quality. The objective of this study is to develop a validated tool to measure quality of speech heard. Prospective instrument validation study of 35 normal hearing adults recruited at a tertiary referral center. Participants listened to 44 speech clips of male/female voices reciting the Rainbow Passage. Speech clips included original and manipulated excerpts capturing goal qualities such as mechanical and garbled. Listeners rated clips on a 10-point visual analog scale (VAS) of 18 characteristics (e.g. cartoonish, garbled). Skewed distribution analysis identified mean ratings in the upper and lower 2-point limits of the VAS (ratings of 8-10, 0-2, respectively); items with inconsistent responses were eliminated. The test was pruned to a final instrument of nine speech clips that clearly define qualities of interest: speech-like, male/female, cartoonish, echo-y, garbled, tinny, mechanical, rough, breathy, soothing, hoarse, like, pleasant, natural. Mean ratings were highest for original female clips (8.8) and lowest for not-speech manipulation (2.1). Factor analysis identified two subsets of characteristics: internal consistency demonstrated Cronbach's alpha of 0.95 and 0.82 per subset. Test-retest reliability of total scores was high, with an intraclass correlation coefficient of 0.76. The Speech Quality Instrument (SQI) is a concise, valid tool for assessing speech quality as an indicator for hearing performance. SQI may be a valuable outcome measure for cochlear implant recipients who, despite achieving excellent speech perception, often experience poor speech quality. 2b. Laryngoscope, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
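
    The internal-consistency statistic reported for the two subsets of rating characteristics can be illustrated with a short worked example. The rating matrix below (listeners by items) is a random placeholder, not the study's data.

```python
# Sketch of Cronbach's alpha on a listeners-by-items rating matrix.
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array, rows = listeners, columns = rated items."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)          # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)      # variance of listener totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
demo = rng.uniform(0, 10, size=(35, 7))   # 35 listeners, 7 items on a 0-10 VAS
print(cronbach_alpha(demo))               # near 0 for uncorrelated random items
```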

  15. Individual development of preschool children-prevalences and determinants of delays in Germany: a cross-sectional study in Southern Bavaria

    Directory of Open Access Journals (Sweden)

    Stich Heribert L

    2012-12-01

    Full Text Available Abstract Background Even minor abnormalities of early child development may have dramatic long term consequences. Accurate prevalence rates for a range of developmental impairments have been difficult to establish. Since related studies have used different methodological approaches, direct comparisons of the prevalence of developmental delays are difficult. The understanding of the key factors affecting child development, especially in preschool aged children, remains limited. We used data from school entry examinations in Bavaria to measure the prevalence of developmental impairments in pre-school children beginning primary school in 1997–2009. Methods The developmental impairments of all school beginners in the district of Dingolfing-Landau, Bavaria were assessed using the modified “Bavarian School Entry Model” examination from 1997 to 2009 (N=13,182). The children were assessed for motor, cognitive, language and psychosocial impairments using a standardised medical protocol. Prevalence rates of impairments in twelve domains of development were estimated. Using uni- and multivariable logistic regression models, associations between selected factors and developmental delays were assessed. Results The highest prevalence existed for impairments of pronunciation (13.8%), followed by fine motor impairments (12.2%), and impairments of memory and concentration (11.3%), and the lowest for impairments of rhythm of speech (3.1%). Younger children displayed more developmental delays. Male gender was strongly associated with all developmental impairments (highest risk for fine motor impairments = OR 3.22, 95% confidence interval 2.86-3.63). Preschool children with siblings (vs. children without any siblings) were at higher risk of having impairments in pronunciation (OR 1.31, 1.14-1.50). The influence of non-German nationality was strong, with a maximum risk increase for the subareas of grammar and psychosocial development. Although children with non

  16. Individual development of preschool children-prevalences and determinants of delays in Germany: a cross-sectional study in Southern Bavaria.

    Science.gov (United States)

    Stich, Heribert L; Baune, Bernhard Th; Caniato, Riccardo N; Mikolajczyk, Rafael T; Krämer, Alexander

    2012-12-05

    Even minor abnormalities of early child development may have dramatic long term consequences. Accurate prevalence rates for a range of developmental impairments have been difficult to establish. Since related studies have used different methodological approaches, direct comparisons of the prevalence of developmental delays are difficult. The understanding of the key factors affecting child development, especially in preschool aged children, remains limited. We used data from school entry examinations in Bavaria to measure the prevalence of developmental impairments in pre-school children beginning primary school in 1997-2009. The developmental impairments of all school beginners in the district of Dingolfing-Landau, Bavaria were assessed using the modified "Bavarian School Entry Model" examination from 1997 to 2009 (N=13,182). The children were assessed for motor, cognitive, language and psychosocial impairments using a standardised medical protocol. Prevalence rates of impairments in twelve domains of development were estimated. Using uni- and multivariable logistic regression models, associations between selected factors and developmental delays were assessed. The highest prevalence existed for impairments of pronunciation (13.8%) followed by fine motor impairments (12.2%), and impairments of memory and concentration (11.3%) and the lowest for impairments of rhythm of speech (3.1%). Younger children displayed more developmental delays. Male gender was strongly associated with all developmental impairments (highest risk for fine motor impairments = OR 3.22, 95% confidence interval 2.86-3.63). Preschool children with siblings (vs. children without any siblings) were at higher risk of having impairments in pronunciation (OR 1.31, 1.14-1.50). The influence of non-German nationality was strong, with a maximum risk increase for the subareas of grammar and psychosocial development. Although children with non-German nationality had a reduced risk of disorders for the rhythm

  17. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: IV. the Pause Marker Index

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Three previous articles provided rationale, methods, and several forms of validity support for a diagnostic marker of childhood apraxia of speech (CAS), termed the pause marker (PM). Goals of the present article were to assess the validity and stability of the PM Index (PMI) to scale CAS severity. Method: PM scores and speech, prosody,…

  18. Comparison of the reliability of parental reporting and the direct test of the Thai Speech and Language Test.

    Science.gov (United States)

    Prathanee, Benjamas; Angsupakorn, Nipa; Pumnum, Tawitree; Seepuaham, Cholada; Jaiyong, Pechcharat

    2012-11-01

    To determine the reliability of parental or caregivers' reporting and of direct testing with the Thai Speech and Language Test for Children Aged 0-4 Years Old. Five investigators assessed speech and language abilities from video in both contexts: the parental or caregivers' report form and the test form of the Thai Speech and Language Test for Children Aged 0-4 Years Old. Twenty-five normal children and 30 children with delayed development or at risk for delayed speech and language skills were assessed at age intervals of 3, 6, 9, 12, 15, 18, 24, 30, 36 and 48 months. Agreement between parental or caregivers' reporting and testing was at a moderate level (0.41-0.60). Inter-rater reliability among investigators was excellent (0.86-1.00). The parental or caregivers' report form of the Thai Speech and Language Test for Children Aged 0-4 Years Old was an indicator of moderate reliability. Trained professionals could use both forms of this test as reliable tools at an excellent level.

  19. Prevalence of Speech Disorders in Arak Primary School Students, 2014-2015

    Directory of Open Access Journals (Sweden)

    Abdoreza Yavari

    2016-09-01

    Full Text Available Abstract Background: Speech disorders may cause irreparable psychosocial damage to a child's speech and language development. Voice, speech sound production and fluency disorders are speech disorders that may result from delay or impairment in the speech motor control mechanism, central nervous system disorders, improper language stimulation or voice abuse. Materials and Methods: This study examined the prevalence of speech disorders in 1393 students of Arak in grades 1 to 6 of primary school. After collecting continuous speech samples, picture description, passage reading and a phonetic test, we recorded the pathological signs of stuttering, articulation disorder and voice disorders on a special sheet. Results: The prevalence of articulation, voice and stuttering disorders was 8%, 3.5% and 1%, respectively, and the overall prevalence of speech disorders was 11.9%. The prevalence of speech disorders decreased with increasing grade. 12.2% of boy students and 11.7% of girl students of primary school in Arak had speech disorders. Conclusion: The prevalence of speech disorders among primary school students in Arak is similar to the prevalence of speech disorders in Kermanshah, but smaller than in many similar studies in Iran. It seems that racial and cultural diversity has some effect on increasing the prevalence of speech disorders in Arak city.

  20. Children with 7q11.23 Duplication Syndrome: Speech, Language, Cognitive, and Behavioral Characteristics and their Implications for Intervention

    OpenAIRE

    Velleman, Shelley L.; Mervis, Carolyn B.

    2011-01-01

    7q11.23 duplication syndrome is a recently-documented genetic disorder associated with severe speech delay, language delay, a characteristic facies, hypotonia, developmental delay, and social anxiety. Developmentally appropriate nonverbal pragmatic abilities are demonstrated in socially comfortable situations. Motor speech disorder (Childhood Apraxia of Speech and/or dysarthria), oral apraxia, and/or phonological disorder or symptoms of these disorders are common as are characteristics consis...

  1. Development of a Low-Cost, Noninvasive, Portable Visual Speech Recognition Program.

    Science.gov (United States)

    Kohlberg, Gavriel D; Gal, Ya'akov Kobi; Lalwani, Anil K

    2016-09-01

    Loss of speech following tracheostomy and laryngectomy severely limits communication to simple gestures and facial expressions that are largely ineffective. To facilitate communication in these patients, we seek to develop a low-cost, noninvasive, portable, and simple visual speech recognition program (VSRP) to convert articulatory facial movements into speech. A Microsoft Kinect-based VSRP was developed to capture spatial coordinates of lip movements and translate them into speech. The articulatory speech movements associated with 12 sentences were used to train an artificial neural network classifier. The accuracy of the classifier was then evaluated on a separate, previously unseen set of articulatory speech movements. The VSRP was successfully implemented and tested in 5 subjects. It achieved an accuracy rate of 77.2% (65.0%-87.6% for the 5 speakers) on a 12-sentence data set. The mean time to classify an individual sentence was 2.03 milliseconds (1.91-2.16). We have demonstrated the feasibility of a low-cost, noninvasive, portable VSRP based on Kinect to accurately predict speech from articulation movements in clinically trivial time. This VSRP could be used as a novel communication device for aphonic patients. © The Author(s) 2016.
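
    Only the classification stage of such a system lends itself to a compact illustration. The sketch below trains a small artificial neural network to map fixed-length lip-coordinate feature vectors to one of 12 sentence labels; the feature dimensionality, network size and the randomly generated data are placeholders rather than the authors' Kinect pipeline.

```python
# Hedged sketch of the classifier stage only: an MLP over flattened lip-landmark
# trajectories, labelled by sentence. Data shapes and topology are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_utterances, n_features, n_sentences = 240, 60, 12   # hypothetical sizes
X = rng.normal(size=(n_utterances, n_features))       # placeholder feature vectors
y = rng.integers(0, n_sentences, size=n_utterances)   # sentence labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # near chance on random data
```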

  2. Development of a Danish speech intelligibility test

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten

    2009-01-01

    Abstract A Danish speech intelligibility test for assessing the speech recognition threshold in noise (SRTN) has been developed. The test consists of 180 sentences distributed in 18 phonetically balanced lists. The sentences are based on an open word-set and represent everyday language. The sente....... The test was verified with 14 normal-hearing listeners; the overall SRTN lies at a signal-to-noise ratio of -3.15 dB with a standard deviation of 1.0 dB. The list-SRTNs deviate less than 0.5 dB from the overall mean....

  3. Differentiating Speech Delay from Disorder: Does It Matter?

    Science.gov (United States)

    Dodd, Barbara

    2011-01-01

    Aim: The cognitive-linguistic abilities of 2 subgroups of children with speech impairment were compared to better understand underlying deficits that might influence effective intervention. Methods: Two groups of 23 children, aged 3;3 to 5;6, performed executive function tasks assessing cognitive flexibility and nonverbal rule abstraction.…

  4. [Modeling developmental aspects of sensorimotor control of speech production].

    Science.gov (United States)

    Kröger, B J; Birkholz, P; Neuschaefer-Rube, C

    2007-05-01

    Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of sensorimotor control of speech production was developed. The model is capable of demonstrating the neural functions of different cortical areas during speech production in detail. (i) Two sensory and two motor maps or neural representations and the appertaining neural mappings or projections establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from lexical to sensory and to motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases--i.e. silent mouthing, quasi stationary vocalic articulation, and realisation of articulatory protogestures--can be defined on the basis of our simulation studies using the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.

  5. Atypical lateralization of ERP response to native and non-native speech in infants at risk for autism spectrum disorder.

    Science.gov (United States)

    Seery, Anne M; Vogel-Farley, Vanessa; Tager-Flusberg, Helen; Nelson, Charles A

    2013-07-01

    Language impairment is common in autism spectrum disorders (ASD) and is often accompanied by atypical neural lateralization. However, it is unclear when in development language impairment or atypical lateralization first emerges. To address these questions, we recorded event-related-potentials (ERPs) to native and non-native speech contrasts longitudinally in infants at risk for ASD (HRA) over the first year of life to determine whether atypical lateralization is present as an endophenotype early in development and whether these infants show delay in a very basic precursor of language acquisition: phonemic perceptual narrowing. ERP response for the HRA group to a non-native speech contrast revealed a trajectory of perceptual narrowing similar to a group of low-risk controls (LRC), suggesting that phonemic perceptual narrowing does not appear to be delayed in these high-risk infants. In contrast there were significant group differences in the development of lateralized ERP response to speech: between 6 and 12 months the LRC group displayed a lateralized response to the speech sounds, while the HRA group failed to display this pattern. We suggest the possibility that atypical lateralization to speech may be an ASD endophenotype over the first year of life. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. [Hearing capacity and speech production in 417 children with facial cleft abnormalities].

    Science.gov (United States)

    Schönweiler, R; Schönweiler, B; Schmelzeisen, R

    1994-11-01

    Children with cleft palates often suffer from chronic conductive hearing losses, delayed language acquisition and speech disorders. This study presents results of speech and language outcomes in relation to hearing function and the types of palatal malformations found. 417 children with cleft palates were examined during follow-up evaluations that extended over several years. Disorders were studied as they affected the ears, nose and throat, audiometry and speech and language pathology. Children with isolated cleft lips were excluded. Among the total group, 8% had normal speech and language development while 92% had speech or language disorders. 80% of these latter children had hearing problems that predominantly consisted of fluctuating conductive hearing losses caused by otitis media with effusion. 5% had sensorineural hearing losses. Fifty-eight children (14%) with rhinolalia aperta were not improved by speech therapy and required velopharyngoplasties, using a cranial-based pharyngeal flap. Language skills did not depend on the type of cleft palate present but on the frequency and amount of hearing loss found. Otomicroscopy and audiometric follow-ups with insertions of ventilation tubes were considered to be most important for language development in those children with repeated middle ear infections. Speech or language therapy was necessary in 49% of the children.

  7. THE MEANING OF PREVENTION WITH SPEECH THERAPY AS AN IMPORTANT FACTOR FOR THE PROPER DEVELOPMENT OF CHILDREN'S SPEECH

    Directory of Open Access Journals (Sweden)

    S. FILIPOVA

    1999-11-01

    Full Text Available The paper presents some findings and results from completed research showing the importance of speech therapy prevention for the development of speech. The research was carried out in Negotino and identifies the most frequent speech deficiencies of children at preschool age.

  8. Assessment of Danish-speaking children’s phonological development and speech disorders

    DEFF Research Database (Denmark)

    Clausen, Marit Carolin; Fox-Boyer, Annette

    2018-01-01

    The identification of speech sounds disorders is an important everyday task for speech and language therapists (SLTs) working with children. Therefore, assessment tools are needed that are able to correctly identify and diagnose a child with a suspected speech disorder and furthermore, that provide...... of the existing speech assessments in Denmark showed that none of the materials fulfilled current recommendations identified in research literature. Therefore, the aim of this paper is to describe the evaluation of a newly constructed instrument for assessing the speech development and disorders of Danish...... with suspected speech disorder (Clausen and Fox-Boyer, in prep). The results indicated that the instrument showed strong inter-examiner reliability for both populations as well as a high content and diagnostic validity. Hence, the study showed that the LogoFoVa can be regarded as a reliable and valid tool...

  9. Near-toll quality digital speech transmission in the mobile satellite service

    Science.gov (United States)

    Townes, S. A.; Divsalar, D.

    1986-01-01

    This paper discusses system considerations for near-toll quality digital speech transmission in a 5 kHz mobile satellite system channel. Tradeoffs are shown for power performance versus delay for a 4800 bps speech compression system in conjunction with a 16 state rate 2/3 trellis coded 8PSK modulation system. The suggested system has an additional 150 ms of delay beyond the propagation delay and requires an E(b)/N(0) of about 7 dB for a Ricean channel assumption with line-of-sight to diffuse component ratio of 10 assuming ideal synchronization. An additional loss of 2 to 3 dB is expected for synchronization in fading environment.

  10. The development of visual speech perception in Mandarin Chinese-speaking children.

    Science.gov (United States)

    Chen, Liang; Lei, Jianghua

    2017-01-01

    The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in development of visual speech perception.

  11. Look Who’s Talking NOW! Parentese Speech, Social Context, and Language Development Across Time

    Directory of Open Access Journals (Sweden)

    Nairán Ramírez-Esparza

    2017-06-01

    Full Text Available In previous studies, we found that the social interactions infants experience in their everyday lives at 11 and 14 months of age affect language ability at 24 months of age. These studies investigated relationships between the speech style (i.e., parentese speech vs. standard speech) and social context [i.e., one-on-one (1:1) vs. group] of language input in infancy and later speech development (i.e., at 24 months of age), controlling for socioeconomic status (SES). Results showed that the amount of exposure to parentese speech-1:1 in infancy was related to productive vocabulary at 24 months. The general goal of the present study was to investigate changes in (1) the pattern of social interactions between caregivers and their children from infancy to childhood and (2) relationships among speech style, social context, and language learning across time. Our study sample consisted of 30 participants from the previously published infant studies, evaluated at 33 months of age. Social interactions were assessed at home using digital first-person perspective recordings of the auditory environment. We found that caregivers use less parentese speech-1:1, and more standard speech-1:1, as their children get older. Furthermore, we found that the effects of parentese speech-1:1 in infancy on later language development at 24 months persist at 33 months of age. Finally, we found that exposure to standard speech-1:1 in childhood was the only social interaction that related to concurrent word production/use. Mediation analyses showed that standard speech-1:1 in childhood fully mediated the effects of parentese speech-1:1 in infancy on language development in childhood, controlling for SES. This study demonstrates that engaging in one-on-one interactions in infancy and later in life has important implications for language development.

  12. Language Development, Delay and Intervention--The Views of Parents from Communities That Speech and Language Therapy Managers in England Consider to Be Under-Served

    Science.gov (United States)

    Marshall, Julie; Harding, Sam; Roulstone, Sue

    2017-01-01

    Background: Evidence-based practice includes research evidence, clinical expertise and stakeholder perspectives. Stakeholder perspectives are important and include parental ethno-theories, which embrace views about many aspects of speech, language and communication, language development, and interventions. The Developmental Niche Framework…

  13. International aspirations for speech-language pathologists' practice with multilingual children with speech sound disorders: development of a position paper.

    Science.gov (United States)

    McLeod, Sharynne; Verdon, Sarah; Bowen, Caroline

    2013-01-01

    A major challenge for the speech-language pathology profession in many cultures is to address the mismatch between the "linguistic homogeneity of the speech-language pathology profession and the linguistic diversity of its clientele" (Caesar & Kohler, 2007, p. 198). This paper outlines the development of the Multilingual Children with Speech Sound Disorders: Position Paper created to guide speech-language pathologists' (SLPs') facilitation of multilingual children's speech. An international expert panel was assembled comprising 57 researchers (SLPs, linguists, phoneticians, and speech scientists) with knowledge about multilingual children's speech, or children with speech sound disorders. Combined, they had worked in 33 countries and used 26 languages in professional practice. Fourteen panel members met for a one-day workshop to identify key points for inclusion in the position paper. Subsequently, 42 additional panel members participated online to contribute to drafts of the position paper. A thematic analysis was undertaken of the major areas of discussion using two data sources: (a) face-to-face workshop transcript (133 pages) and (b) online discussion artifacts (104 pages). Finally, a moderator with international expertise in working with children with speech sound disorders facilitated the incorporation of the panel's recommendations. The following themes were identified: definitions, scope, framework, evidence, challenges, practices, and consideration of a multilingual audience. The resulting position paper contains guidelines for providing services to multilingual children with speech sound disorders (http://www.csu.edu.au/research/multilingual-speech/position-paper). The paper is structured using the International Classification of Functioning, Disability and Health: Children and Youth Version (World Health Organization, 2007) and incorporates recommendations for (a) children and families, (b) SLPs' assessment and intervention, (c) SLPs' professional

  14. Phonological Awareness Intervention for Children with Childhood Apraxia of Speech

    Science.gov (United States)

    Moriarty, Brigid C.; Gillon, Gail T.

    2006-01-01

    Aims: To investigate the effectiveness of an integrated phonological awareness intervention to improve the speech production, phonological awareness and printed word decoding skills for three children with childhood apraxia of speech (CAS) aged 7;3, 6;3 and 6;10. The three children presented with severely delayed phonological awareness skills…

  15. Synchronized brain activity during rehearsal and short-term memory disruption by irrelevant speech is affected by recall mode.

    Science.gov (United States)

    Kopp, Franziska; Schröger, Erich; Lipka, Sigrid

    2006-08-01

    EEG coherence as a measure of synchronization of brain activity was used to investigate effects of irrelevant speech. In a delayed serial recall paradigm 21 healthy participants retained verbal items over a 10-s delay with and without interfering irrelevant speech. Recall after the delay was varied in two modes (spoken vs. written). Behavioral data showed the classic irrelevant speech effect and a superiority of written over spoken recall mode. Coherence, however, was more sensitive to processing characteristics and showed interactions between the irrelevant speech effect and recall mode during the rehearsal delay in theta (4-7.5 Hz), alpha (8-12 Hz), beta (13-20 Hz), and gamma (35-47 Hz) frequency bands. For gamma, a rehearsal-related decrease of the duration of high coherence due to presentation of irrelevant speech was found in a left-lateralized fronto-central and centro-temporal network only in spoken but not in written recall. In theta, coherence at predominantly fronto-parietal electrode combinations was indicative for memory demands and varied with individual working memory capacity assessed by digit span. Alpha coherence revealed similar results and patterns as theta coherence. In beta, a left-hemispheric network showed longer high synchronizations due to irrelevant speech only in written recall mode. EEG results suggest that mode of recall is critical for processing already during the retention period of a delayed serial recall task. Moreover, the finding that different networks are engaged with different recall modes shows that the disrupting effect of irrelevant speech is not a unitary mechanism.
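
    The coherence measure used in this study can be illustrated with standard signal-processing tools. The sketch below computes magnitude-squared coherence between two synthetic EEG channels and averages it within the theta, alpha, beta and gamma bands listed in the abstract; the sampling rate, channel names and data are placeholders.

```python
# Sketch of band-averaged magnitude-squared coherence between two EEG channels.
import numpy as np
from scipy.signal import coherence

fs = 250                                               # assumed sampling rate in Hz
rng = np.random.default_rng(1)
fronto, parietal = rng.normal(size=(2, 10 * fs))       # 10 s of two synthetic channels

f, Cxy = coherence(fronto, parietal, fs=fs, nperseg=fs)  # roughly 1 Hz resolution

bands = {"theta": (4, 7.5), "alpha": (8, 12), "beta": (13, 20), "gamma": (35, 47)}
for name, (lo, hi) in bands.items():
    idx = (f >= lo) & (f <= hi)
    print(f"{name}: mean coherence {Cxy[idx].mean():.3f}")
```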

  16. Mexican immigrant mothers' perceptions of their children's communication disabilities, emergent literacy development, and speech-language therapy program.

    Science.gov (United States)

    Kummerer, Sharon E; Lopez-Reyna, Norma A; Hughes, Marie Tejero

    2007-08-01

    This qualitative study explored mothers' perceptions of their children's communication disabilities, emergent literacy development, and speech-language therapy programs. Participants were 14 Mexican immigrant mothers and their children (age 17-47 months) who were receiving center-based services from an early childhood intervention program, located in a large urban city in the Midwestern United States. Mother interviews composed the primary source of data. A secondary source of data included children's therapy files and log notes. Following the analysis of interviews through the constant comparative method, grounded theory was generated. The majority of mothers perceived their children as exhibiting a communication delay. Causal attributions were diverse and generally medical in nature (i.e., ear infections, seizures) or due to familial factors (i.e., family history and heredity, lack of extended family). Overall, mothers seemed more focused on their children's speech intelligibility and/or expressive language in comparison to emergent literacy abilities. To promote culturally responsive intervention, mothers recommended that professionals speak Spanish, provide information about the therapy process, and use existing techniques with Mexican immigrant families.

  17. Medical and biologic factors of speech and language development in children (part 2

    Directory of Open Access Journals (Sweden)

    Chernov D.N.

    2015-03-01

    Full Text Available Recent data show that medico-biological aspects of the study of speech and language development in children should be expanded to include an analysis of various socio-cultural factors, as the problem requires an interdisciplinary approach. The review stresses the necessity of a methodological approach to the study of bio-socio-cultural conditions for the emergence of speech and language abilities in ontogenesis. The psycho-pedagogical aspect involves: informing parents about the medical and biological aspects of speech and language development in childhood; the active involvement of parents in the remedial and preventive activities carried out by specialists; and activities to improve the quality and quantity of child-parent interaction depending on the severity and nature of deviations in a child's speech and language development.

  18. Oral motor functions, speech and communication before a definitive diagnosis of amyotrophic lateral sclerosis.

    Science.gov (United States)

    Makkonen, Tanja; Korpijaakko-Huuhka, Anna-Maija; Ruottinen, Hanna; Puhto, Riitta; Hollo, Kirsi; Ylinen, Aarne; Palmio, Johanna

    2016-01-01

    The aim of this study was to explore the cranial nerve symptoms, speech disorders and communicative effectiveness of Finnish patients with diagnosed or possible amyotrophic lateral sclerosis (ALS) at their first assessment by a speech-language pathologist. The group studied consisted of 30 participants who had clinical signs of bulbar deterioration at the beginning of the study. They underwent a thorough clinical speech and communication examination. The cranial nerve symptoms and ability to communicate were compared in 14 participants with probable or definitive ALS and in 16 participants with suspected or possible ALS. The initial type of ALS was also assessed. More deterioration in soft palate function was found in participants with possible ALS than with diagnosed ALS. Likewise, a slower speech rate combined with more severe dysarthria was observed in possible ALS. In both groups, there was some deterioration in communicative effectiveness. In the possible ALS group the diagnostic delay was longer and speech therapy intervention actualized later. The participants with ALS showed multidimensional decline in communication at their first visit to the speech-language pathologist, but impairments and activity limitations were more severe in suspected or possible ALS. The majority of persons with bulbar-onset ALS in this study were in the latter diagnostic group. This suggests that they are more susceptible to delayed diagnosis and delayed speech therapy assessment. It is important to start speech therapy intervention during the diagnostic processes particularly if the person already shows bulbar symptoms. Copyright © 2016. Published by Elsevier Inc.

  19. A t(5;16) translocation is the likely driver of a syndrome with ambiguous genitalia, facial dysmorphism, intellectual disability, and speech delay.

    Science.gov (United States)

    Ozantürk, Ayşegül; Davis, Erica E; Sabo, Aniko; Weiss, Marjan M; Muzny, Donna; Dugan-Perez, Shannon; Sistermans, Erik A; Gibbs, Richard A; Özgül, Köksal R; Yalnızoglu, Dilek; Serdaroglu, Esra; Dursun, Ali; Katsanis, Nicholas

    2016-03-01

    Genetic studies grounded on monogenic paradigms have accelerated both gene discovery and molecular diagnosis. At the same time, complex genomic rearrangements are also appreciated as potent drivers of disease pathology. Here, we report two male siblings with a dysmorphic face, ambiguous genitalia, intellectual disability, and speech delay. Through quad-based whole-exome sequencing and concomitant molecular cytogenetic testing, we identified two copy-number variants (CNVs) in both affected individuals likely arising from a balanced translocation: a 13.5-Mb duplication on Chromosome 16 (16q23.1 → 16qter) and a 7.7-Mb deletion on Chromosome 5 (5p15.31 → 5pter), as well as a hemizygous missense variant in CXorf36 (also known as DIA1R). The 5p terminal deletion has been associated previously with speech delay, whereas craniofacial dysmorphia and genital/urinary anomalies have been reported in patients with a terminal duplication of 16q. However, dosage changes in either genomic region alone could not account for the overall clinical presentation in our family; functional testing of CXorf36 in zebrafish did not induce defects in neurogenesis or the craniofacial skeleton. Notably, literature and database analysis revealed a similar dosage disruption in two siblings with extensive phenotypic overlap with our patients. Taken together, our data suggest that dosage perturbation of genes within the two chromosomal regions likely drives the syndromic manifestations of our patients and highlight how multiple genetic lesions can contribute to complex clinical pathologies.

  20. On the Evaluation of the Conversational Speech Quality in Telecommunications

    Directory of Open Access Journals (Sweden)

    Vincent Barriac

    2008-04-01

    Full Text Available We propose an objective method to assess speech quality in the conversational context by taking into account the talking and listening speech qualities and the impact of delay. This approach is applied to the results of four subjective tests on the effects of echo, delay, packet loss, and noise. The dataset is divided into training and validation sets. For the training set, a multiple linear regression is applied to determine a relationship between conversational, talking, and listening speech qualities and the delay value. The multiple linear regression leads to an accurate estimation of the conversational scores with high correlation and low error between subjective and estimated scores, both on the training and validation sets. In addition, a validation is performed on the data of a subjective test found in the literature, which confirms the reliability of the regression. The relationship is then applied at an objective level by replacing talking and listening subjective scores with talking and listening objective scores provided by existing objective models, fed by speech signals recorded during the subjective tests. The conversational model achieves high performance as revealed by comparison with the test results and with the existing standard methodology “E-model,” presented in ITU-T (International Telecommunication Union) Recommendation G.107.
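
    The core of the proposed method, a multiple linear regression from talking quality, listening quality and delay to conversational quality, is easy to sketch. The data below are synthetic placeholders generated from an assumed relation, not the subjective test scores used in the paper.

```python
# Sketch of fitting a conversational-quality MOS from talking MOS, listening MOS
# and one-way delay with a multiple linear regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 60
talking = rng.uniform(1, 5, n)            # talking-quality MOS
listening = rng.uniform(1, 5, n)          # listening-quality MOS
delay = rng.uniform(0, 800, n)            # one-way delay in ms
# Assumed relation used only to generate demonstration targets.
conversational = 0.4 * talking + 0.5 * listening - 0.0015 * delay + rng.normal(0, 0.2, n)

X = np.column_stack([talking, listening, delay])
model = LinearRegression().fit(X, conversational)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 on training data:", model.score(X, conversational))
```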

  1. Monkey Lipsmacking Develops Like the Human Speech Rhythm

    Science.gov (United States)

    Morrill, Ryan J.; Paukner, Annika; Ferrari, Pier F.; Ghazanfar, Asif A.

    2012-01-01

    Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved "de novo" in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial…

  2. Comparing the Effects of Speech-Generating Device Display Organization on Symbol Comprehension and Use by Three Children With Developmental Delays.

    Science.gov (United States)

    Barton-Hulsey, Andrea; Wegner, Jane; Brady, Nancy C; Bunce, Betty H; Sevcik, Rose A

    2017-05-17

    Three children ages 3;6 to 5;3 with developmental and language delays were provided experience with a traditional grid-based display and a contextually organized visual scene display on a speech-generating device to illustrate considerations for practice and future research in augmentative and alternative communication assessment and intervention. Twelve symbols were taught in a grid display and visual scene display using aided input during dramatic play routines. Teaching sessions were 30 minutes a day, 5 days a week for 3 weeks. Symbol comprehension and use was assessed pre and post 3 weeks of experience. Comprehension of symbol vocabulary on both displays increased after 3 weeks of experience. Participants 1 and 2 used both displays largely for initiation. Participant 3 had limited expressive use of either display. The methods used in this study demonstrate one way to inform individual differences in learning and preference for speech-generating device displays when making clinical decisions regarding augmentative and alternative communication supports for a child and their family. Future research should systematically examine the role of extant comprehension, symbol experience, functional communication needs, and the role of vocabulary type in the learning and use of grid displays versus visual scene displays.

  3. Time delays, population, and economic development

    Science.gov (United States)

    Gori, Luca; Guerrini, Luca; Sodini, Mauro

    2018-05-01

    This research develops an augmented Solow model with population dynamics and time delays. The model produces either a single stationary state or multiple stationary states (able to characterise different development regimes). The existence of time delays may cause persistent fluctuations in both economic and demographic variables. In addition, the work identifies in a simple way the reasons why economics affects demographics and vice versa.
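
    A delayed capital-accumulation equation of the general kind studied here can be integrated numerically with a simple delay buffer. The sketch below applies forward Euler to a textbook Solow form with one discrete delay; the functional form, parameter values and constant history are illustrative assumptions and omit the paper's population dynamics.

```python
# Illustrative forward-Euler integration of a Solow-type equation with one
# discrete time delay: dk/dt = s * f(k(t - tau)) - (n + delta) * k(t), f(k) = k^alpha.
import numpy as np

s, alpha, n, delta = 0.3, 0.33, 0.01, 0.05   # saving rate, capital share, pop. growth, depreciation
tau, dt, T = 5.0, 0.01, 200.0                # delay, step size, time horizon
steps, lag = int(T / dt), int(tau / dt)

k = np.empty(steps)
k[:lag + 1] = 1.0                            # constant history on [-tau, 0]
for t in range(lag, steps - 1):
    k[t + 1] = k[t] + dt * (s * k[t - lag] ** alpha - (n + delta) * k[t])

print("capital per worker at the end of the horizon:", k[-1])
```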

  4. Comparative Study of Features of Social Intelligence and Speech Behavior of Children of Primary School Age with Impaired Mental Function

    Directory of Open Access Journals (Sweden)

    Shcherban D.

    2018-04-01

    Full Text Available The article discusses the concept of social intelligence and its characteristics in children of primary school age with impaired mental functions. The concept and its main features, including speech, are discussed; delays of mental development and the importance of delayed development for social intelligence and speech behavior are also considered. The concept of speech behavior is also analyzed: the author defines the phenomenon and describes its specific features, which distinguish its structure and comprise six components: verbal, emotional, motivational, ethical (moral), prognostic, and semantic (cognitive). Particular attention is paid to the position of social intelligence in the structure of the speech behavior of children of primary school age with impaired mental functions. Indicators of social intelligence were analyzed from the point of view of the speech behavior of children with different rates of mental development and compared with its components at a qualitative level. The study used both the author's own and well-known techniques.

  5. Hemispheric speech lateralisation in the developing brain is related to motor praxis ability

    Directory of Open Access Journals (Sweden)

    Jessica C. Hodgson

    2016-12-01

    Full Text Available Commonly displayed functional asymmetries such as hand dominance and hemispheric speech lateralisation are well researched in adults. However there is debate about when such functions become lateralised in the typically developing brain. This study examined whether patterns of speech laterality and hand dominance were related and whether they varied with age in typically developing children. 148 children aged 3–10 years performed an electronic pegboard task to determine hand dominance; a subset of 38 of these children also underwent functional Transcranial Doppler (fTCD) imaging to derive a lateralisation index (LI) for hemispheric activation during speech production using an animation description paradigm. There was no main effect of age in the speech laterality scores, however, younger children showed a greater difference in performance between their hands on the motor task. Furthermore, this between-hand performance difference significantly interacted with direction of speech laterality, with a smaller between-hand difference relating to increased left hemisphere activation. This data shows that both handedness and speech lateralisation appear relatively determined by age 3, but that atypical cerebral lateralisation is linked to greater performance differences in hand skill, irrespective of age. Results are discussed in terms of the common neural systems underpinning handedness and speech lateralisation.

  6. "It's the Way You Talk to Them." The Child's Environment: Early Years Practitioners' Perceptions of Its Influence on Speech and Language Development, Its Assessment and Environment Targeted Interventions

    Science.gov (United States)

    Marshall, Julie; Lewis, Elizabeth

    2014-01-01

    Speech and language delay occurs in approximately 6% of the child population, and interventions to support this group of children focus on the child and/or the communicative environment. Evidence about the effectiveness of interventions that focus on the environment as well as the (reported) practices of speech and language therapists (SLTs) and…

  7. Speech and Language Development after Infant Tracheostomy.

    Science.gov (United States)

    Hill, Betsy P.; Singer, Lynn T.

    1990-01-01

    When assessed for speech/language development, 31 children (age 1-12) fitted with endotracheal tubes for more than 3 months beginning by age 13 months showed overall language functioning within normal limits and commensurate with cognitive ability. However, a pattern of expressive language disability was noted in the oldest group. (Author/JDD)

  8. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling

  9. Permanent molars: Delayed development and eruption

    Directory of Open Access Journals (Sweden)

    Arathi R

    2006-05-01

    Full Text Available Delayed development and eruption of all the permanent molars is a rare phenomenon, which can cause disturbance in the developing occlusion. The eruption of permanent first and second molars is very important for the coordination of facial growth and for providing sufficient occlusal support for undisturbed mastication. In the case described, the first permanent molars were delayed in their development and were seen erupting at the age of nine and a half years. Severe disparity between the left and the right side of the dentition with respect to the rate of development of the molars was also present.

  10. Phonological Awareness and Early Reading Development in Childhood Apraxia of Speech (CAS)

    Science.gov (United States)

    McNeill, B. C.; Gillon, G. T.; Dodd, B.

    2009-01-01

    Background: Childhood apraxia of speech (CAS) is associated with phonological awareness, reading, and spelling deficits. Comparing literacy skills in CAS with other developmental speech disorders is critical for understanding the complexity of the disorder. Aims: This study compared the phonological awareness and reading development of children…

  11. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  12. Time Delay Estimation Algorithms for Echo Cancellation

    Directory of Open Access Journals (Sweden)

    Kirill Sakhnov

    2011-01-01

    Full Text Available The following case study describes how to eliminate echo in a VoIP network using delay estimation algorithms. It is known that echo with long transmission delays becomes more noticeable to users. Thus, time delay estimation, as a part of echo cancellation, is an important topic in the transmission of voice signals over packet-switching telecommunication systems. An echo delay problem associated with IP-based transport networks is discussed in the following text. The paper introduces a comparative study of time delay estimation algorithms used for estimating the true time delay between two speech signals. Experimental results of MATLAB simulations that describe the performance of several methods based on cross-correlation, normalized cross-correlation and generalized cross-correlation are also presented in the paper.
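
    Of the method families compared in the paper, generalized cross-correlation with the phase transform (GCC-PHAT) is the most compact to sketch. The function below estimates the lag between a reference signal and its delayed copy; the synthetic test signal and sampling rate are illustrative, not taken from the paper.

```python
# Sketch of generalized cross-correlation with phase transform (GCC-PHAT) for
# estimating the delay between a signal and its echoed/delayed copy.
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12                    # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    cc = np.concatenate((cc[-(len(ref) - 1):], cc[:len(sig)]))  # reorder lags
    lag = np.argmax(np.abs(cc)) - (len(ref) - 1)
    return lag / fs                                   # delay in seconds (positive: sig lags ref)

# Quick self-check with a synthetic 25 ms delay at 8 kHz.
fs = 8000
rng = np.random.default_rng(3)
ref = rng.normal(size=fs)                             # 1 s of speech-like noise
sig = np.concatenate((np.zeros(200), ref))[:fs]       # delayed copy (200 samples = 25 ms)
print(gcc_phat_delay(sig, ref, fs))                   # prints approximately 0.025
```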

  13. Development and preliminary evaluation of a pediatric Spanish-English speech perception task.

    Science.gov (United States)

    Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J

    2014-06-01

    The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
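
    The adaptive threshold estimation mentioned above can be illustrated with a generic 1-down/1-up track; the step size, stopping rule and simulated listener below are assumptions for demonstration and not the authors' exact procedure.

```python
# Generic sketch of a 1-down/1-up adaptive track for a speech reception threshold:
# the SNR drops after a correct response and rises after an error.
import numpy as np

rng = np.random.default_rng(4)

def simulated_listener(snr_db, true_srt=-6.0, slope=0.5):
    """Probability of a correct response follows a logistic psychometric function."""
    p = 1.0 / (1.0 + np.exp(-slope * (snr_db - true_srt)))
    return rng.random() < p

snr, step = 0.0, 2.0            # starting SNR (dB) and step size (dB)
reversals, last_correct = [], None
while len(reversals) < 8:
    correct = simulated_listener(snr)
    if last_correct is not None and correct != last_correct:
        reversals.append(snr)   # record the SNR at each response reversal
    snr += -step if correct else step
    last_correct = correct

print("estimated SRT (dB SNR):", np.mean(reversals[2:]))  # discard early reversals
```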

  14. The development of multisensory speech perception continues into the late childhood years.

    Science.gov (United States)

    Ross, Lars A; Molholm, Sophie; Blanco, Daniella; Gomez-Ramirez, Manuel; Saint-Amour, Dave; Foxe, John J

    2011-06-01

    Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd. No claim to original US government works.

  15. The neural basis of speech sound discrimination from infancy to adulthood

    OpenAIRE

    Partanen, Eino

    2013-01-01

    Rapid processing of speech is facilitated by neural representations of native language phonemes. However, some disorders and developmental conditions, such as developmental dyslexia, can hamper the development of these neural memory traces, leading to language delays and poor academic achievement. While the early identification of such deficits is paramount so that interventions can be started as early as possible, there is currently no systematically used ecologically valid paradigm for the ...

  16. Assessing recall in mothers' retrospective reports: concerns over children's speech and language development.

    Science.gov (United States)

    Russell, Ginny; Miller, Laura L; Ford, Tamsin; Golding, Jean

    2014-01-01

    Retrospective recall about children's symptoms is used to establish early developmental patterns in clinical practice and is also utilised in child psychopathology research. Some studies have indicated that the accuracy of retrospective recall is influenced by life events. Our hypothesis was that an intervention (speech and language therapy) would adversely affect the accuracy of parental recall of early concerns about their child's speech and language development. Mothers (n = 5,390) reported on their child's speech development (child male to female ratio = 50:50) when their children were aged 18 or 30 months, and also reported on these early concerns retrospectively, 10 years later, when their children were 13 years old. Overall reliability of retrospective recall was good, with 86% of respondents accurately recalling their earlier concerns. As hypothesised, however, the speech and language intervention was strongly associated with inaccurate retrospective recall about concerns in the early years (relative risk ratio = 19.03; 95% CI: 14.78-24.48). Attendance at speech therapy was associated with increased recall of concerns that were not reported at the time. The study suggests caution is required when interpreting retrospective reports of abnormal child development, as recall may be influenced by intervening events.
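
    The reported relative risk ratio comes from the authors' regression model; as a simpler, hedged illustration of how a relative risk and its 95% confidence interval can be computed for a 2x2 table (hypothetical counts, not the study's data), the usual log-scale normal approximation looks like this:

        import math

        def relative_risk_ci(a, b, c, d, z=1.96):
            """Relative risk and 95% CI for a 2x2 table:
                 exposed:   a events out of (a + b)
                 unexposed: c events out of (c + d)
            Uses the standard log-scale normal approximation."""
            rr = (a / (a + b)) / (c / (c + d))
            se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
            lo = math.exp(math.log(rr) - z * se_log)
            hi = math.exp(math.log(rr) + z * se_log)
            return rr, (lo, hi)

        # Hypothetical counts, for illustration only (not the study's data).
        rr, ci = relative_risk_ci(a=120, b=280, c=90, d=4900)
        print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")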

  17. THE ROLE OF THE SPEECH THERAPIST AND HIS INFLUENCE IN SPEECH DEVELOPMENT OF CHILDREN WITH CENTRAL DEFECTS AND INSTRUCTIVE AND ADVISORY WORK OF THE PARENT

    Directory of Open Access Journals (Sweden)

    Violeta TORTEVSKA

    1997-06-01

    Full Text Available The modern way of living, in which communication becomes a basic upbringing factor and a regulator of relationships, isolates children with severe individual, family, educational and social problems. Speech and language disorders are the most remarkable symptoms pointing to a complex of deficits in communicative activities, reduced cognitive functions and cerebral dysfunctions. The modern conception in the field of rehabilitation calls for the full engagement of the child's closest environment, and especially of the parents. The study covers the work of the speech therapist with children diagnosed with delayed speech development (alalia and developmental dysphasia) at the Institute for Rehabilitation of Hearing, Speech and Voice in Skopje, and the therapist's role in instructing parents in the right approach and in the systematic conduct of the rehabilitation procedures, especially the stimulation of motor and speech development. The speech therapist's task is to find ways and means by which children with central damage can build their speech and language system, and to help the parents, through instructive and advisory work, to understand the phases and stages of that system. The conclusion is that early treatment procedures for children with central damage are naturally shaped by differences in their early compensation. Suggestions about what should be substituted, how much, and how it should be done belong within the framework of early therapeutic access.

  18. Early Speech Motor Development: Cognitive and Linguistic Considerations

    Science.gov (United States)

    Nip, Ignatius S. B.; Green, Jordan R.; Marx, David B.

    2009-01-01

    This longitudinal investigation examines developmental changes in orofacial movements occurring during the early stages of communication development. The goals were to identify developmental trends in early speech motor performance and to determine how these trends differ across orofacial behaviors thought to vary in cognitive and linguistic…

  19. Fifty years of progress in speech coding standards

    Science.gov (United States)

    Cox, Richard

    2004-10-01

    Over the past 50 years, speech coding has taken root worldwide. Early applications were for the military and transmission for telephone networks. The military gave equal priority to intelligibility and low bit rate. The telephone network gave priority to high quality and low delay. These illustrate three of the four areas in which requirements must be set for any speech coder application: bit rate, quality, delay, and complexity. While the military could afford relatively expensive terminal equipment for secure communications, the telephone network needed low cost for massive deployment in switches and transmission equipment worldwide. Today speech coders are at the heart of the wireless phones and telephone answering systems we use every day. In addition to the technology and technical invention that has occurred, standards make it possible for all these different systems to interoperate. The primary areas of standardization are the public switched telephone network, wireless telephony, and secure telephony for government and military applications. With the advent of IP telephony there are additional standardization efforts and challenges. In this talk the progress in all areas is reviewed as well as a reflection on Jim Flanagan's impact on this field during the past half century.

  20. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  1. Assessing the Effectiveness of Parent-Child Interaction Therapy with Language Delayed Children: A Clinical Investigation

    Science.gov (United States)

    Falkus, Gila; Tilley, Ciara; Thomas, Catherine; Hockey, Hannah; Kennedy, Anna; Arnold, Tina; Thorburn, Blair; Jones, Katie; Patel, Bhavika; Pimenta, Claire; Shah, Rena; Tweedie, Fiona; O'Brien, Felicity; Leahy, Ruth; Pring, Tim

    2016-01-01

    Parent-child interaction therapy (PCIT) is widely used by speech and language therapists to improve the interactions between children with delayed language development and their parents/carers. Despite favourable reports of the therapy from clinicians, little evidence of its effectiveness is available. We investigated the effects of PCIT as…

  2. Development of delayed radiation necrosis

    International Nuclear Information System (INIS)

    Ohara, Shigeki; Takagi, Terumasa; Shibata, Taichiro; Nagai, Hajime.

    1983-01-01

    The authors discussed the developing process of delayed radiation necrosis of the brain from the case of a 42-year-old female who developed intracranial hypertension and left hemiparesis 5 and a half years after radiotherapy for pituitary adenoma. The initial sign of radiation necrosis was from a CT scan taken 3 and a half years after radiotherapy showing an irregular low-density lesion in the right temporal lobe. A CT scan 2 years later demonstrated displacement of the midline structures to the left and a larger low-density lesion, partially of high density, in the right MCA territory that was enhanced with intravenous contrast medium. Recovery after a right temporal lobectomy and administration of steroid hormone was uneventful. Eight months later there were no signs of raised intracranial pressure or neurological deficits. Tissues obtained from the right temporal lobe at lobectomy revealed the characteristic changes of delayed radiation necrosis: a mixture of fresh, recent, and old vascular lesions in the same specimen. From these findings, it was speculated that delayed radiation necrosis might initially occur within several years after radiotherapy and might gradually take a progressive and extended course, even in cases whose clinical symptoms develop much later. (author)

  3. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
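
    The cue-integration idea can be sketched with scikit-learn's GaussianMixture fit to simulated two-cue data; the cue names, distributions, and two-category structure below are illustrative assumptions, not the paper's simulations.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)

        # Simulated tokens of two phonological categories, each described by an
        # auditory cue (VOT-like) and a visual cue (lip-aperture-like).
        # The cue distributions are illustrative, not empirical values.
        cat_a = rng.normal(loc=[10.0, 0.2], scale=[5.0, 0.05], size=(500, 2))
        cat_b = rng.normal(loc=[50.0, 0.6], scale=[8.0, 0.05], size=(500, 2))
        tokens = np.vstack([cat_a, cat_b])

        # Unsupervised learning of the two categories from the joint cue distribution.
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        gmm.fit(tokens)

        # Posterior category probabilities for a new audiovisual token, including a
        # "mismatched" case where the auditory cue suggests one category and the
        # visual cue the other.
        matched = np.array([[12.0, 0.22]])
        mismatched = np.array([[12.0, 0.62]])
        print("matched token posteriors:   ", gmm.predict_proba(matched).round(3))
        print("mismatched token posteriors:", gmm.predict_proba(mismatched).round(3))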

  4. Non-intrusive speech quality assessment in simplified e-model

    OpenAIRE

    Vozňák, Miroslav

    2012-01-01

    The E-model brings a modern approach to the computation of estimated quality, allowing for easy implementation. One of its advantages is that it can be applied in real time. The method is based on a mathematical computation model evaluating transmission path impairments influencing speech signal, especially delays and packet losses. These parameters, common in an IP network, can affect speech quality dramatically. The paper deals with a proposal for a simplified E-model and its pr...
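
    As a hedged sketch of what a delay- and loss-driven E-model computation looks like, the snippet below uses commonly cited ITU-T G.107/G.113 default constants and ignores the other impairment terms; it is an illustration, not the paper's specific simplification.

        def r_factor(delay_ms, packet_loss_pct, ie=0.0, bpl=4.3, r0=93.2):
            """Very simplified E-model: start from the default basic rating R0 and
            subtract delay (Id) and effective equipment (Ie-eff) impairments.
            Constants follow commonly cited ITU-T G.107/G.113 defaults; other
            impairment terms are ignored for brevity."""
            d = delay_ms
            id_ = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
            ie_eff = ie + (95.0 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
            return r0 - id_ - ie_eff

        def mos_from_r(r):
            """Map the R-factor to an estimated mean opinion score (MOS)."""
            if r <= 0:
                return 1.0
            if r >= 100:
                return 4.5
            return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

        for delay, loss in [(50, 0.0), (150, 1.0), (300, 5.0)]:
            r = r_factor(delay, loss)
            print(f"delay={delay} ms, loss={loss}%: R={r:.1f}, MOS={mos_from_r(r):.2f}")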

  5. Cingulo-opercular activity affects incidental memory encoding for speech in noise.

    Science.gov (United States)

    Vaden, Kenneth I; Teubner-Rhodes, Susan; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2017-08-15

    Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions. Copyright © 2017

  6. Hypothalamic digoxin and hemispheric chemical dominance: relation to speech and language dysfunction.

    Science.gov (United States)

    Kurup, Ravi Kumar; Kurup, Parameswara Achutha

    2003-06-01

    The isoprenoid pathway produces three key metabolites: endogenous digoxin, dolichol, and ubiquinone. Since endogenous digoxin can regulate neurotransmitter transport and dolichols can modulate glycoconjugate synthesis important in synaptic connectivity, the pathway was assessed in patients with dyslexia, delayed recovery from global aphasia consequent to a dominant hemispheric thrombotic infarct, and developmental delay of the speech milestone. The pathway was also studied in right hemispheric, left hemispheric, and bihemispheric dominance to find out the role of hemispheric dominance in the pathogenesis of speech disorders. The plasma/serum activity of HMG CoA reductase, levels of magnesium, digoxin, dolichol, and ubiquinone, and tryptophan/tyrosine catabolic patterns, as well as RBC Na+-K+ ATPase activity, were measured in the above-mentioned groups. Glycoconjugate metabolism and membrane composition were also studied. The study showed that in dyslexia, developmental delay of the speech milestone, and delayed recovery from global aphasia there was an upregulated isoprenoid pathway with increased digoxin and dolichol levels. The membrane Na+-K+ ATPase activity and the serum magnesium and ubiquinone levels were low. The tryptophan catabolites were increased and the tyrosine catabolites, including dopamine, were decreased in the serum, contributing to speech dysfunction. There was an increase in the carbohydrate residues of glycoproteins, glycosaminoglycans, and glycolipids, as well as increased activity of GAG-degrading enzymes and glycohydrolases in the serum. The cholesterol:phospholipid ratio of the RBC membrane increased and membrane glycoconjugates decreased. All of these could contribute to altered synaptic connectivity in these disorders. The patterns correlated with those obtained in right hemispheric chemical dominance. Right hemispheric chemical dominance may play a role in the genesis of these disorders. Hemispheric chemical dominance has no correlation with handedness.

  7. Telephone based speech interfaces in the developing world, from the perspective of human-human communication

    CSIR Research Space (South Africa)

    Naidoo, S

    2005-07-01

    Full Text Available Until recently, before computer systems were able to synthesize or recognize speech, speech was a capability unique to humans. The human brain has developed to differentiate between human speech and other audio occurrences. Therefore, the slowly-evolving ... human brain reacts in certain ways to voice stimuli and has certain expectations regarding communication by voice. Nass affirms that the human brain operates using the same mechanisms when interacting with speech interfaces as when conversing...

  8. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities, Report and Order.

  9. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Language therapy has shifted from a medical focus to a preventive focus. However, difficulties are evident in carrying out this latter task, because more space is devoted to the correction of language disorders. Because speech disorders are the most frequently occurring dysfunction, the preventive work carried out to avoid their appearance acquires special importance. Speech education from an early age makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  10. Atypical Speech and Language Development: A Consensus Study on Clinical Signs in the Netherlands

    Science.gov (United States)

    Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2017-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…

  11. A Danish open-set speech corpus for competing-speech studies

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten; Neher, Tobias

    2014-01-01

    Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed...

  12. Individual differences in delay discounting under acute stress: the role of trait perceived stress

    Directory of Open Access Journals (Sweden)

    Karolina M. Lempert

    2012-07-01

    Full Text Available Delay discounting refers to the reduction of the value of a future reward as the delay to that reward increases. The rate at which individuals discount future rewards varies as a function of both individual and contextual differences, and high delay discounting rates have been linked with problematic behaviors, including drug abuse and gambling. The current study investigated the effects of acute anticipatory stress on delay discounting, while considering two important factors: individual perceptions of stress and whether the stressful situation is future-focused or present-focused. Half of the participants experienced acute stress by anticipating giving a videotaped speech. This stress was either future-oriented (a speech about a future job) or present-oriented (a speech about physical appearance). They then performed a delay discounting task, in which they chose between smaller, immediate rewards and larger, delayed rewards. Their scores on the Perceived Stress Scale were also collected. The way in which one appraises a stressful situation interacts with acute stress to influence choices; under stressful conditions, the delay discounting rate was highest in individuals with low perceived stress and lowest for individuals with high perceived stress. This result might be related to individual variation in reward responsiveness under stress. Furthermore, the time orientation of the task interacted with its stressfulness to affect the individual's propensity to choose immediate rewards. These findings add to our understanding of the intermediary factors between stress and decision making.
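
    Delay discounting is commonly quantified with Mazur's hyperbolic model, V = A / (1 + kD), where k is the individual discount rate; the abstract does not state which model the authors fit, so the sketch below is only a generic illustration with hypothetical choice values.

        def hyperbolic_value(amount, delay, k):
            """Mazur's hyperbolic discounting: subjective value of a delayed reward."""
            return amount / (1.0 + k * delay)

        def choose(immediate, delayed_amount, delay, k):
            """Return the preferred option in a smaller-sooner vs larger-later choice."""
            return "delayed" if hyperbolic_value(delayed_amount, delay, k) > immediate else "immediate"

        # Illustrative choice set and discount rates (values are hypothetical):
        # a steep discounter (high k) takes the smaller immediate reward more often.
        for k in (0.005, 0.05):
            picks = [choose(20, 40, d, k) for d in (7, 30, 90, 180)]
            print(f"k={k}: {picks}")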

  13. Speech, "Inner Speech," and the Development of Short-Term Memory: Effects of Picture-Labeling on Recall.

    Science.gov (United States)

    Hitch, Graham J.; And Others

    1991-01-01

    Reports on experiments to determine effects of overt speech on children's use of inner speech in short-term memory. Word length and phonemic similarity had greater effects on older children and when pictures were labeled at presentation. Suggests that speaking or listening to speech activates an internal articulatory loop. (Author/GH)

  14. Does Bilingualism Delay the Development of Dementia?

    OpenAIRE

    Amy L Atkinson

    2016-01-01

    It has been suggested that bilingualism (where individuals speak two languages) may delay the development of dementia. However, much of the research is inconclusive. Some researchers have reported that bilingualism delays the onset and diagnosis of dementia, whilst other studies have found weak or even detrimental effects. This paper reviews a series of nine empirical studies, published up until March 2016, which investigated whether bilingualism significantly delays the onset of dementia. Th...

  15. Who Receives Speech/Language Services by 5 Years of Age in the United States?

    Science.gov (United States)

    Hammer, Carol Scheffner; Farkas, George; Hillemeier, Marianne M.; Maczuga, Steve; Cook, Michael; Morano, Stephanie

    2016-01-01

    Purpose We sought to identify factors predictive of or associated with receipt of speech/language services during early childhood. We did so by analyzing data from the Early Childhood Longitudinal Study–Birth Cohort (ECLS-B; Andreassen & Fletcher, 2005), a nationally representative data set maintained by the U.S. Department of Education. We addressed two research questions of particular importance to speech-language pathology practice and policy. First, do early vocabulary delays increase children's likelihood of receiving speech/language services? Second, are minority children systematically less likely to receive these services than otherwise similar White children? Method Multivariate logistic regression analyses were performed for a population-based sample of 9,600 children and families participating in the ECLS-B. Results Expressive vocabulary delays by 24 months of age were strongly associated with and predictive of children's receipt of speech/language services at 24, 48, and 60 months of age (adjusted odds ratio range = 4.32–16.60). Black children were less likely to receive speech/language services than otherwise similar White children at 24, 48, and 60 months of age (adjusted odds ratio range = 0.42–0.55). Lower socioeconomic status children and those whose parental primary language was other than English were also less likely to receive services. Being born with very low birth weight also significantly increased children's receipt of services at 24, 48, and 60 months of age. Conclusion Expressive vocabulary delays at 24 months of age increase children’s risk for later speech/language services. Increased use of culturally and linguistically sensitive practices may help racial/ethnic minority children access needed services. PMID:26579989
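
    As a hedged illustration of how adjusted odds ratios of this kind are obtained (fit on synthetic data with hypothetical variable names, not the ECLS-B data or the authors' full model), a logistic regression can be estimated and its coefficients exponentiated:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 2000

        # Synthetic illustration only: a binary vocabulary-delay indicator and one
        # covariate, with service receipt generated from a known logistic model.
        df = pd.DataFrame({
            "vocab_delay_24m": rng.integers(0, 2, n),
            "low_birth_weight": rng.integers(0, 2, n),
        })
        logit_p = -2.0 + 1.5 * df["vocab_delay_24m"] + 0.8 * df["low_birth_weight"]
        df["received_services"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

        X = sm.add_constant(df[["vocab_delay_24m", "low_birth_weight"]])
        fit = sm.Logit(df["received_services"], X).fit(disp=False)

        # Adjusted odds ratios and 95% CIs are the exponentiated coefficients.
        ci = np.exp(fit.conf_int())
        ci.columns = ["2.5%", "97.5%"]
        print(pd.concat([np.exp(fit.params).rename("OR"), ci], axis=1))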

  16. Evaluation of speech recognizers for use in advanced combat helicopter crew station research and development

    Science.gov (United States)

    Simpson, Carol A.

    1990-01-01

    The U.S. Army Crew Station Research and Development Facility uses vintage 1984 speech recognizers. An evaluation was performed of newer off-the-shelf speech recognition devices to determine whether newer technology performance and capabilities are substantially better than those of the Army's current speech recognizers. The Phonetic Discrimination (PD-100) Test was used to compare recognizer performance in two ambient noise conditions: quiet office and helicopter noise. Test tokens were spoken by males and females and in isolated-word and connected-word modes. Better overall recognition accuracy was obtained from the newer recognizers. Recognizer capabilities needed to support the development of human factors design requirements for speech command systems in advanced combat helicopters are listed.

  17. Maternal and paternal pragmatic speech directed to young children with Down syndrome and typical development.

    Science.gov (United States)

    de Falco, Simona; Venuti, Paola; Esposito, Gianluca; Bornstein, Marc H

    2011-02-01

    The aim of this study was to compare functional features of maternal and paternal speech directed to children with Down syndrome and developmental age-matched typically developing children. Altogether 88 parents (44 mothers and 44 fathers) and their 44 young children (22 children with Down syndrome and 22 typically developing children) participated. Parents' speech directed to children was obtained through observation of naturalistic parent-child dyadic interactions. Verbatim transcripts of maternal and paternal language were categorized in terms of the primary function of each speech unit. Parents (both mothers and fathers) of children with Down syndrome used more affect-salient speech compared to parents of typically developing children. Although parents used the same amounts of information-salient speech, parents of children with Down syndrome used more direct statements and asked fewer questions than did parents of typically developing children. Concerning parent gender, in both groups mothers used more language than fathers and specifically more descriptions. These findings held controlling for child age and MLU and family SES. This study highlights strengths and weaknesses of parental communication to children with Down syndrome and helps to identify areas of potential improvement through intervention. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. The phonological development of Danish-speaking children

    DEFF Research Database (Denmark)

    Clausen, Marit Carolin; Fox-Boyer, Annette

    2017-01-01

    Detailed knowledge of children's speech development is of great importance for speech and language therapists, since it provides a baseline for evaluating whether a child shows typical, delayed or deviant speech development. As previous studies have shown that differences are seen in speech development across languages, language-specific data are needed in order to understand how the phonological system of the ambient language affects children's speech acquisition. To date, little is known about typical speech development in Danish-speaking children; the present study addressed this through a cross-sectional design, i.e. by collecting normative data on the types and ages of occurrence of children's phonological processes as well as on the acquisition of phones and clusters. 443 Danish-speaking children aged 2;6-4;11 years from all regions of Denmark were assessed using a picture...

  19. Development of a System for Automatic Recognition of Speech

    Directory of Open Access Journals (Sweden)

    Roman Jarina

    2003-01-01

    Full Text Available The article gives a review of research on the processing and automatic recognition of speech signals (ARR) at the Department of Telecommunications of the Faculty of Electrical Engineering, University of Žilina. On-going research is oriented to speech parametrization using 2-dimensional cepstral analysis, and to an application of HMMs and neural networks for speech recognition in the Slovak language. The article summarizes achieved results and outlines the future orientation of our research in automatic speech recognition.
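
    As a minimal illustration of cepstral speech parametrization (a basic one-dimensional real cepstrum, not the two-dimensional variant mentioned above), one windowed frame can be processed as follows; the synthetic "voiced" frame is an assumption made for demonstration only.

        import numpy as np

        def real_cepstrum(frame):
            """Real cepstrum of one windowed speech frame:
            inverse FFT of the log magnitude spectrum."""
            spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
            log_mag = np.log(np.abs(spectrum) + 1e-12)      # avoid log(0)
            return np.fft.irfft(log_mag, n=len(frame))

        # Toy voiced-like frame: harmonics of 100 Hz sampled at 8 kHz.
        fs = 8000
        t = np.arange(256) / fs
        frame = sum(np.sin(2 * np.pi * 100 * k * t) / k for k in range(1, 40))
        ceps = real_cepstrum(frame)

        # Low quefrencies reflect the spectral envelope; a peak near the pitch
        # period (10 ms, i.e. ~80 samples at 8 kHz) reflects the voicing.
        print("largest peak in 20-120 sample range (quefrency):",
              int(np.argmax(ceps[20:120])) + 20)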

  20. The value of delay in tidal energy development

    International Nuclear Information System (INIS)

    MacDougall, Shelley L.

    2015-01-01

    Despite robust research, prototype development and demonstration of in-stream tidal energy devices, progress to the commercialization stage has been slow. Some of this can be attributed to a lack readiness or financing. However, when uncertainty is high, a developer may choose to delay a project until more is known. The option to delay has value for a company. This study applies the real option valuation model to an investment in a 10 MW array of in-stream tidal energy conversion devices at the Fundy Ocean Research Centre for Energy (FORCE) in the Bay of Fundy, Nova Scotia, Canada. The values of investing and the option to delay are calculated. A sensitivity analysis of key drivers and scenarios with various input values to the option model are constructed to observe the impact on the 'invest versus delay' decision. The analysis suggests there is value in owning the option to develop, by leasing a FORCE berth, but waiting while uncertainty is resolved. Implications for policy-setting are discussed. - Highlights: • Analyze an invest-vs-delay decision in tidal energy conversion using real options. • Assess whether conditions are conducive to an economically rational decision to delay. • Identify aspects of the decision that can be influenced by government policy.
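
    A standard way to value an option to delay is a binomial (Cox-Ross-Rubinstein) lattice over the project value, exercised whenever immediate investment beats waiting; the sketch below is a generic illustration with hypothetical inputs, not the study's FORCE berth parameters.

        import math

        def option_to_delay(v0, invest_cost, sigma, r, years, steps=200):
            """Value an American-style option to invest (the 'option to delay')
            with a Cox-Ross-Rubinstein binomial lattice.
            v0: present value of project cash flows; invest_cost: capital outlay."""
            dt = years / steps
            u = math.exp(sigma * math.sqrt(dt))
            d = 1.0 / u
            p = (math.exp(r * dt) - d) / (u - d)
            disc = math.exp(-r * dt)

            # Option values at the final step, then roll back allowing early exercise.
            values = [max(v0 * u**j * d**(steps - j) - invest_cost, 0.0)
                      for j in range(steps + 1)]
            for i in range(steps - 1, -1, -1):
                values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                              v0 * u**j * d**(i - j) - invest_cost)
                          for j in range(i + 1)]
            return values[0]

        # Hypothetical numbers for illustration only.
        npv_now = 30e6 - 28e6
        opt = option_to_delay(v0=30e6, invest_cost=28e6, sigma=0.35, r=0.04, years=5)
        print(f"NPV of investing now: {npv_now/1e6:.1f} M; value with option to delay: {opt/1e6:.1f} M")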

  1. Polysyllable Speech Accuracy and Predictors of Later Literacy Development in Preschool Children with Speech Sound Disorders

    Science.gov (United States)

    Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen

    2017-01-01

    Purpose: The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables--words of three or more syllables--are important to consider because unlike…

  2. Profiles of verbal working memory growth predict speech and language development in children with cochlear implants.

    Science.gov (United States)

    Kronenberger, William G; Pisoni, David B; Harris, Michael S; Hoen, Helena M; Xu, Huiping; Miyamoto, Richard T

    2013-06-01

    Verbal short-term memory (STM) and working memory (WM) skills predict speech and language outcomes in children with cochlear implants (CIs) even after conventional demographic, device, and medical factors are taken into account. However, prior research has focused on single end point outcomes as opposed to the longitudinal process of development of verbal STM/WM and speech-language skills. In this study, the authors investigated relations between profiles of verbal STM/WM development and speech-language development over time. Profiles of verbal STM/WM development were identified through the use of group-based trajectory analysis of repeated digit span measures over at least a 2-year time period in a sample of 66 children (ages 6-16 years) with CIs. Subjects also completed repeated assessments of speech and language skills during the same time period. Clusters representing different patterns of development of verbal STM (digit span forward scores) were related to the growth rate of vocabulary and language comprehension skills over time. Clusters representing different patterns of development of verbal WM (digit span backward scores) were related to the growth rate of vocabulary and spoken word recognition skills over time. Different patterns of development of verbal STM/WM capacity predict the dynamic process of development of speech and language skills in this clinical population.

  3. Audiovisual speech perception development at varying levels of perceptual processing.

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  4. The Comorbidity between Attention-Deficit/Hyperactivity Disorder (ADHD) in Children and Arabic Speech Sound Disorder

    Science.gov (United States)

    Hariri, Ruaa Osama

    2016-01-01

    Children with Attention-Deficit/Hyperactivity Disorder (ADHD) often have co-existing learning disabilities and developmental weaknesses or delays in some areas, including speech (Rief, 2005). Seeing that phonological disorders include articulation errors and other forms of speech disorders, studies pertaining to children with ADHD symptoms who…

  5. Speech Motor Development in Childhood Apraxia of Speech : Generating Testable Hypotheses by Neurocomputational Modeling

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  6. Speech motor development in childhood apraxia of speech: generating testable hypotheses by neurocomputational modeling.

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  7. Speech-to-Speech Relay Service

    Science.gov (United States)

    Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  8. The Development of the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test

    Science.gov (United States)

    Mealings, Kiri T.; Demuth, Katherine; Buchholz, Jörg; Dillon, Harvey

    2015-01-01

    Purpose: Open-plan classroom styles are increasingly being adopted in Australia despite evidence that their high intrusive noise levels adversely affect learning. The aim of this study was to develop a new Australian speech perception task (the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test) and use it in an open-plan…

  9. Rural and remote speech-language pathology service inequities: An Australian human rights dilemma.

    Science.gov (United States)

    Jones, Debra M; McAllister, Lindy; Lyle, David M

    2018-02-01

    Access to healthcare is a fundamental human right for all Australians. Article 19 of the Universal Declaration of Human Rights acknowledges the right to freedom of opinion and to seek, receive and impart information and ideas. Capacities for self-expression and effective communication underpin the realisation of these fundamental human rights. For rural and remote Australian children this realisation is compromised by complex disadvantages and inequities that contribute to communication delays, inequity of access to essential speech-language pathology services and poorer later life outcomes. Localised solutions to the provision of civically engaged, accessible, acceptable and sustainable speech-language pathology services within rural and remote Australian contexts are required if we are to make substantive human rights gains. However, civically engaged and sustained healthcare can significantly challenge traditional professionalised perspectives on how best to design and implement speech-language pathology services that seek to address rural and remote communication needs and access inequities. A failure to engage these communities in the identification of childhood communication delays and solutions to address these delays, ultimately denies children, families and communities of their human rights for healthcare access, self-expression, self-dignity and meaningful inclusion within Australian society.

  10. Development of Trivia Game for speech understanding in background noise.

    Science.gov (United States)

    Schwartz, Kathryn; Ringleb, Stacie I; Sandberg, Hilary; Raymer, Anastasia; Watson, Ginger S

    2015-01-01

    Listening in noise is an everyday activity and poses a challenge for many people. To improve the ability to understand speech in noise, a computerized auditory rehabilitation game was developed. In Trivia Game players are challenged to answer trivia questions spoken aloud. As players progress through the game, the level of background noise increases. A study using Trivia Game was conducted as a proof-of-concept investigation in healthy participants. College students with normal hearing were randomly assigned to a control (n = 13) or a treatment (n = 14) group. Treatment participants played Trivia Game 12 times over a 4-week period. All participants completed objective (auditory-only and audiovisual formats) and subjective listening in noise measures at baseline and 4 weeks later. There were no statistical differences between the groups at baseline. At post-test, the treatment group significantly improved their overall speech understanding in noise in the audiovisual condition and reported significant benefits in their functional listening abilities. Playing Trivia Game improved speech understanding in noise in healthy listeners. Significant findings for the audiovisual condition suggest that participants improved face-reading abilities. Trivia Game may be a platform for investigating changes in speech understanding in individuals with sensory, linguistic and cognitive impairments.

  11. VOCAL DEVELOPMENT AS A MAIN CONDITION IN EARLY SPEECH AND LANGUAGE ACQUISITION

    Directory of Open Access Journals (Sweden)

    Marianne HOLM

    2005-06-01

    Full Text Available The objective of this research is the evident positive vocal development in pre-lingually deaf children who underwent cochlear implantation at an early age. The presented research compares the vocal speech expressions of three hearing-impaired children and two children with normal hearing from 10 months to 5 years of age. Comparisons of the spontaneous vocal expressions were conducted by sonagraphic analyses. Awareness of one's own voice as well as the voices of others is essential for the child's continuous vocal development from crying to speech. Supra-segmental factors such as rhythm, dynamics and melody play a very important role in this development.

  12. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and light weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when faced with constraints in computational resources.
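
    As a rough illustration of the multichannel idea (not NASA's actual algorithm), a delay-and-sum beamformer time-aligns the microphone channels toward the talker and averages them, which attenuates spatially uncorrelated noise; the steering delays are assumed known here, which is a simplification.

        import numpy as np

        def delay_and_sum(channels, delays_samples):
            """Time-align each microphone channel by its steering delay (in samples)
            and average, reinforcing the look direction and attenuating diffuse noise.
            channels: array of shape (n_mics, n_samples); delays may be fractional."""
            n_mics, n_samples = channels.shape
            freqs = np.fft.rfftfreq(n_samples)                # cycles per sample
            spectra = np.fft.rfft(channels, axis=1)
            # Linear-phase shift per channel (handles fractional delays), then average.
            shifts = np.exp(-2j * np.pi * np.outer(delays_samples, freqs))
            aligned = np.fft.irfft(spectra * shifts, n=n_samples, axis=1)
            return aligned.mean(axis=0)

        # Toy example: 4 mics receiving the same tone with known arrival delays
        # plus independent noise on each channel (all values illustrative).
        fs, dur = 16000, 0.5
        t = np.arange(int(fs * dur)) / fs
        true_delays = np.array([0.0, 2.0, 4.0, 6.0])          # samples
        speech = np.sin(2 * np.pi * 440 * t)
        mics = np.stack([np.roll(speech, int(d)) + 0.5 * np.random.randn(t.size)
                         for d in true_delays])
        out = delay_and_sum(mics, -true_delays)               # advance each channel to re-align
        print("residual noise variance after beamforming:",
              round(float((out - speech).var()), 3), "(vs ~0.25 per input channel)")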

  13. Speech Development

    Science.gov (United States)

    ... be delayed during the early years. Articulation problems (difficulties in making certain sounds) may persist in some ...

  14. Delayed cerebral development in twins with congenital hyperthyroidism.

    Science.gov (United States)

    Kopelman, A E

    1983-09-01

    Twins had congenital hyperthyroidism and delayed cerebral development manifested as ventriculomegaly, increased space in the interhemispheric fissure, and an exaggerated gyral pattern on cranial computed tomographic scans. At 3 1/2 years of age, both children had delayed development. Fetal and neonatal hyperthyroidism may interfere with normal brain growth and maturation, with both neuroanatomic and developmental sequelae.

  15. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  16. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    Science.gov (United States)

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

    Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discrete retainer. This device connects to a computer, tablet or a smartphone via Bluetooth connection. The developed TTS application converts text typed by the tongue into audible speech. Our studies have concluded that an 8-contact point configuration between the tongue and the TTS device would yield the best user precision and speed performance. On average using the TTS device inside the oral cavity takes 2.5 times longer than the pointer finger using a T9 (Text on 9 keys) keyboard configuration to type the same phrase. In conclusion, we have developed a discrete noninvasive wearable device that allows the vocally impaired individuals to communicate in real time.

  17. The speech perception skills of children with and without speech sound disorder.

    Science.gov (United States)

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes-/k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  19. Advocate: A Distributed Architecture for Speech-to-Speech Translation

    Science.gov (United States)

    2009-01-01

    tecture, are either wrapped natural-language processing (NLP) components or objects developed from scratch using the architecture's API. GATE is ... framework, we put together a demonstration Arabic-to-English speech translation system using both internally developed (Arabic speech recognition and MT ... conditions of our Arabic S2S demonstration system described earlier. Once again, the data size was varied and eighty identical requests were

  20. Developing speech resources from parliamentary data for South African english

    CSIR Research Space (South Africa)

    De Wet, Febe

    2016-05-01

    Full Text Available Presented at the Workshop on Spoken Language Technology for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia: "Developing Speech Resources from Parliamentary Data for South African English", Febe de Wet, Jaco Badenhorst, Thipe Modipa. Human...

  1. Expressive-Emotional Sides of the Development of The Preschool Child Speech by Means Onto Psychological Music Therapy

    OpenAIRE

    Volzhentseva Iryna

    2017-01-01

    In this article, the development of the expressive-emotional sides of preschool children's speech is considered by means of ontopsychological music therapy. A theoretical analysis of psychophysiological theories methodologically substantiates the development of the emotional and expressive sides of children's speech by means of active music therapy and the interaction of speech and music as related, mutually influencing sign-semiotic kinds of activ...

  2. THE BASIS FOR SPEECH PREVENTION

    Directory of Open Access Journals (Sweden)

    Jordan JORDANOVSKI

    1997-06-01

    Full Text Available Speech is a tool for the accurate communication of ideas. When we talk about speech prevention as a practical realization of language, we are referring to the fact that it should comprise the elements of a criterion viewed from the perspective of the standard language. This criterion, in the broad sense of the word, presupposes an exact realization of the thought conveyed between the speaker and the recipient. The absence of this criterion is evident in the practical realization of language and brings consequences, often hidden very deeply in the human psyche. Their outer manifestation already represents a delayed reaction of the social environment. The foundation for overcoming and standardizing this phenomenon must be the anatomical-physiological patterns of the body, addressed through methods in concordance with the nature of the body.

  3. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
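
    Both STI and SII reduce, in essence, to band-wise apparent speech-to-noise ratios that are clipped to roughly a ±15 dB range, mapped to [0, 1], and combined with band-importance weights; a toy version (with illustrative weights and levels, not the standardized octave-band values) can be sketched as follows.

        import numpy as np

        def simplified_snr_index(speech_band_levels_db, noise_band_levels_db, weights):
            """Toy speech-intelligibility-style index: per-band SNR is clipped to
            [-15, +15] dB, linearly mapped to [0, 1], and combined with
            band-importance weights. The weights used below are illustrative,
            not the standardized SII/STI values."""
            snr = np.asarray(speech_band_levels_db) - np.asarray(noise_band_levels_db)
            snr = np.clip(snr, -15.0, 15.0)
            band_index = (snr + 15.0) / 30.0
            weights = np.asarray(weights) / np.sum(weights)
            return float(np.sum(weights * band_index))

        # Octave-band levels in dB (hypothetical measurement at one listening position).
        speech = [62, 60, 57, 53, 48, 43, 38]
        noise = [55, 50, 45, 42, 40, 38, 36]
        weights = [0.06, 0.12, 0.18, 0.24, 0.20, 0.14, 0.06]
        print(f"Simplified index: {simplified_snr_index(speech, noise, weights):.2f}")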

  4. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  5. Does Bilingualism Delay the Development of Dementia?

    Directory of Open Access Journals (Sweden)

    Amy L Atkinson

    2016-08-01

    Full Text Available It has been suggested that bilingualism (where individuals speak two languages may delay the development of dementia. However, much of the research is inconclusive. Some researchers have reported that bilingualism delays the onset and diagnosis of dementia, whilst other studies have found weak or even detrimental effects. This paper reviews a series of nine empirical studies, published up until March 2016, which investigated whether bilingualism significantly delays the onset of dementia. The article also explores whether the inconsistent findings can be attributed to differences in study designs or the definitions of bilingualism used between studies. Based on current evidence, it appears that lifelong bilingualism, where individuals frequently use both languages, may be protective against dementia. However, becoming bilingual in adulthood or using the second language infrequently is unlikely to substantially delay onset of the disease.

  6. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. Also it provided significant contributions to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  7. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  8. The influence of hearing aids on the speech and language development of children with hearing loss.

    Science.gov (United States)

    Tomblin, J Bruce; Oleson, Jacob J; Ambrose, Sophie E; Walker, Elizabeth; Moeller, Mary Pat

    2014-05-01

    IMPORTANCE Hearing loss (HL) in children can be deleterious to their speech and language development. The standard of practice has been early provision of hearing aids (HAs) to moderate these effects; however, there have been few empirical studies evaluating the effectiveness of this practice on speech and language development among children with mild-to-severe HL. OBJECTIVE To investigate the contributions of aided hearing and duration of HA use to speech and language outcomes in children with mild-to-severe HL. DESIGN, SETTING, AND PARTICIPANTS An observational cross-sectional design was used to examine the association of aided hearing levels and length of HA use with levels of speech and language outcomes. One hundred eighty 3- and 5-year-old children with HL were recruited through records of Universal Newborn Hearing Screening and referrals from clinical service providers in the general community in 6 US states. INTERVENTIONS All but 4 children had been fitted with HAs, and measures of aided hearing and the duration of HA use were obtained. MAIN OUTCOMES AND MEASURES Standardized measures of speech and language ability were obtained. RESULTS Measures of the gain in hearing ability for speech provided by the HA were significantly correlated with levels of speech (ρ179 = 0.20; P = .008) and language (ρ155 = 0.21; P = .01) ability. These correlations were indicative of modest levels of association between aided hearing and speech and language outcomes. These benefits were found for children with mild and moderate-to-severe HL. In addition, the amount of benefit from aided hearing interacted with the duration of HA experience (Speech: F4,161 = 4.98; P < .001; Language: F4,138 = 2.91; P < .02). Longer duration of HA experience was most beneficial for children who had the best aided hearing. CONCLUSIONS AND RELEVANCE The degree of improved hearing provided by HAs was associated with better speech and language development in children

  9. Cultural-historical and cognitive approaches to understanding the origins of development of written speech

    Directory of Open Access Journals (Sweden)

    L.F. Obukhova

    2014-08-01

    Full Text Available We present an analysis of the emergence and development of written speech, its relationship to oral speech, and its connections to the symbolic and modeling activities of preschool children – playing and drawing. While a child's drawing is traditionally interpreted in psychology either as a measure of intellectual development, or as a projective technique, or as a criterion for creative giftedness of the child, in this article the artistic activity is analyzed as a prerequisite for the development of written speech. The article substantiates the hypothesis that the mastery of “picture writing” – the ability to display verbal content in a schematic pictorial plan – is connected to success with written speech at school age. Along with the classical works of L.S. Vygotsky, D.B. Elkonin, and A.R. Luria, dedicated to finding the origins of writing, the article presents current Russian and foreign frameworks for forming the preconditions of writing, based on the concepts of cultural-historical theory (“higher mental functions”, “zone of proximal development”, etc.). In Western psychology, a number of pilot studies used the developmental function of drawing for teaching writing skills to children of 5-7 years old. However, in cognitive psychology, the relationship between drawing and writing is most often reduced mainly to the analysis of general motor circuits. Despite the recovery in research on writing and its origins in the last decade, either in domestic or in foreign psychology, written speech remains an insufficiently studied problem.

  10. Development of language, Mathematics and self-independence abilities of a five-year-old with speech delay using educational toys

    Directory of Open Access Journals (Sweden)

    Geertruida Maya

    2017-12-01

    Full Text Available The objective of this research was to identify the development of language and Mathematics abilities as well as the self-independence of a five-year-old child with a speech disorder. The study was conducted over an eight-week period in which playing with educational toys was the main activity. This is a descriptive qualitative study in which the data were collected using direct interviews, checklist instruments and observations. Based on intensive observations using a four-point Likert scale, there were medium increases in the observed variables. For language ability the average score increased from 1.5 to 2.9 with an N-gain of 0.56, for Mathematics ability the average score increased from 1.7 to 3.0 with an N-gain of 0.57, and for self-independence the average score increased from 2.1 to 3.0 with an N-gain of 0.47. A longitudinal study of the child for one or two years is needed to arrive at more meaningful and conclusive findings.
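
    The reported values are consistent with the Hake-style normalized gain computed on the four-point scale (an assumption; the abstract does not state the formula explicitly, but a maximum score of 4 reproduces all three figures):

```latex
\[
\langle g \rangle = \frac{\text{post} - \text{pre}}{\text{max} - \text{pre}}, \qquad
\frac{2.9 - 1.5}{4 - 1.5} \approx 0.56, \quad
\frac{3.0 - 1.7}{4 - 1.7} \approx 0.57, \quad
\frac{3.0 - 2.1}{4 - 2.1} \approx 0.47.
\]
```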

  11. Maternal and paternal pragmatic speech directed to young children with Down syndrome and typical development

    OpenAIRE

    de Falco, Simona; Venuti, Paola; Esposito, Gianluca; Bornstein, Marc H.

    2011-01-01

    The aim of this study was to compare functional features of maternal and paternal speech directed to children with Down syndrome and developmental age-matched typically developing children. Altogether 88 parents (44 mothers and 44 fathers) and their 44 young children (22 children with Down syndrome and 22 typically developing children) participated. Parents’ speech directed to children was obtained through observation of naturalistic parent–child dyadic interactions. Verbatim transcripts of m...

  12. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  13. Medico-biological factors of speech and language development in young children (part 1

    Directory of Open Access Journals (Sweden)

    Chernov D.N.

    2015-03-01

    Full Text Available The article analyzes the main results of medico-biological research on the factors of children's speech and language development. It shows that a variety of pre-, peri- and neonatal developmental factors (teratogenic effects, prematurity, low birth weight, maternal diseases during pregnancy, and chronic diseases of the child) have a negative impact on the child-parent relationship, which in turn has a lasting influence on child speech and language development.

  14. Development of delayed radiation necrosis. Case report

    Energy Technology Data Exchange (ETDEWEB)

    Ohara, Shigeki; Takagi, Terumasa [Meitetsu Hospital, Nagoya (Japan)]; Shibata, Taichiro; Nagai, Hajime

    1983-04-01

    The authors discussed the developing process of delayed radiation necrosis of the brain from the case of a 42-year-old female who developed intracranial hypertension and left hemiparesis 5 and a half years after radiotherapy for pituitary adenoma. The initial sign of radiation necrosis was from a CT scan taken 3 and a half years after radiotherapy showing an irregular low density lesion in the right temporal lobe. CT scan 2 years later demonstrated displacement of the midline structures to the left and a larger low density lesion with partially high density in the right MCA territory that was enhanced with intravenous contrast medium. Recovery after a right temporal lobectomy and administration of steroid hormone were uneventful. Eight months later there were no signs of raised intracranial pressure nor of neurological deficits. Tissues obtained from the right temporal lobe at lobectomy revealed the characteristic changes of delayed radiation necrosis; a mixture of fresh, recent, and old vascular lesions in the same specimen. From these findings, it was speculated that delayed radiation necrosis might initially occur within several years after radiotherapy and might gradually take a progressive and extended course, even in cases whose clinical symptoms develop much later.

  15. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR)-based approaches for the speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR-based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transfers human speech into transcript text by matching it with the system's library. This is particularly useful in speech rehabilitation therapies as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR-based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR is dependent on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as our depth of knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.
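
    A minimal sketch of the feedback loop such systems implement is shown below. The `recognize` stub stands in for a real ASR engine, and the string-similarity threshold is an illustrative assumption rather than a value taken from the reviewed studies.

```python
from difflib import SequenceMatcher

def recognize(audio_path):
    """Stand-in for an ASR engine call (hypothetical); returns a transcript."""
    raise NotImplementedError("plug an ASR engine in here")

def therapy_feedback(audio_path, target_word, threshold=0.8):
    """Compare the recognized attempt with the target word and return feedback."""
    attempt = recognize(audio_path).strip().lower()
    similarity = SequenceMatcher(None, attempt, target_word.lower()).ratio()
    if similarity >= threshold:
        return f"Good: '{attempt}' matches the target '{target_word}'."
    return f"Try again: heard '{attempt}', expected '{target_word}'."
```

    Real therapy systems typically score at the phoneme rather than the word-string level, which is where the accuracy factors listed above become critical.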

  16. Delays in early neuropsychic development: Approaches to diagnosis

    Directory of Open Access Journals (Sweden)

    N. N. Zavadenko

    2015-01-01

    Full Text Available The population frequency of neuropsychic developmental delays in infants is estimated at nearly 10%; that of global intellectual disability (mental retardation) is 1-3%. Delayed development is defined as a substantial retardation, as compared to the standard indicators, in any of the basic spheres: motor, communicative, cognitive, adaptive-behavioral, and socioemotional. Global developmental delay is characterized by a significant lag in two or more spheres. The use of current diagnostic techniques, such as the Bayley or Griffiths scales, can provide an objective quantitative assessment of both an infant's overall development and indicators in individual spheres. At the preliminary examination stage, it is expedient to carry out a Denver developmental screening test that may be directly used in a doctor's consulting room. The causes of global developmental delay/intellectual disability in infants may be perinatal central nervous system (CNS) lesions; brain malformations; intrauterine infections; intrauterine intoxications; early-onset psychoneurological diseases (neuroinfections, CNS injuries, epilepsies, autism spectrum disorders, etc.); congenital hypothyroidism; and genetic diseases. Among the genetic causes of global developmental delay/intellectual disability are chromosomal anomalies (25-30%) and monogenic diseases (metabolic diseases, neuroectodermal syndromes, diseases with predominant grey and white matter involvement). The diagnostic possibilities of current genetic methods are considered.

  17. Television viewing associates with delayed language development.

    Science.gov (United States)

    Chonchaiya, Weerasak; Pruksananonda, Chandhita

    2008-07-01

    To identify the impact of television viewing on language development. The case-control study included 56 new patients with language delay and 110 normal children, aged 15-48 months. Language delay was diagnosed by reviewing language milestones and the Denver-II. Television viewing variables and child/parental characteristics in both groups were obtained by interview. The data were analyzed by ANOVA and chi-square test. Adjusted odds ratios and 95% confidence intervals were calculated from a multivariate logistic regression model. Forty-six boys and 10 girls (mean [+/-SD] age, 2.11+/-0.47 years) in the case group and 59 boys and 51 girls (mean [+/-SD] age, 2.23+/-0.80 years) in the control group were enrolled. Children who had language delay usually started watching television earlier (at age 7.22+/-5.52 months vs. 11.92+/-5.86 months) and spent more time watching television than normal children (3.05+/-1.90 h/day vs. 1.85+/-1.18 h/day); both differences were statistically significant. Children who began watching television at an earlier age and who watched television >2 h/day were approximately six times more likely to have language delays. There is a relationship between early onset and high frequency of TV viewing and language delay.
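
    For readers unfamiliar with how adjusted odds ratios are obtained, the hedged sketch below fits a multivariate logistic regression on synthetic data and exponentiates the coefficients to get adjusted odds ratios with 95% confidence intervals. The variable names and coefficient values are invented for illustration only and do not reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration only: outcome = language delay (0/1),
# predictors = age television viewing began (months) and daily viewing hours.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "onset_months": rng.uniform(4, 24, n),
    "hours_per_day": rng.uniform(0, 5, n),
})
logit_p = -2.0 - 0.15 * df["onset_months"] + 0.8 * df["hours_per_day"]
df["delay"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["onset_months", "hours_per_day"]])
model = sm.Logit(df["delay"], X).fit(disp=False)

odds_ratios = np.exp(model.params)   # adjusted odds ratios
ci = np.exp(model.conf_int())        # 95% confidence intervals
print(pd.concat([odds_ratios, ci], axis=1))
```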

  18. Persistent Language Delay Versus Late Language Emergence in Children With Early Cochlear Implantation

    Science.gov (United States)

    Nicholas, Johanna; Tobey, Emily; Davidson, Lisa

    2016-01-01

    Purpose The purpose of the present investigation is to differentiate children using cochlear implants (CIs) who did or did not achieve age-appropriate language scores by midelementary grades and to identify risk factors for persistent language delay following early cochlear implantation. Materials and Method Children receiving unilateral CIs at young ages (12–38 months) were tested longitudinally and classified with normal language emergence (n = 19), late language emergence (n = 22), or persistent language delay (n = 19) on the basis of their test scores at 4.5 and 10.5 years of age. Relative effects of demographic, audiological, linguistic, and academic characteristics on language emergence were determined. Results Age at CI was associated with normal language emergence but did not differentiate late emergence from persistent delay. Children with persistent delay were more likely to use left-ear implants and older speech processor technology. They experienced higher aided thresholds and lower speech perception scores. Persistent delay was foreshadowed by low morphosyntactic and phonological diversity in preschool. Logistic regression analysis predicted normal language emergence with 84% accuracy and persistent language delay with 74% accuracy. Conclusion CI characteristics had a strong effect on persistent versus resolving language delay, suggesting that right-ear (or bilateral) devices, technology upgrades, and improved audibility may positively influence long-term language outcomes. PMID:26501740

  19. The speech-based envelope power spectrum model (sEPSM) family: Development, achievements, and current challenges

    DEFF Research Database (Denmark)

    Relano-Iborra, Helia; Chabot-Leclerc, Alexandre; Scheidiger, Christoph

    2017-01-01

    Intelligibility models provide insights regarding the effects of target speech characteristics, transmission channels and/or auditory processing on the speech perception performance of listeners. In 2011, Jørgensen and Dau proposed the speech-based envelope power spectrum model (sEPSM); later studies have extended the predictive power of the original model to a broad range of conditions. This contribution presents the most recent developments within the sEPSM “family:” (i) a binaural extension, the B-sEPSM [Chabot-Leclerc et al. (2016). J. Acoust. Soc. Am. 140(1), 192-205], which combines better...
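
    The core quantity behind the sEPSM family is an envelope power signal-to-noise ratio. The sketch below is a heavily simplified, single-band illustration of that idea (the published models use a gammatone filterbank, a modulation filterbank and an ideal-observer back end); it is a conceptual aid, not the model itself.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_power(x):
    """AC power of the Hilbert envelope (DC removed)."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    return float(np.mean(env ** 2))

def snr_env_db(noisy_speech, noise):
    """Crude broadband envelope-power SNR: (P_mix - P_noise) / P_noise, in dB."""
    p_mix = envelope_power(noisy_speech)
    p_noise = envelope_power(noise)
    p_signal = max(p_mix - p_noise, 1e-10)  # envelope power attributed to speech
    return 10 * np.log10(p_signal / max(p_noise, 1e-10))
```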

  20. Delayed Development of Pneumothorax After Pulmonary Radiofrequency Ablation

    International Nuclear Information System (INIS)

    Clasen, Stephan; Kettenbach, Joachim; Kosan, Bora; Aebert, Hermann; Schernthaner, Melanie; Kroeber, Stefan-Martin; Boemches, Andrea; Claussen, Claus D.; Pereira, Philippe L.

    2009-01-01

    Acute pneumothorax is a frequent complication after percutaneous pulmonary radiofrequency (RF) ablation. In this study we present three cases showing delayed development of pneumothorax after pulmonary RF ablation in 34 patients. Our purpose is to draw attention to this delayed complication and to propose a possible approach to avoid this major complication. These three cases occurred subsequent to 44 CT-guided pulmonary RF ablation procedures (6.8%) using either internally cooled or multitined expandable RF electrodes. In two patients, the pneumothorax, being initially absent at the end of the intervention, developed without symptoms. One of these patients required chest drain placement 32 h after RF ablation, and in the second patient therapy remained conservative. In the third patient, a slight pneumothorax at the end of the intervention gradually increased and led into tension pneumothorax 5 days after ablation procedure. Underlying bronchopleural fistula along the coagulated former electrode track was diagnosed in two patients. In conclusion, delayed development of pneumothorax after pulmonary RF ablation can occur and is probably due to underlying bronchopleural fistula, potentially leading to tension pneumothorax. Patients and interventionalists should be prepared for delayed onset of this complication, and extensive track ablation following pulmonary RF ablation should be avoided.

  1. Development of a statistically based access delay timeline methodology.

    Energy Technology Data Exchange (ETDEWEB)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard to uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
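
    The report's methodology is Bayesian; as a much simpler illustration of the underlying shift from point estimates to distributions, the sketch below Monte Carlo samples per-task delays and sums them along an adversary path. The lognormal parameters and example tasks are invented for illustration.

```python
import numpy as np

def path_delay_samples(task_params, n_samples=10_000, seed=0):
    """Monte Carlo distribution of total path delay.

    task_params: list of (median_seconds, sigma) pairs; each task delay is
    drawn from a lognormal whose spread reflects uncertainty about that task.
    """
    rng = np.random.default_rng(seed)
    totals = np.zeros(n_samples)
    for median, sigma in task_params:
        totals += rng.lognormal(mean=np.log(median), sigma=sigma, size=n_samples)
    return totals

# Hypothetical path: breach a fence, defeat a door, cross an open area.
samples = path_delay_samples([(30, 0.4), (120, 0.6), (45, 0.3)])
print("median total delay (s):", np.median(samples))
print("10th percentile (s):   ", np.percentile(samples, 10))  # pessimistic case for defenders
```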

  2. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: II. Validity Studies of the Pause Marker

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The purpose of this 2nd article in this supplement is to report validity support findings for the Pause Marker (PM), a proposed single-sign diagnostic marker of childhood apraxia of speech (CAS). Method: PM scores and additional perceptual and acoustic measures were obtained from 296 participants in cohorts with idiopathic and…

  3. LinguaTag: an Emotional Speech Analysis Application

    OpenAIRE

    Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros

    2008-01-01

    The analysis of speech, particularly for emotional content, is an open area of current research. Ongoing work has developed an emotional speech corpus for analysis, and defined a vowel stress method by which this analysis may be performed. This paper documents the development of LinguaTag, an open source speech analysis software application which implements this vowel stress emotional speech analysis method developed as part of research into the acoustic and linguistic correlates of emotional...

  4. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  5. The Role of Speech-Gesture Congruency and Delay in Remembering Action Events

    Science.gov (United States)

    Galati, Alexia; Samuel, Arthur G.

    2011-01-01

    When watching others describe events, does information from their speech and gestures affect our memory representations for the gist and surface form of the described events? Does our reliance on these memory representations change over time? Forty participants watched videos of stories narrated by an actor. Each story included three target events…

  6. The Effect of a Voice Activity Detector on the Speech Enhancement

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Buchholz, Jörg

    2010-01-01

    A multimicrophone speech enhancement algorithm for binaural hearing aids that preserves interaural time delays was proposed recently. The algorithm is based on multichannel Wiener filtering and relies on a voice activity detector (VAD) for estimation of second-order statistics. Here, the effect of a VAD on the speech enhancement of this algorithm was evaluated using an envelope-based VAD, and the performance was compared to that achieved using an ideal error-free VAD. The performance was considered for stationary directional noise and nonstationary diffuse noise interferers at input SNRs from −10...
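
    As a rough single-channel illustration of how a VAD feeds the noise statistics of a Wiener-type enhancer (the cited algorithm is a binaural multichannel Wiener filter and considerably more involved), consider the sketch below. Frame length, hop size and thresholds are arbitrary assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Cut a 1-D signal into overlapping frames (assumes len(x) >= frame_len)."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def energy_vad(frames, threshold_db=-40.0):
    """Flag frames whose energy lies within threshold_db of the loudest frame."""
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return energy_db > (energy_db.max() + threshold_db)

def vad_driven_wiener(frames, vad_flags):
    """Wiener-style gain using noise statistics estimated in speech pauses."""
    spec = np.fft.rfft(frames, axis=1)
    pauses = spec[~vad_flags]
    if len(pauses) == 0:  # assume at least some speech pauses were detected
        pauses = spec[:1]
    noise_psd = np.mean(np.abs(pauses) ** 2, axis=0) + 1e-12
    gain = np.maximum(1.0 - noise_psd / (np.abs(spec) ** 2 + 1e-12), 0.05)
    return np.fft.irfft(gain * spec, n=frames.shape[1], axis=1)
```

    VAD errors propagate directly into the noise estimate, which is why the comparison against an ideal error-free VAD is informative.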

  7. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. The Comorbidity between Attention-Deficit/Hyperactivity Disorder (ADHD) in Children and Arabic Speech Sound Disorder

    Directory of Open Access Journals (Sweden)

    Ruaa Osama Hariri

    2016-04-01

    Full Text Available Children with Attention-Deficit/Hyperactivity Disorder (ADHD) often have co-existing learning disabilities and developmental weaknesses or delays in some areas, including speech (Rief, 2005). Seeing that phonological disorders include articulation errors and other forms of speech disorders, studies pertaining to children with ADHD symptoms who demonstrate signs of phonological disorders in their native Arabic language are lacking. The purpose of this study is to provide a description of Arabic language deficits and to present a theoretical model of potential associations between phonological language deficits and ADHD. Dodd and McCormack's (1995) four-subgroup classification of speech disorder and the phonological disorders pertaining to the Arabic language provided by a Saudi Institute for Speech and Hearing are examined within the theoretical framework. Since intervention may improve articulation and focuses a child's attention on the sound structure of words, findings in this study are based on the assumption that children with ADHD may acquire phonology for their Arabic language in the same way, and following the same developmental stages, as intelligible children. Both quantitative and qualitative analyses have proven that the ADHD group analyzed in this study had indeed failed to acquire most of their Arabic consonants as they should have. Keywords: speech sound disorder, attention-deficiency/hyperactive, developmental disorder, phonological disorder, language disorder/delay, language impairment

  9. PRACTICING SPEECH THERAPY INTERVENTION FOR SOCIAL INTEGRATION OF CHILDREN WITH SPEECH DISORDERS

    Directory of Open Access Journals (Sweden)

    Martin Ofelia POPESCU

    2016-11-01

    Full Text Available The article presents a concise speech correction intervention program for dyslalia in conjunction with the development of intra- and interpersonal capacities and the social integration of children with speech disorders. The program's main objectives are: increasing the potential for individual social integration by correcting speech disorders in conjunction with intra- and interpersonal capacities, and increasing the potential of children and community groups for social integration by optimizing the socio-relational context of children with speech disorders. The program included 60 children/students with dyslalia speech disorders (monomorphic and polymorphic dyslalia) from 11 educational institutions - 6 kindergartens and 5 schools/secondary schools - affiliated with the inter-school logopedic centre (CLI) from Targu Jiu city and areas of Gorj district. The program was implemented under the assumption that therapeutic-formative intervention to correct speech disorders and facilitate social integration would, in combination with the correction of pronunciation disorders, lead to better social integration of children with speech disorders. The results confirm the hypothesis and demonstrate the efficiency of the intervention program.

  10. Intervention for bilingual speech sound disorders: A case study of an isiXhosa-English-speaking child.

    Science.gov (United States)

    Rossouw, Kate; Pascoe, Michelle

    2018-03-19

     Bilingualism is common in South Africa, with many children acquiring isiXhosa as a home language and learning English from a young age in nursery or crèche. IsiXhosa is a local language, part of the Bantu language family, widely spoken in the country. Aims: To describe changes in a bilingual child's speech following intervention based on a theoretically motivated and tailored intervention plan. Methods and procedures: This study describes a female isiXhosa-English bilingual child, named Gcobisa (pseudonym) (chronological age 4 years and 2 months) with a speech sound disorder. Gcobisa's speech was assessed and her difficulties categorised according to Dodd's (2005) diagnostic framework. From this, intervention was planned and the language of intervention was selected. Following intervention, Gcobisa's speech was reassessed. Outcomes and results: Gcobisa's speech was categorised as a consistent phonological delay as she presented with gliding of /l/ in both English and isiXhosa, cluster reduction in English and several other age appropriate phonological processes. She was provided with 16 sessions of intervention using a minimal pairs approach, targeting the phonological process of gliding of /l/, which was not considered age appropriate for Gcobisa in isiXhosa when compared to the small set of normative data regarding monolingual isiXhosa development. As a result, the targets and stimuli were in isiXhosa while the main language of instruction was English. This reflects the language mismatch often faced by speech language therapists in South Africa. Gcobisa showed evidence of generalising the target phoneme to English words. Conclusions and implications: The data have theoretical implications regarding bilingual development of isiXhosa-English, as it highlights the ways bilingual development may differ from the monolingual development of this language pair. It adds to the small set of intervention studies investigating the changes in the speech of bilingual

  11. Intervention for bilingual speech sound disorders: A case study of an isiXhosa–English-speaking child

    Directory of Open Access Journals (Sweden)

    Kate Rossouw

    2018-03-01

    Full Text Available Background: Bilingualism is common in South Africa, with many children acquiring isiXhosa as a home language and learning English from a young age in nursery or crèche. IsiXhosa is a local language, part of the Bantu language family, widely spoken in the country.   Aims: To describe changes in a bilingual child’s speech following intervention based on a theoretically motivated and tailored intervention plan.   Methods and procedures: This study describes a female isiXhosa–English bilingual child, named Gcobisa (pseudonym) (chronological age 4 years and 2 months) with a speech sound disorder. Gcobisa’s speech was assessed and her difficulties categorised according to Dodd’s (2005) diagnostic framework. From this, intervention was planned and the language of intervention was selected. Following intervention, Gcobisa’s speech was reassessed.   Outcomes and results: Gcobisa’s speech was categorised as a consistent phonological delay as she presented with gliding of /l/ in both English and isiXhosa, cluster reduction in English and several other age appropriate phonological processes. She was provided with 16 sessions of intervention using a minimal pairs approach, targeting the phonological process of gliding of /l/, which was not considered age appropriate for Gcobisa in isiXhosa when compared to the small set of normative data regarding monolingual isiXhosa development. As a result, the targets and stimuli were in isiXhosa while the main language of instruction was English. This reflects the language mismatch often faced by speech language therapists in South Africa. Gcobisa showed evidence of generalising the target phoneme to English words.   Conclusions and implications: The data have theoretical implications regarding bilingual development of isiXhosa–English, as it highlights the ways bilingual development may differ from the monolingual development of this language pair. It adds to the small set of intervention studies

  12. [Speech perception development in children with dyslexia].

    Science.gov (United States)

    Ortiz, Rosario; Jiménez, Juan E; Muñetón, Mercedes; Rojas, Estefanía; Estévez, Adelina; Guzmán, Remedios; Rodríguez, Cristina; Naranjo, Francisco

    2008-11-01

    Several studies have indicated that dyslexics show a deficit in speech perception (SP). The main purpose of this research is to determine the development of SP in dyslexics and normal readers, paired by grade from 2nd to 6th grade of primary school, and to establish whether the phonetic contrasts that are relevant for SP change during development, taking individual differences into account. The achievement of both groups was compared on the phonetic tasks: voicing contrast, place of articulation contrast and manner of articulation contrast. The results showed that the dyslexics performed more poorly than the normal readers in SP. In the place of articulation contrast, the developmental pattern is similar in both groups, but not in voicing and manner of articulation. Manner of articulation has more influence on SP, and its development is greater than that of the other contrast tasks in both groups.

  13. Integrating Music Therapy Services and Speech-Language Therapy Services for Children with Severe Communication Impairments: A Co-Treatment Model

    Science.gov (United States)

    Geist, Kamile; McCarthy, John; Rodgers-Smith, Amy; Porter, Jessica

    2008-01-01

    Documenting how music therapy can be integrated with speech-language therapy services for children with communication delay is not evident in the literature. In this article, a collaborative model with procedures, experiences, and communication outcomes of integrating music therapy with the existing speech-language services is given. Using…

  14. Audiovisual speech perception development at varying levels of perceptual processing

    OpenAIRE

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the le...

  15. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    Science.gov (United States)

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  16. Speech and language development in toddlers with and without cleft palate

    NARCIS (Netherlands)

    Priester, G. H.; Goorhuis-Brouwer, S. M.

    Objective: The effect of early palate closure on speech and language development in children with cleft palate. Design: Comparative study. Setting: University Medical Center Groningen, Cleft Palate Team (The Netherlands). Materials and methods: Forty-three toddlers with cleft palate and thirty-two

  17. Collective speech acts

    NARCIS (Netherlands)

    Meijers, A.W.M.; Tsohatzidis, S.L.

    2007-01-01

    From its early development in the 1960s, speech act theory always had an individualistic orientation. It focused exclusively on speech acts performed by individual agents. Paradigmatic examples are ‘I promise that p’, ‘I order that p’, and ‘I declare that p’. There is a single speaker and a single

  18. Development of a speech-based dialogue system for report dictation and machine control in the endoscopic laboratory.

    Science.gov (United States)

    Molnar, B; Gergely, J; Toth, G; Pronai, L; Zagoni, T; Papik, K; Tulassay, Z

    2000-01-01

    Reporting and machine control based on speech technology can enhance work efficiency in the gastrointestinal endoscopy laboratory. The status and activation of endoscopy laboratory equipment were described as a multivariate parameter and function system. Speech recognition, text evaluation and action definition engines were installed. Special programs were developed for the grammatical analysis of command sentences, and a rule-based expert system for the definition of machine answers. A speech backup engine provides feedback to the user. Techniques were applied based on the "Hidden Markov" model of discrete word, user-independent speech recognition and on phoneme-based speech synthesis. Speech samples were collected from three male low-tone investigators. The dictation module and machine control modules were incorporated in a personal computer (PC) simulation program. Altogether 100 unidentified patient records were analyzed. The sentences were grouped according to keywords, which indicate the main topics of a gastrointestinal endoscopy report. They were: "endoscope", "esophagus", "cardia", "fundus", "corpus", "antrum", "pylorus", "bulbus", and "postbulbar section", in addition to the major pathological findings: "erosion", "ulceration", and "malignancy". "Biopsy" and "diagnosis" were also included. We implemented wireless speech communication control commands for equipment including an endoscopy unit, video, monitor, printer, and PC. The recognition rate was 95%. Speech technology may soon become an integrated part of our daily routine in the endoscopy laboratory. A central speech and laboratory computer could be the most efficient alternative to having separate speech recognition units in all items of equipment.
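
    A toy version of the rule-based mapping from recognized command sentences to machine actions might look like the following. The device names and keyword table are hypothetical and are not the authors' grammar or expert system.

```python
# Hypothetical command table: keyword combinations mapped to device actions.
COMMANDS = {
    ("video", "start"):    ("video_unit", "record_on"),
    ("video", "stop"):     ("video_unit", "record_off"),
    ("printer", "print"):  ("printer", "print_report"),
    ("monitor", "freeze"): ("monitor", "freeze_frame"),
}

def parse_command(transcript):
    """Very small rule-based mapping from a recognized sentence to a machine action."""
    words = set(transcript.lower().split())
    for keywords, action in COMMANDS.items():
        if all(k in words for k in keywords):
            return action
    return None  # no rule matched; a real system would ask the user to repeat

print(parse_command("please start the video recording"))  # ('video_unit', 'record_on')
```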

  19. Speech and Speech-Related Quality of Life After Late Palate Repair: A Patient's Perspective.

    Science.gov (United States)

    Schönmeyr, Björn; Wendby, Lisa; Sharma, Mitali; Jacobson, Lia; Restrepo, Carolina; Campbell, Alex

    2015-07-01

    Many patients with cleft palate deformities worldwide receive treatment at a later age than is recommended for normal speech to develop. The outcomes after late palate repairs in terms of speech and quality of life (QOL) still remain largely unstudied. In the current study, questionnaires were used to assess the patients' perception of speech and QOL before and after primary palate repair. All of the patients were operated on at a cleft center in northeast India and had a cleft palate with a normal lip or with a cleft lip that had been previously repaired. A total of 134 patients (7-35 years) were interviewed preoperatively and 46 patients (7-32 years) were assessed in the postoperative survey. The survey showed that scores based on the speech handicap index, concerning speech and speech-related QOL, did not improve postoperatively. In fact, the questionnaires indicated that the speech became more unpredictable after surgery. Nevertheless, many patients reported that their self-confidence had improved after the operation. Thus, the majority of interviewed patients who underwent late primary palate repair were satisfied with the surgery. At the same time, speech and speech-related QOL did not improve according to the speech handicap index-based survey. Speech predictability may even become worse and nasal regurgitation may increase after late palate repair, according to these results.

  20. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship between inner speech and overt naming (r = .95), whereas the correlations between inner speech and the language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  1. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  2. Speech and non-speech processing in children with phonological disorders: an electrophysiological study

    Directory of Open Access Journals (Sweden)

    Isabela Crivellaro Gonçalves

    2011-01-01

    Full Text Available OBJECTIVE: To determine whether neurophysiological auditory brainstem responses to clicks and repeated speech stimuli differ between typically developing children and children with phonological disorders. INTRODUCTION: Phonological disorders are language impairments resulting from inadequate use of adult phonological language rules and are among the most common speech and language disorders in children (prevalence: 8-9%). Our hypothesis is that children with phonological disorders have basic differences in the way that their brains encode acoustic signals at brainstem level when compared to normal counterparts. METHODS: We recorded click and speech evoked auditory brainstem responses in 18 typically developing children (control group) and in 18 children who were clinically diagnosed with phonological disorders (research group). The age range of the children was from 7-11 years. RESULTS: The research group exhibited significantly longer latency responses to click stimuli (waves I, III and V) and speech stimuli (waves V and A) when compared to the control group. DISCUSSION: These results suggest that the abnormal encoding of speech sounds may be a biological marker of phonological disorders. However, these results cannot define the biological origins of phonological problems. We also observed that speech-evoked auditory brainstem responses had a higher specificity/sensitivity for identifying phonological disorders than click-evoked auditory brainstem responses. CONCLUSIONS: Early stages of the auditory pathway processing of an acoustic stimulus are not similar in typically developing children and those with phonological disorders. These findings suggest that there are brainstem auditory pathway abnormalities in children with phonological disorders.

  3. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  4. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread use of wavelet analysis in different applications of speech processing. The author examines development and research in different applications of speech processing. The book also summarizes the state-of-the-art research on wavelets in speech processing.
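
    A typical entry point for wavelets in speech processing is soft-threshold denoising. The sketch below uses PyWavelets with the universal threshold as one plausible choice; the wavelet family, decomposition level and threshold rule are assumptions of this illustration, not recommendations from the book.

```python
import numpy as np
import pywt

def wavelet_denoise(speech, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising of a 1-D speech signal."""
    coeffs = pywt.wavedec(speech, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(speech)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(speech)]
```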

  5. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper describes the interface between the machine translation and speech synthesis components of an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for integrating speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been considered. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation to investigate the impact of speech synthesis, machine translation and the integration of the machine translation and speech synthesis components. Here we implement a hybrid machine translation (a combination of rule-based and statistical machine translation) and a concatenative syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this system investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
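
    The three-module architecture reads naturally as a pipeline. The skeleton below shows the data flow only; the stubs stand in for the actual ASR, hybrid MT and syllable-based TTS engines described in the paper and are hypothetical placeholders.

```python
# Hypothetical component interfaces; each stub stands in for a real engine.
def recognize_english(audio):
    raise NotImplementedError("English ASR engine goes here")

def translate_en_to_ta(text):
    raise NotImplementedError("hybrid (rule-based + statistical) MT goes here")

def synthesize_tamil(text):
    raise NotImplementedError("syllable-based concatenative TTS with prosody model goes here")

def speech_to_speech(audio):
    """Three-stage pipeline: ASR -> MT -> TTS."""
    english_text = recognize_english(audio)
    tamil_text = translate_en_to_ta(english_text)
    return synthesize_tamil(tamil_text)
```

    Errors compound along the chain, which is why the paper's finding that synthesis quality depends strongly on translation fluency is unsurprising but worth quantifying.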

  6. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. The book offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; and discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks...

  7. Accelerometer-based automatic voice onset detection in speech mapping with navigated repetitive transcranial magnetic stimulation.

    Science.gov (United States)

    Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P

    2015-09-30

    The use of navigated repetitive transcranial magnetic stimulation (rTMS) in the mapping of speech-related brain areas has recently been shown to be useful in the preoperative workflow of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for additional development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no such easily available setup exists for speech mappings. We present an accelerometer-based setup for detection of vocalization-related larynx vibrations, combined with an automatic routine for voice onset detection, for rTMS speech mapping applying naming. The results produced by the automatic routine were compared with the manually reviewed video-recordings. The new method was applied in the routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, resulting in 96% sensitivity and 71% specificity. The majority (63%) of the misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. The no-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analysis of the rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results. Copyright © 2015 Elsevier B.V. All rights reserved.
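
    A minimal version of voice onset detection from an accelerometer signal can be an envelope threshold; the sketch below is such a baseline, far simpler than the published routine, and the smoothing window and threshold ratio are illustrative assumptions.

```python
import numpy as np

def detect_voice_onset(acc, fs, threshold_ratio=0.1, smooth_ms=10.0):
    """Return the first time (s) the smoothed accelerometer envelope exceeds
    a fraction of its maximum, or None if it never does."""
    win = max(1, int(fs * smooth_ms / 1000))
    envelope = np.convolve(np.abs(acc), np.ones(win) / win, mode="same")
    above = np.nonzero(envelope > threshold_ratio * envelope.max())[0]
    return above[0] / fs if above.size else None
```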

  8. Delays in clinical development of neurological drugs in Japan.

    Science.gov (United States)

    Ikeda, Masayuki

    2017-06-28

    The delays in the approval and development of neurological drugs between Japan and other countries have been a major issue for patients with neurological diseases. The objective of this study was to analyze factors contributing to the delay in the launching of neurological drugs in Japan. We analyzed data from Japan and the US for the approval of 42 neurological drugs, all of which were approved earlier in the US than in Japan, and examined the potential factors that may cause the delay of their launch. Introductions of the 42 drugs in Japan occurred at a median of 87 months after their introductions in the US. The mean review time of new drug applications for the 20 drugs introduced in Japan in January 2011 or later (15 months) was significantly shorter than that for the other 22 drugs introduced in Japan in December 2010 or earlier (24 months). The lag in Japan's review time behind the US could not explain the approval delays. In 31 of the 42 drugs, the application data package included overseas data. The mean review time of these 31 drugs (17 months) was significantly shorter than that of the other 11 drugs without overseas data (26 months). The mean approval lag behind the US of the 31 drugs (78 months) was also significantly shorter than that of the other 11 drugs (134 months). These results show that several important reforms in the Japanese drug development and approval system (e.g., inclusion of global clinical trial data) have reduced the delays in the clinical development of neurological drugs.

  9. Packet loss replacement in voip using a recursive low-order autoregressive modelbased speech

    International Nuclear Information System (INIS)

    Miralavi, Seyed Reza; Ghorshi, Seyed; Mortazavi, Mohammad; Choupan, Jeiran

    2011-01-01

    In real-time packet-based communication systems, one major problem is misrouted or delayed packets, which result in degraded perceived voice quality. When some speech packets are not available on time, the packet is known as a lost packet in real-time communication systems. The easiest task of a network terminal receiver is to substitute silence for the duration of lost speech segments. In a high quality communication system, in order to avoid quality reduction due to packet loss, a suitable method and/or algorithm is needed to replace the missing segments of speech. In this paper, we introduce a recursive low-order autoregressive (AR) model for the replacement of lost speech segments. The evaluation results show that this method has a lower mean square error (MSE) and low complexity compared to other efficient methods, like the high-order AR model, without any substantial degradation in perceived voice quality.
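
    The sketch below illustrates the general idea of AR-based concealment, assuming one clean preceding frame is available: estimate low-order AR coefficients from that frame via the Yule-Walker equations and extrapolate samples into the lost segment. It is a generic illustration, not the authors' recursive algorithm.

```python
import numpy as np

def ar_coefficients(frame, order=8):
    """Yule-Walker estimate of low-order AR coefficients from the last good frame."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    acf = acf / (acf[0] + 1e-12)
    R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
    r = acf[1: order + 1]
    return np.linalg.solve(R, r)

def conceal_lost_packet(last_good, lost_len, order=8):
    """Extrapolate the missing segment sample-by-sample with the AR predictor."""
    a = ar_coefficients(last_good, order)
    history = list(last_good[-order:])          # oldest ... newest
    out = []
    for _ in range(lost_len):
        pred = float(np.dot(a, history[::-1]))  # x[n] = sum_k a_k * x[n-k]
        out.append(pred)
        history = history[1:] + [pred]
    return np.array(out)
```

    In practice the extrapolated segment is also faded and cross-faded with the next good packet to avoid audible discontinuities.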

  10. Research and development of a versatile portable speech prosthesis

    Science.gov (United States)

    1981-01-01

    The Versatile Portable Speech Prosthesis (VPSP), a synthetic speech output communication aid for non-speaking people is described. It was intended initially for severely physically limited people with cerebral palsy who are in electric wheelchairs. Hence, it was designed to be placed on a wheelchair and powered from a wheelchair battery. It can easily be separated from the wheelchair. The VPSP is versatile because it is designed to accept any means of single switch, multiple switch, or keyboard control which physically limited people have the ability to use. It is portable because it is mounted on and can go with the electric wheelchair. It is a speech prosthesis, obviously, because it speaks with a synthetic voice for people unable to speak with their own voices. Both hardware and software are described.

  11. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.

  12. The chairman's speech

    International Nuclear Information System (INIS)

    Allen, A.M.

    1986-01-01

    The paper contains a transcript of a speech by the chairman of the UKAEA, to mark the publication of the 1985/6 annual report. The topics discussed in the speech include: the Chernobyl accident and its effect on public attitudes to nuclear power, management and disposal of radioactive waste, the operation of UKAEA as a trading fund, and the UKAEA development programmes. The development programmes include work on the following: fast reactor technology, thermal reactors, reactor safety, health and safety aspects of water cooled reactors, the Joint European Torus, and under-lying research. (U.K.)

  13. Robust Frequency Invariant Beamforming with Low Sidelobe for Speech Enhancement

    Science.gov (United States)

    Zhu, Yiting; Pan, Xiang

    2018-01-01

    Frequency invariant beamformers (FIBs) are widely used in speech enhancement and source localization. There are two traditional optimization methods for FIB design. The first is convex optimization, which is simple, but the frequency-invariant characteristic of the beam pattern is poor over a frequency band of five octaves. The least squares (LS) approach using a spatial response variation (SRV) constraint is another optimization method. Although it can provide a good frequency-invariant property, it usually cannot be used in speech enhancement because it lacks a weight-norm constraint, which is related to the robustness of a beamformer. In this paper, a robust wideband beamforming method with a constant beamwidth is proposed. The frequency-invariant beam pattern is achieved by solving an optimization problem with the SRV constraint so as to cover the speech frequency band. With control of the sidelobe level, the frequency invariant beamformer (FIB) can prevent distortion from interference arriving from undesirable directions. The approach is implemented in the time domain by placing tapped delay lines (TDL) and finite impulse response (FIR) filters at the output of each sensor, which is more convenient than the Frost processor. By invoking the weight-norm constraint, the robustness of the beamformer is further improved against random errors. Experimental results show that the proposed method has a constant beamwidth and almost the same white noise gain as the traditional delay-and-sum (DAS) beamformer.
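
    For reference, the baseline the proposed method is compared against, a delay-and-sum (DAS) beamformer, can be sketched compactly in the frequency domain. The Python sketch below (numpy only) steers a uniform linear array toward a chosen direction; the array geometry, sampling rate, and input block are hypothetical, and the paper's SRV-constrained, sidelobe-controlled FIR design is not reproduced here.

        # Minimal sketch: frequency-domain delay-and-sum beamforming for a uniform linear array.
        import numpy as np

        def delay_and_sum(frames, fs, mic_spacing, steer_deg, c=343.0):
            """frames: (n_mics, n_samples) array of time-aligned microphone signals."""
            n_mics, n_samples = frames.shape
            freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
            spectra = np.fft.rfft(frames, axis=1)
            # Relative propagation delays of a plane wave from `steer_deg` (broadside = 0 degrees).
            delays = np.arange(n_mics) * mic_spacing * np.sin(np.deg2rad(steer_deg)) / c
            # Phase compensation; the sign convention depends on the chosen geometry.
            steering = np.exp(2j * np.pi * np.outer(delays, freqs))
            aligned = spectra * steering
            return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

        # Hypothetical usage: 4 microphones, 8 cm spacing, look direction 20 degrees.
        mics = np.random.randn(4, 2048)        # stand-in for one block of recordings
        enhanced = delay_and_sum(mics, fs=16000, mic_spacing=0.08, steer_deg=20.0)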

  14. Attention and Word Learning in Autistic, Language Delayed and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Elena eTenenbaum

    2014-05-01

    Full Text Available Previous work has demonstrated that patterns of social attention hold predictive value for language development in typically developing infants. The goal of this research was to explore how patterns of attention in autistic, language delayed, and typically developing children relate to early word learning and language abilities. We tracked patterns of eye movements to faces and objects while children watched videos of a woman teaching them a series of new words. Subsequent test trials measured participants' recognition of these novel word-object pairings. Results indicated that greater attention to the speaker's mouth was related to higher scores on standardized measures of language development for autistic and typically developing children (but not for language delayed children). This effect was mediated by age for typically developing, but not autistic, children. When effects of age were controlled for, attention to the mouth among language delayed participants was negatively correlated with standardized measures of language learning. Attention to the speaker's mouth and eyes while she was teaching the new words was also predictive of faster recognition of the newly learned words among autistic children. These results suggest that language delays among children with autism may be driven in part by aberrant social attention, and that the mechanisms underlying these delays may differ from those in language delayed participants without autism.

  15. The Effect of Furlow Palatoplasty Timing on Speech Outcomes in Submucous Cleft Palate.

    Science.gov (United States)

    Swanson, Jordan W; Mitchell, Brianne T; Cohen, Marilyn; Solot, Cynthia; Jackson, Oksana; Low, David; Bartlett, Scott P; Taylor, Jesse A

    2017-08-01

    Because some patients with submucous cleft palate (SMCP) are asymptomatic, surgical treatment is conventionally delayed until hypernasal resonance is identified during speech production. We aim to identify whether speech outcomes after repair of an SMCP are influenced by age at repair. We retrospectively studied nonsyndromic children with SMCP. Speech results before and after any surgical treatment or physical management of the palate were compared using the Pittsburgh Weighted Speech Scoring system. Furlow palatoplasty was performed on 40 nonsyndromic patients with SMCP, and 26 patients were not surgically treated. Total composite speech scores improved significantly among children repaired between 3 and 4 years of age (P = 0.02), but not among those repaired after 4 years of age (P = 0.63). Twelve (86%) of 14 patients repaired after 4 years of age had borderline or incompetent speech (composite Pittsburgh Weighted Speech Score ≥3) compared with 2 (29%) of 7 repaired between 3 and 4 years of age (P = 0.0068), despite worse prerepair scores in the latter group. Resonance improved in children repaired after 4 years of age, but articulation errors persisted to a greater degree than in those treated before 4 years of age (P = 0.01). CONCLUSIONS: Submucous cleft palate repair before 4 years of age appears associated with lower ultimate rates of borderline or incompetent speech. Speech of patients repaired at or after 4 years of age seems to be characterized by persistent misarticulation. These findings highlight the importance of timely diagnosis and management.

  16. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  17. Look Who's Talking: Speech Style and Social Context in Language Input to Infants Are Linked to Concurrent and Future Speech Development

    Science.gov (United States)

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2014-01-01

    Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impacts language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental…

  18. Expanding the phenotypic profile of Kleefstra syndrome: A female with low-average intelligence and childhood apraxia of speech.

    Science.gov (United States)

    Samango-Sprouse, Carole; Lawson, Patrick; Sprouse, Courtney; Stapleton, Emily; Sadeghin, Teresa; Gropman, Andrea

    2016-05-01

    Kleefstra syndrome (KS) is a rare neurogenetic disorder most commonly caused by deletion in the 9q34.3 chromosomal region and is associated with intellectual disabilities, severe speech delay, and motor planning deficits. To our knowledge, this is the first patient (PQ, a 6-year-old female) with a 9q34.3 deletion who has near normal intelligence, and developmental dyspraxia with childhood apraxia of speech (CAS). At 6, the Wechsler Preschool and Primary Intelligence testing (WPPSI-III) revealed a Verbal IQ of 81 and Performance IQ of 79. The Beery Buktenica Test of Visual Motor Integration, 5th Edition (VMI) indicated severe visual motor deficits: VMI = 51; Visual Perception = 48; Motor Coordination explanation for the previously reported speech delay and expressive language disorder. Further research is warranted on the impact of CAS on intelligence and behavioral outcome in KS. Therapeutic and prognostic implications are discussed. © 2016 Wiley Periodicals, Inc.

  19. Speech neglect: A strange educational blind spot

    Science.gov (United States)

    Harris, Katherine Safford

    2005-09-01

    Speaking is universally acknowledged as an important human talent, yet as a topic of educated common knowledge, it is peculiarly neglected. Partly, this is a consequence of the relatively recent growth of research on speech perception, production, and development, but also a function of the way that information is sliced up by undergraduate colleges. Although the basic acoustic mechanism of vowel production was known to Helmholtz, the ability to view speech production as a physiological event is evolving even now with such techniques as fMRI. Intensive research on speech perception emerged only in the early 1930s as Fletcher and the engineers at Bell Telephone Laboratories developed the transmission of speech over telephone lines. The study of speech development was revolutionized by the papers of Eimas and his colleagues on speech perception in infants in the 1970s. Dissemination of knowledge in these fields is the responsibility of no single academic discipline. It forms a center for two departments, Linguistics, and Speech and Hearing, but in the former, there is a heavy emphasis on other aspects of language than speech and, in the latter, a focus on clinical practice. For psychologists, it is a rather minor component of a very diverse assembly of topics. I will focus on these three fields in proposing possible remedies.

  20. A MEDICAL APPROACH TO LANGUAGE DELAY

    African Journals Online (AJOL)

    Enrique

    The evaluation of speech development in a child requires a range of skills embodied ... need to understand the terminology used by speech therapists in order to facilitate ...

  1. Issues in developing valid assessments of speech pathology students' performance in the workplace.

    Science.gov (United States)

    McAllister, Sue; Lincoln, Michelle; Ferguson, Alison; McAllister, Lindy

    2010-01-01

    Workplace-based learning is a critical component of professional preparation in speech pathology. A validated assessment of this learning is seen to be 'the gold standard', but it is difficult to develop because of design and validation issues. These issues include the role and nature of judgement in assessment, challenges in measuring quality, and the relationship between assessment and learning. Valid assessment of workplace-based performance needs to capture the development of competence over time and account for both occupation specific and generic competencies. This paper reviews important conceptual issues in the design of valid and reliable workplace-based assessments of competence including assessment content, process, impact on learning, measurement issues, and validation strategies. It then goes on to share what has been learned about quality assessment and validation of a workplace-based performance assessment using competency-based ratings. The outcomes of a four-year national development and validation of an assessment tool are described. A literature review of issues in conceptualizing, designing, and validating workplace-based assessments was conducted. Key factors to consider in the design of a new tool were identified and built into the cycle of design, trialling, and data analysis in the validation stages of the development process. This paper provides an accessible overview of factors to consider in the design and validation of workplace-based assessment tools. It presents strategies used in the development and national validation of a tool, COMPASS, used in every speech pathology programme in Australia, New Zealand, and Singapore. The paper also describes Rasch analysis, a model-based statistical approach which is useful for establishing validity and reliability of assessment tools. Through careful attention to conceptual and design issues in the development and trialling of workplace-based assessments, it has been possible to develop the

  2. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers.

    Science.gov (United States)

    Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia

    2018-01-17

    During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  4. Apraxia of Speech

    Science.gov (United States)

    What is apraxia of speech? Apraxia of speech (AOS)—also known as acquired ...

  5. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). The Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, because fluency improved only in the individuals without auditory processing disorder.
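
    The stimulus manipulation itself, playing the speaker's own voice back with a fixed 100 ms delay, is simple to prototype. The sketch below assumes the sounddevice package as an illustrative substitute for the Phono Tools software used in the study; the sampling rate and block size are hypothetical.

        # Minimal sketch: a 100 ms delayed-auditory-feedback (DAF) loop.
        import numpy as np
        import sounddevice as sd   # assumed audio I/O package, not used in the study itself

        fs = 16000
        delay_samples = int(fs * 0.100)                     # 100 ms, as in the study
        buffer = np.zeros(delay_samples, dtype="float32")   # FIFO delay line

        def callback(indata, outdata, frames, time, status):
            global buffer
            buffer = np.concatenate([buffer, indata[:, 0]])
            outdata[:, 0] = buffer[:frames]                 # samples captured ~100 ms earlier
            buffer = buffer[frames:]

        # Full-duplex stream: microphone in, delayed signal out to headphones.
        with sd.Stream(samplerate=fs, blocksize=256, channels=1,
                       dtype="float32", callback=callback):
            sd.sleep(10_000)                                # run the DAF loop for 10 s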

  6. Intranasal insulin to improve developmental delay in children with 22q13 deletion syndrome: an exploratory clinical trial

    OpenAIRE

    Schmidt, Heinrich; Giese, Renate; Enders, Angelika; Kern, W.; Hallschmid, M.

    2009-01-01

    Background: The 22q13 deletion syndrome (Phelan–McDermid syndrome) is characterised by a global developmental delay, absent or delayed speech, generalised hypotonia, autistic behaviour and characteristic phenotypic features. Intranasal insulin has been shown to improve declarative memory in healthy adult subjects and in patients with Alzheimer disease. Aims: To assess if intranasal insulin is also able to improve the developmental delay in children with 22q13 delet...

  7. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  8. Speech Intelligibility Evaluation for Mobile Phones

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Cubick, Jens; Dau, Torsten

    2015-01-01

    In the development process of modern telecommunication systems, such as mobile phones, it is common practice to use computer models to objectively evaluate the transmission quality of the system, instead of time-consuming perceptual listening tests. Such models have typically focused on the quality of the transmitted speech, while little or no attention has been provided to speech intelligibility. The present study investigated to what extent three state-of-the-art speech intelligibility models could predict the intelligibility of noisy speech transmitted through mobile phones. Sentences from the Danish Dantale II speech material were mixed with three different kinds of background noise, transmitted through three different mobile phones, and recorded at the receiver via a local network simulator. The speech intelligibility of the transmitted sentences was assessed by six normal-hearing listeners...

  9. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech function in regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere--to support the production of vocal tract gestures that are not limited to speech processing.

  10. Sector-Based Detection for Hands-Free Speech Enhancement in Cars

    Directory of Open Access Journals (Sweden)

    Bourgeois Julien

    2006-01-01

    Full Text Available Adaptation control of beamforming interference cancellation techniques is investigated for in-car speech acquisition. Two efficient adaptation control methods are proposed that avoid target cancellation. The "implicit" method varies the step-size continuously, based on the filtered output signal. The "explicit" method decides in a binary manner whether to adapt or not, based on a novel estimate of target and interference energies. It estimates the average delay-sum power within a volume of space, for the same cost as the classical delay-sum. Experiments on real in-car data validate both methods, including a case with km/h background road noise.

  11. Speech on the general states of enterprises and the sustainable development

    International Nuclear Information System (INIS)

    2006-01-01

    In this speech the author points out two main recommendations. The first message concerns the necessity of a whole mobilization in favor of sustainable development, from government policy and enterprise management to human behavior. He then presents three main axes for raising the awareness of enterprises (reinforcing the information on the environmental and social impact of economic activities, the development of sustainable investments, and the development of environmental sponsorship). The second message concerns the necessity of placing the environment within economic growth through the development of ecology and eco-technology. (A.L.B.)

  12. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
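
    The analysis step shared by the linear-prediction coders surveyed here can be illustrated with a short Levinson-Durbin routine. The Python sketch below (numpy and scipy assumed) computes LPC coefficients and the prediction residual for one frame; it is a generic textbook formulation, not the structure of any particular standard mentioned in the article, and the frame is a stand-in signal.

        # Minimal sketch: short-term linear prediction analysis for one speech frame.
        import numpy as np
        from scipy.signal import lfilter

        def levinson_durbin(r, order):
            """Solve the Yule-Walker equations; returns A(z) coefficients [1, a1, ..., ap]."""
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
                a[1:i] = a[1:i] + k * a[i - 1:0:-1]
                a[i] = k
                err *= (1.0 - k * k)
            return a, err

        def lpc_analyse(frame, order=10):
            """Return LPC coefficients, prediction residual, and residual energy."""
            windowed = frame * np.hamming(len(frame))
            full = np.correlate(windowed, windowed, mode="full")
            r = full[len(windowed) - 1:len(windowed) + order]       # autocorrelation lags 0..order
            a, err = levinson_durbin(r, order)
            residual = lfilter(a, [1.0], windowed)                  # inverse filtering by A(z)
            return a, residual, err

        # Hypothetical usage on one 20 ms frame (160 samples at 8 kHz) of a stand-in signal.
        frame = np.sin(2 * np.pi * 180 * np.arange(160) / 8000) + 0.01 * np.random.randn(160)
        a, residual, err = lpc_analyse(frame)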

  13. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia record ... and inaccuracies in the anaesthesia record. Supplementing the electronic anaesthesia record interface with speech input facilities is proposed as one possible solution to a part of the problem. The testing of the various hypotheses has involved the development of a prototype of an electronic anaesthesia record interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition...

  14. Mnemonic abilities of primary school children with delayed mental development.

    Directory of Open Access Journals (Sweden)

    Murafa S.V.

    2015-07-01

    Full Text Available This paper presents the results of research on the mnemonic abilities of primary school children with developmental delays. Empirical studies of impaired mental development offer an opportunity to elucidate the psychological mechanisms underlying normal development and enable us to consider at a micro level the formation of mental processes in ontogeny, which would, under normal conditions, be nondescript and not always amenable to psychological analysis. The research addresses an experimental investigation of the productivity and qualitative characteristics of mnemonic abilities among primary school students with developmental delays. V.D. Shadrikov's Theory of Abilities, developed within a systemic approach framework, is the theoretical basis of the research. The method of deploying a memorization activity, as elaborated by V.D. Shadrikov and L.V. Cheremoshkina, was the investigation tool used. The sample included 100 students (66 boys and 34 girls) in grades 1 to 4, between the ages of 7 and 12. The control group of primary school students with typical development included 105 children (50 boys and 55 girls). The research consisted of several stages: a pilot study; experimental research (the test task was to memorize card #1; the basic task was to memorize cards #2 and #3, to reproduce cards #2 and #3, and to poll the students); mathematical data processing; and a description of the levels of mnemonic ability development among primary students with developmental delays. The following procedures were employed during statistical analysis: Spearman's rank correlation, the Mann-Whitney U-test, the Jonckheere-Terpstra test, and the Kruskal-Wallis test. The structure of mnemonic abilities in primary schoolchildren with developmental delays was found to vary according to the underdevelopment of their operational mechanisms. For example, memory functions are based on the use of inborn mechanisms, and a portion of children differ in the

  15. Follow-up of premature children with high risk for growth and development delay: a multiprofessional assessment

    Directory of Open Access Journals (Sweden)

    Marcia de Freitas

    2010-06-01

    Full Text Available Objective: To describe the activities of a multiprofessional outpatient clinic staffed by a neonatologist, physiatrist, physical therapist, occupational therapist, speech therapist, audiologist and psychologist, who evaluated the development of premature newborns. Methods: Twenty children born at a tertiary-care hospital (São Paulo, Brazil) between April 2006 and April 2007, with birth weight below 1250 g or gestational age below 32 weeks, were evaluated. The multiprofessional evaluation included assessment of development using the Bayley III scale at the corrected ages of 3, 6, 9, 12, 18 and 24 months. Results: The mean gestational age at birth was 28.8 weeks; the mean birth weight was 1055 g. The mean maternal age was 35 years and the mean length of stay of the neonates was 46.3 days. Fifteen percent of the children presented impaired sensory motor skills, 20% had hearing abnormalities and 10% motor alterations. The Bayley III showed alterations in the communication area in 10% of subjects and in the motor area in 10% of individuals. The parents were instructed to stimulate the child, or a specific intervention was suggested. The greatest developmental delay was observed between 6 and 18 months of age, and development had improved by 24 months of age. Conclusions: Most children evaluated showed improved growth and development at 24 months of corrected age. Further studies with a larger sample are recommended, as well as follow-up of this population group until primary school.

  16. Specific features of the Galician language and implications for speech technology development

    OpenAIRE

    2008-01-01

    Corresponding author: Eduardo Rodriguez Banga; co-author: Manuel Gonzalez Gonzalez. Universidade de Santiago de Compostela and Universidad de Vigo, Spain.

  17. Intranasal insulin to improve developmental delay in children with 22q13 deletion syndrome: an exploratory clinical trial.

    Science.gov (United States)

    Schmidt, H; Kern, W; Giese, R; Hallschmid, M; Enders, A

    2009-04-01

    The 22q13 deletion syndrome (Phelan-McDermid syndrome) is characterised by a global developmental delay, absent or delayed speech, generalised hypotonia, autistic behaviour and characteristic phenotypic features. Intranasal insulin has been shown to improve declarative memory in healthy adult subjects and in patients with Alzheimer disease. To assess if intranasal insulin is also able to improve the developmental delay in children with 22q13 deletion syndrome. We performed exploratory clinical trials in six children with 22q13 deletion syndrome who received intranasal insulin over a period of 1 year. Short-term (during the first 6 weeks) and long-term effects (after 12 months of treatment) on motor skills, cognitive functions, or autonomous functions, speech and communication, emotional state, social behaviour, behavioural disorders, independence in daily living and education were assessed. The children showed marked short-term improvements in gross and fine motor activities, cognitive functions and educational level. Positive long-term effects were found for fine and gross motor activities, nonverbal communication, cognitive functions and autonomy. Possible side effects were found in one patient who displayed changes in balance, extreme sensitivity to touch and general loss of interest. One patient complained of intermittent nose bleeding. We conclude that long-term administration of intranasal insulin may benefit motor development, cognitive functions and spontaneous activity in children with 22q13 deletion syndrome.

  18. To Speak or Not to Speak: Developing Legal Standards for Anonymous Speech on the Internet

    Directory of Open Access Journals (Sweden)

    Tomas A. Lipinski

    2002-01-01

    Full Text Available This paper explores recent developments in the regulation of Internet speech, specifically injurious or defamatory speech and the impact such speech has on the rights of anonymous speakers to remain anonymous as opposed to having their identity revealed to plaintiffs or other third parties. The paper proceeds in four sections. First, a brief history of the legal attempts to regulate defamatory Internet speech in the United States is presented. As discussed below, this regulation has altered the traditional legal paradigm of responsibility and as a result creates potential problems for the future of anonymous speech on the Internet. As a result, plaintiffs are no longer pursuing litigation against service providers but taking their dispute directly to the anonymous speaker. Second, several cases have arisen in the United States where plaintiffs have requested that the identity of an anonymous Internet speaker be revealed. These cases are surveyed. Third, the cases are analyzed in order to determine the factors that courts require to be present before the identity of an anonymous speaker will be revealed. The release is typically accomplished by the enforcement of a discovery subpoena instigated by the party seeking the identity of the anonymous speaker. The factors courts have used are as follows: jurisdiction, good faith (both internal and external), necessity (basic and sometimes absolute), and at times proprietary interest. Finally, these factors are applied in three scenarios--e-commerce, education, and employment--to guide institutions when adopting policies that regulate when the identity of an anonymous speaker--a customer, a student or an employee--would be released as part of an internal initiative, but would nonetheless be consistent with developing legal standards.

  19. Lateralized speech perception with small interaural time differences in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Locsei, Gusztav; Santurette, Sébastien; Dau, Torsten

    2017-01-01

    ... SRMs are elicited by small ITDs. Speech reception thresholds (SRTs) and SRM due to ITDs were measured over headphones for 10 young NH and 10 older HI listeners, who had normal or close-to-normal hearing below 1.5 kHz. Diotic target sentences were presented in diotic or dichotic speech-shaped noise or two-talker babble maskers. In the dichotic conditions, maskers were lateralized by delaying the masker waveforms in the left headphone channel. Multiple magnitudes of masker ITDs were tested in both noise conditions. Although deficits were observed in speech perception abilities in speech-shaped noise and two-talker babble in terms of SRTs, HI listeners could utilize ITDs to a similar degree as NH listeners to facilitate the binaural unmasking of speech. A slight difference was observed between the group means when target and maskers were separated from each other by large ITDs, but not when separated...

  20. Particularities of Speech Readiness for Schooling in Pre-School Children Having General Speech Underdevelopment: A Social and Pedagogical Aspect

    Science.gov (United States)

    Emelyanova, Irina A.; Borisova, Elena A.; Shapovalova, Olga E.; Karynbaeva, Olga V.; Vorotilkina, Irina M.

    2018-01-01

    The relevance of the research is due to the necessity of creating pedagogical conditions for the correction and development of speech in children with general speech underdevelopment. Such children characteristically have difficulty generating a coherent utterance, which prevents sufficient speech readiness for schooling from forming in them, as well…

  1. Preschool speech intelligibility and vocabulary skills predict long-term speech and language outcomes following cochlear implantation in early childhood.

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G; Beer, Jessica; Henning, Shirley C; Colson, Bethany G; Pisoni, David B

    2014-07-01

    Speech and language measures during grade school predict adolescent speech-language outcomes in children who receive cochlear implants (CIs), but no research has examined whether speech and language functioning at even younger ages is predictive of long-term outcomes in this population. The purpose of this study was to examine whether early preschool measures of speech and language performance predict speech-language functioning in long-term users of CIs. Early measures of speech intelligibility and receptive vocabulary (obtained during preschool ages of 3-6 years) in a sample of 35 prelingually deaf, early-implanted children predicted speech perception, language, and verbal working memory skills up to 18 years later. Age of onset of deafness and age at implantation added additional variance to preschool speech intelligibility in predicting some long-term outcome scores, but the relationship between preschool speech-language skills and later speech-language outcomes was not significantly attenuated by the addition of these hearing history variables. These findings suggest that speech and language development during the preschool years is predictive of long-term speech and language functioning in early-implanted, prelingually deaf children. As a result, measures of speech-language functioning at preschool ages can be used to identify and adjust interventions for very young CI users who may be at long-term risk for suboptimal speech and language outcomes.

  2. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  3. Development an Automatic Speech to Facial Animation Conversion for Improve Deaf Lives

    Directory of Open Access Journals (Sweden)

    S. Hamidreza Kasaei

    2011-05-01

    Full Text Available In this paper, we propose the design and initial implementation of a robust system which can automatically translate voice into text and text into sign language animations. Sign Language Translation Systems could significantly improve deaf lives, especially in communication, the exchange of information, and employment, as machines could translate conversations from one language to another. Therefore, considering these points, it seems necessary to study speech recognition. Usually, voice recognition algorithms address three major challenges. The first is extracting features from speech, the second is recognition when only a limited sound gallery is available, and the final challenge is moving from speaker-dependent to speaker-independent voice recognition. Extracting features from speech is an important stage in our method. Different procedures are available for extracting features from speech. One of the commonest used in speech recognition systems is Mel-Frequency Cepstral Coefficients (MFCCs). The algorithm starts with preprocessing and signal conditioning. Next, features are extracted from the speech using cepstral coefficients. The result of this process is then sent to the segmentation part. Finally, the recognition part recognizes the words, and the recognized words are converted to facial animation. The project is still in progress, and some new interesting methods are described in the current report.
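
    As a concrete illustration of the MFCC extraction stage described above, the sketch below uses the librosa package as an assumed substitute for whatever toolchain the authors actually used; the window length, hop size, and number of coefficients are typical values, not the authors' settings.

        # Minimal sketch: MFCC feature extraction for one recorded utterance.
        import librosa

        def extract_mfcc(wav_path, n_mfcc=13):
            signal, sr = librosa.load(wav_path, sr=16000)            # resample to 16 kHz
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc,
                                        n_fft=400, hop_length=160)   # 25 ms windows, 10 ms hop
            return mfcc.T                                            # (frames, n_mfcc)

        # Hypothetical usage: features = extract_mfcc("utterance.wav")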

  4. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Full Text Available Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of synchrony measure for biometric identity verification based on talking faces is experimented on the BANCA database.
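
    One family of correspondence measures reviewed in work of this kind projects the audio and visual feature streams onto maximally correlated directions and reports the resulting correlation. The sketch below (assuming numpy and scikit-learn) illustrates that idea with canonical correlation analysis on hypothetical stand-in features; it is one possible measure, not the specific measures evaluated in the paper.

        # Minimal sketch: an audio-visual synchrony score via canonical correlation analysis.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        def av_synchrony(audio_feats, visual_feats):
            """Both inputs are (n_frames, n_dims) arrays sampled at the same frame rate."""
            cca = CCA(n_components=1)
            a_proj, v_proj = cca.fit_transform(audio_feats, visual_feats)
            return np.corrcoef(a_proj.ravel(), v_proj.ravel())[0, 1]

        # Hypothetical usage with random stand-in features for 300 video frames.
        rng = np.random.default_rng(1)
        audio = rng.normal(size=(300, 13))                          # e.g., MFCC frames
        visual = 0.5 * audio[:, :6] + rng.normal(size=(300, 6))     # partially correlated stream
        print(av_synchrony(audio, visual))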

  5. Speech and language pathology & pediatric HIV.

    Science.gov (United States)

    Retzlaff, C

    1999-12-01

    Children with HIV have critical speech and language issues because the virus manifests itself primarily in the developing central nervous system, sometimes causing speech, motor control, and language disabilities. Language impediments that develop during the second year of life seem to be especially severe. HIV-infected children are also susceptible to recurrent ear infections, which can damage hearing. Developmental issues must be addressed for these children to reach their full potential. A decline in language skills may coincide with or precede other losses in cognitive ability. A speech pathologist can play an important role on a pediatric HIV team. References are included.

  6. Ramathibodi Language Development Questionnaire: A Newly Developed Screening Tool for Detection of Delayed Language Development in Children Aged 18-30 Months.

    Science.gov (United States)

    Chuthapisith, Jariya; Wantanakorn, Pornchanok; Roongpraiwan, Rawiwan

    2015-08-01

    To develop a parental questionnaire for screening children with delayed language development in primary care settings. The Ramathibodi Language Development (RLD) questionnaire was developed and completed by the parents of two groups of children: 40 typically developing children aged 18 to 30 months and 30 children with delayed language development. The mean score was significantly lower in the delayed language group (6.7 ± 1.9) than in the typically developing group (9.6 ± 0.7). The optimal ROC curve cut-off score was 8, with a corresponding sensitivity and specificity of 98% and 72%, respectively. The corresponding area under the curve was 0.96 (95% CI = 0.92-0.99). The RLD questionnaire is a promising language development screening instrument that can easily be utilized in well-child examination settings.
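
    The way a cut-off such as the reported score of 8 can be derived from an ROC curve (for example by maximizing the Youden index) is sketched below, assuming scikit-learn; the scores are hypothetical stand-ins, not the study's data, and lower questionnaire scores are taken to indicate delay.

        # Minimal sketch: choosing a screening cut-off from an ROC curve.
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # 1 = delayed language development, 0 = typically developing (hypothetical labels/scores).
        labels = np.array([1] * 10 + [0] * 10)
        scores = np.array([4, 5, 5, 6, 6, 7, 7, 7, 8, 9,          # delayed group (lower scores)
                           8, 9, 9, 9, 10, 10, 10, 10, 10, 10])   # typical group

        # Negate scores so that "higher value = more likely delayed", as roc_curve expects.
        fpr, tpr, thresholds = roc_curve(labels, -scores)
        best = np.argmax(tpr - fpr)                                # Youden index J = sens + spec - 1
        print("AUC:", roc_auc_score(labels, -scores))
        print("cut-off (score <=):", -thresholds[best],
              "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])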

  7. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.

  8. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, were compared to the production of speech sylla...

  10. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  11. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating...
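
    The central quantity of the sEPSM, the signal-to-noise ratio in the envelope domain (SNRenv), can be sketched for a single audio band and a single modulation band as below (numpy and scipy assumed). The full model uses a gammatone filterbank, several modulation filters, and an ideal-observer decision stage, none of which is reproduced here; the signals are stand-ins.

        # Minimal sketch: envelope-domain SNR (SNRenv) in one modulation band.
        import numpy as np
        from scipy.signal import hilbert

        def envelope_band_power(x, fs, mod_band=(1.0, 8.0)):
            """Envelope power in one modulation band, normalized by the DC envelope power."""
            env = np.abs(hilbert(x))
            spec = np.fft.rfft(env - np.mean(env))
            freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
            in_band = (freqs >= mod_band[0]) & (freqs <= mod_band[1])
            band_power = 2.0 * np.sum(np.abs(spec[in_band]) ** 2) / len(env) ** 2
            return band_power / np.mean(env) ** 2

        def snr_env_db(noisy_speech, noise_alone, fs):
            p_mix = envelope_band_power(noisy_speech, fs)
            p_noise = envelope_band_power(noise_alone, fs)
            return 10.0 * np.log10(max(p_mix - p_noise, 1e-6) / p_noise)

        # Hypothetical usage with 1 s stand-in signals at 16 kHz.
        fs = 16000
        t = np.arange(fs) / fs
        speech_like = (1.0 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)  # 4 Hz modulation
        noise = np.random.randn(fs)
        print(snr_env_db(speech_like + noise, noise, fs))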

  12. Impact of Placement Type on the Development of Clinical Competency in Speech-Language Pathology Students

    Science.gov (United States)

    Sheepway, Lyndal; Lincoln, Michelle; McAllister, Sue

    2014-01-01

    Background: Speech-language pathology students gain experience and clinical competency through clinical education placements. However, currently little empirical information exists regarding how competency develops. Existing research about the effectiveness of placement types and models in developing competency is generally descriptive and based…

  13. Differential modulation of auditory responses to attended and unattended speech in different listening conditions.

    Science.gov (United States)

    Kong, Ying-Yee; Mullangi, Ala; Ding, Nai

    2014-10-01

    This study investigates how top-down attention modulates neural tracking of the speech envelope in different listening conditions. In the quiet conditions, a single speech stream was presented and the subjects paid attention to the speech stream (active listening) or watched a silent movie instead (passive listening). In the competing speaker (CS) conditions, two speakers of opposite genders were presented diotically. Ongoing electroencephalographic (EEG) responses were measured in each condition and cross-correlated with the speech envelope of each speaker at different time lags. In quiet, active and passive listening resulted in similar neural responses to the speech envelope. In the CS conditions, however, the shape of the cross-correlation function was remarkably different between the attended and unattended speech. The cross-correlation with the attended speech showed stronger N1 and P2 responses but a weaker P1 response compared to the cross-correlation with the unattended speech. Furthermore, the N1 response to the attended speech in the CS condition was enhanced and delayed compared with the active listening condition in quiet, while the P2 response to the unattended speaker in the CS condition was attenuated compared with the passive listening in quiet. Taken together, these results demonstrate that top-down attention differentially modulates envelope-tracking neural activity at different time lags and suggest that top-down attention can both enhance the neural responses to the attended sound stream and suppress the responses to the unattended sound stream. Copyright © 2014 Elsevier B.V. All rights reserved.
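
    The core analysis described above, cross-correlating ongoing EEG with the speech envelope at a range of time lags, can be sketched as follows (numpy and scipy assumed). The preprocessing, channel selection, and statistics of the actual study are not reproduced, and the signals below are stand-ins generated at an assumed common sampling rate.

        # Minimal sketch: correlate one EEG channel with the speech envelope over a range of lags.
        import numpy as np
        from scipy.signal import hilbert

        def lagged_xcorr(eeg, envelope, fs, max_lag_ms=500):
            """Pearson correlation between envelope and EEG for envelope-to-EEG lags 0..max."""
            max_lag = int(fs * max_lag_ms / 1000)
            lags_ms, corrs = [], []
            for lag in range(0, max_lag + 1):
                e = envelope[:len(envelope) - lag]
                g = eeg[lag:lag + len(e)]
                corrs.append(np.corrcoef(e, g)[0, 1])
                lags_ms.append(1000 * lag / fs)
            return np.array(lags_ms), np.array(corrs)

        # Hypothetical usage at a 64 Hz common sampling rate (1 minute of data).
        fs = 64
        rng = np.random.default_rng(2)
        envelope = np.abs(hilbert(rng.normal(size=fs * 60)))              # stand-in speech envelope
        eeg = np.roll(envelope, 10) + 5 * rng.normal(size=len(envelope))  # envelope-tracking EEG
        lags, corrs = lagged_xcorr(eeg, envelope, fs)
        print("peak correlation at", lags[np.argmax(corrs)], "ms")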

  14. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The

  15. Research Paper: Production of A Protocol on Early Intervention for Speech and Language Delays in Early Childhood: A Novice Experience in Iran

    Directory of Open Access Journals (Sweden)

    Roshanak Vameghi

    2016-01-01

    Results The results of this study are presented as 7 intervention packages, covering the following domains of disorders: prelingual and lingual speech and language, hearing impairment, speech sound, dysphagia, stuttering, and dysarthria. Conclusion Most studies have confirmed the effectiveness of and need for early interventions for children with speech and language impairment. However, most do not explain the details of these interventions. Before the present study, no systematic and evidence-based protocol existed for early intervention in childhood speech and language impairments in Iran, and due to language differences, as well as possible differences in the speech and language developmental process of children of different communities, making direct use of non-Persian references was not possible or effective. Thus, there was a clear demand for the production of such a protocol.

  16. A Development of a System Enables Character Input and PC Operation via Voice for a Physically Disabled Person with a Speech Impediment

    Science.gov (United States)

    Tanioka, Toshimasa; Egashira, Hiroyuki; Takata, Mayumi; Okazaki, Yasuhisa; Watanabe, Kenzi; Kondo, Hiroki

    We have designed and implemented a PC operation support system, operated via voice, for a physically disabled person with a speech impediment. Voice operation is an effective method for a physically disabled person with involuntary movement of the limbs and the head. We have applied a commercial speech recognition engine to develop our system for practical purposes. Adoption of a commercial engine reduces development cost and will help make our system useful to other people with speech impediments. We have customized the commercial speech recognition engine so that it can recognize the utterances of a person with a speech impediment. We have restricted the words that the recognition engine recognizes and separated target words from words with similar pronunciations to avoid misrecognition. The huge number of words registered in commercial speech recognition engines causes frequent misrecognition of speech-impaired users' utterances, because their utterances are not clear and are unstable. We have solved this problem by narrowing the input choices down to a small number and also by registering ambiguous pronunciations in addition to the original ones. To realize full character input and full PC operation with a small number of words, we have designed multiple input modes with categorized dictionaries and have introduced two-step input in each mode except numeral input, enabling correct operation with a small number of words. The system we have developed is at a practical level. The first author of this paper is physically disabled with a speech impediment. He has been able not only to input characters into a PC but also to operate the Windows system smoothly by using this system. He uses this system in his daily life. This paper was written by him with this system. At present, the speech recognition is customized to him. It is, however, possible to customize it for other users by changing words and registering new pronunciations according to each user's utterances.

  17. Speech Pathology in Ancient India--A Review of Sanskrit Literature.

    Science.gov (United States)

    Savithri, S. R.

    1987-01-01

    The paper is a review of ancient Sanskrit literature for information on the origin and development of speech and language, speech production, normality of speech and language, and disorders of speech and language and their treatment. (DB)

  18. HMM Adaptation for child speech synthesis

    CSIR Research Space (South Africa)

    Govender, Avashna

    2015-09-01

    Full Text Available Hidden Markov Model (HMM)-based synthesis in combination with speaker adaptation has proven to be an approach that is well-suited for child speech synthesis. This paper describes the development and evaluation of different HMM-based child speech...

  19. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg
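
    The adaptive threshold measurements mentioned above track the signal-to-noise ratio at which roughly half of the speech material is understood. The following sketch is not the OLSA procedure itself, just a generic 1-up/1-down staircase run against a simulated listener whose psychometric function (threshold and slope) is assumed for illustration.

    ```python
    # Illustrative 1-up/1-down adaptive staircase converging on the SNR at
    # which about 50% of sentences are understood, using a simulated listener.
    import math
    import random

    def simulated_listener(snr_db, srt_db=-6.0, slope_per_db=1.0):
        """Return True for a correct response with probability given by a
        logistic psychometric function centred on srt_db (assumed values)."""
        p_correct = 1.0 / (1.0 + math.exp(-slope_per_db * (snr_db - srt_db)))
        return random.random() < p_correct

    def run_staircase(start_snr=10.0, step_db=2.0, trials=40):
        snr, reversals, last_correct = start_snr, [], None
        for _ in range(trials):
            correct = simulated_listener(snr)
            if last_correct is not None and correct != last_correct:
                reversals.append(snr)                # track direction changes
            snr += -step_db if correct else step_db  # harder after correct, easier after error
            last_correct = correct
        tail = reversals[-6:] or [snr]
        return sum(tail) / len(tail)                 # mean of late reversals ~ 50% point

    random.seed(1)
    print(f"Estimated speech reception threshold: {run_staircase():.1f} dB SNR")
    ```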

  20. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term for these techniques, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the
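
    As a concrete illustration of the waveform-coding family mentioned above, the sketch below applies mu-law companding and 8-bit quantization, the principle behind 64 kbit/s G.711 telephone speech; a parametric coder would instead transmit analysis parameters such as LPC coefficients. The signal here is synthetic and the code is only a minimal demonstration.

    ```python
    # Mu-law companding plus 8-bit quantization: a minimal waveform-coding example.
    import numpy as np

    MU = 255.0

    def mulaw_encode(x: np.ndarray) -> np.ndarray:
        """Compress samples in [-1, 1] and quantize to 8-bit codes (0..255)."""
        compressed = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
        return np.round((compressed + 1.0) / 2.0 * 255.0).astype(np.uint8)

    def mulaw_decode(codes: np.ndarray) -> np.ndarray:
        """Invert the quantization and the mu-law compression."""
        compressed = codes.astype(np.float64) / 255.0 * 2.0 - 1.0
        return np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(MU)) / MU

    # Round trip on a synthetic "speech-like" signal: 20 ms at 8 kHz.
    t = np.linspace(0, 0.02, 160)
    x = 0.6 * np.sin(2 * np.pi * 200 * t) + 0.1 * np.sin(2 * np.pi * 1200 * t)
    err = x - mulaw_decode(mulaw_encode(x))
    print(f"max reconstruction error: {np.max(np.abs(err)):.4f}")
    ```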

  1. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    Full Text Available The theory of speech acts, which clarifies what people do when they speak, is not about individual words or sentences that form the basic elements of human communication, but rather about particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking. Many things can be done by speaking.  Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches from El-Sadat and El-Sisi, belonging to different periods were analyzed to find out whether there were differences within this genre in the same culture or not. The study showed that there was a very small difference between these two speeches which were analyzed according to Searle’s theory of speech acts. In El Sadat’s speech, commissives came to occupy the first place. Meanwhile, in El–Sisi’s speech, assertives occupied the first place. Within the speeches of one culture, we can find that the differences depended on the circumstances that surrounded the elections of the Presidents at the time. Speech acts were tools they used to convey what they wanted and to obtain support from their audiences.

  2. Speech Problems

    Science.gov (United States)

    ... a person's ability to speak clearly. Some common speech and language disorders: stuttering is a problem that ...

  3. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  4. Effective Oral Language Development Strategies for Elementary Teachers

    Science.gov (United States)

    Kohler, Karen L.

    2016-01-01

    This action research study explored first and second grade classroom teachers' knowledge of oral language development and interventions for students at-risk of an oral language delay. This was accomplished through collaboration between a speech-language pathologist and classroom teachers. The data was aligned with assessments, the Response to…

  5. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems when there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  6. Didactic speech synthesizer – acoustic module, formants model

    OpenAIRE

    Teixeira, João Paulo; Fernandes, Anildo

    2013-01-01

    Text-to-speech synthesis is the main subject treated in this work. It will be presented the constitution of a generic text-to-speech system conversion, explained the functions of the various modules and described the development techniques using the formants model. The development of a didactic formant synthesiser under Matlab environment will also be described. This didactic synthesiser is intended for a didactic understanding of the formant model of speech production.
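
    The formant model described above can be sketched in a few lines: a glottal impulse train is passed through a cascade of second-order resonators placed at the formant frequencies of a vowel. The original didactic tool was written in Matlab; the version below is a minimal Python equivalent, and the formant frequencies and bandwidths are textbook-style approximations rather than values from the paper.

    ```python
    # Minimal formant synthesiser: impulse train -> cascade of formant resonators.
    import numpy as np
    from scipy.signal import lfilter

    FS = 16000  # sample rate in Hz

    def resonator(freq_hz: float, bw_hz: float):
        """Second-order IIR resonator (one formant) as (b, a) filter coefficients."""
        r = np.exp(-np.pi * bw_hz / FS)
        theta = 2 * np.pi * freq_hz / FS
        a = np.array([1.0, -2.0 * r * np.cos(theta), r * r])
        b = np.array([1.0 - r])               # rough gain normalisation
        return b, a

    def synthesize_vowel(f0=120.0, formants=((730, 90), (1090, 110), (2440, 170)), dur=0.4):
        """Glottal impulse train at pitch f0 filtered through formant resonators."""
        n = int(dur * FS)
        source = np.zeros(n)
        source[::int(FS / f0)] = 1.0          # one pulse per pitch period
        out = source
        for freq, bw in formants:             # cascade the resonators
            b, a = resonator(freq, bw)
            out = lfilter(b, a, out)
        return out / np.max(np.abs(out))      # normalise amplitude

    vowel_a = synthesize_vowel()              # formants roughly those of /a/
    print(vowel_a.shape)
    ```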

  7. Vocal effort modulates the motor planning of short speech structures

    Science.gov (United States)

    Taitz, Alan; Shalom, Diego E.; Trevisan, Marcos A.

    2018-05-01

    Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and transitions are proxy indicators of the vocal effort needed to produce them. These results support that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.

  8. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  9. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/-B/aa or/-B/az). The items started with an easy-to-speechread/B/or difficult-to-speechread/G/onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same-as opposed to different-responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g.,/-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz-as opposed to az- responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same

  10. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results

  11. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.
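
    The variance-explained figures above come from multiple regression. The sketch below shows the same kind of analysis (predicting a speech-imitation score from working memory, education, and singing performance, then reporting R²) on randomly generated placeholder data, not the study's measurements.

    ```python
    # Multiple regression sketch with ordinary least squares; data are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 41                                   # the study tested 41 singers
    working_memory = rng.normal(size=n)
    education = rng.normal(size=n)
    singing = rng.normal(size=n)
    # Placeholder outcome with some dependence on the predictors plus noise.
    imitation = (0.5 * working_memory + 0.3 * education + 0.6 * singing
                 + rng.normal(scale=0.5, size=n))

    X = np.column_stack([np.ones(n), working_memory, education, singing])  # add intercept
    coef, *_ = np.linalg.lstsq(X, imitation, rcond=None)

    fitted = X @ coef
    ss_res = np.sum((imitation - fitted) ** 2)
    ss_tot = np.sum((imitation - imitation.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot        # proportion of variance explained
    print(f"R^2 = {r_squared:.2f}, coefficients = {np.round(coef, 2)}")
    ```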

  12. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus eChristiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64 % of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66 % of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer’s sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. 3. The ability to sing improves the memory span of the auditory short term memory.

  13. Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2013-01-01

    This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation point (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented in an audiovisual instead of an auditory-only manner. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimuli identification in both silence and noise. The implication of the results is discussed in terms of models for speech understanding. PMID:23801980

  14. Automated delay estimation at signalized intersections : phase I concept and algorithm development.

    Science.gov (United States)

    2011-07-01

    Currently there are several methods to measure the performance of surface streets, but their capabilities in dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation ...

  15. 12p13.33 microdeletion including ELKS/ERC1, a new locus associated with childhood apraxia of speech.

    Science.gov (United States)

    Thevenon, Julien; Callier, Patrick; Andrieux, Joris; Delobel, Bruno; David, Albert; Sukno, Sylvie; Minot, Delphine; Mosca Anne, Laure; Marle, Nathalie; Sanlaville, Damien; Bonnet, Marlène; Masurel-Paulet, Alice; Levy, Fabienne; Gaunt, Lorraine; Farrell, Sandra; Le Caignec, Cédric; Toutain, Annick; Carmignac, Virginie; Mugneret, Francine; Clayton-Smith, Jill; Thauvin-Robinet, Christel; Faivre, Laurence

    2013-01-01

    Speech sound disorders are heterogeneous conditions, and sporadic and familial cases have been described. However, monogenic inheritance explains only a small proportion of such disorders, in particular in cases with childhood apraxia of speech (CAS). Deletions of Speech delay was found in all patients, which could be defined as CAS when patients had been evaluated by a speech therapist (5/9 patients). Intellectual deficiency was found in 5/9 patients only, and often associated with psychiatric manifestations of various severity. Two such deletions were inherited from an apparently healthy parent, but reevaluation revealed abnormal speech production at least in childhood, suggesting variable expressivity. The ELKS/ERC1 gene, which encodes for a synaptic factor, is found in the smallest region of overlap. These results reinforce the hypothesis that deletions of the 12p13.33 locus may be responsible for variable phenotypes including CAS associated with neurobehavioural troubles and that the presence of CAS justifies a genetic work-up.

  16. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  17. Speech production accuracy and variability in young cochlear implant recipients: comparisons with typically developing age-peers.

    Science.gov (United States)

    Ertmer, David J; Goffman, Lisa

    2011-02-01

    The speech production accuracy and variability scores of 6 young cochlear implant (CI) recipients with 2 years of device experience were compared with those of typically developing (TD) age-peers. Words from the First Words Speech Test (FWST; Ertmer, 1999) were imitated 3 times to assess the accuracy and variability of initial consonants, vowels, and words. The initial consonants in the 4 sets of the FWST followed a typical order of development. The TD group produced targets with high accuracy and low variability. Their scores across FWST sets reflected the expected order of development. The CI group produced most targets less accurately and with more variability than the TD children. Relatively high accuracy for the consonants of Sets 1 and 2 indicated that these phonemes were acquired early and in a typical developmental order. A trend toward greater accuracy for Set 4 as compared with Set 3 suggested that later-emerging consonants were not acquired in the expected order. Variability was greatest for later-emerging initial consonants and whole words. Although considerable speech production proficiency was evident, age-level performance was not attained after 2 years of CI experience. Factors that might influence the order of consonant acquisition are discussed.

  18. The effects of perceived USB-delay for sensor and embedded system development.

    Science.gov (United States)

    Du, J; Kade, D; Gerdtman, C; Ozcan, O; Linden, M

    2016-08-01

    Perceiving delay in computer input devices is a problem which becomes even more pronounced when such devices are used in healthcare applications and/or in small, embedded systems. Therefore, the amount of delay found acceptable when using computer input devices was investigated in this paper. A device was developed to perform a benchmark test for the perception of delay. The delay can be set from 0 to 999 milliseconds (ms) between a receiving computer and an available USB-device. The USB-device can be a mouse, a keyboard or some other type of USB-connected input device. Feedback from user tests with 36 people forms the basis for the determination of time limitations for the USB data processing in microprocessors and embedded systems without users noticing the delay. For this paper, tests were performed with a personal computer and a common computer mouse, testing the perception of delays between 0 and 500 ms. The results of our user tests show that perceived delays up to 150 ms were acceptable and delays larger than 300 ms were not acceptable at all.
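
    A software-only approximation of such a benchmark is easy to sketch: hold every incoming event in a queue for a configurable time before delivering it, and let testers judge when the lag becomes noticeable. The event handling below is simulated; the study itself used a hardware device inserted between the USB input device and the computer.

    ```python
    # Configurable hold-back of input events, as a stand-in for the hardware benchmark.
    import time
    from collections import deque

    class DelayedForwarder:
        def __init__(self, delay_ms: int):
            self.delay_s = delay_ms / 1000.0
            self.queue = deque()             # (release_time, event) pairs

        def receive(self, event: str) -> None:
            """Called when an event arrives from the input device."""
            self.queue.append((time.monotonic() + self.delay_s, event))

        def poll(self) -> list:
            """Deliver every event whose hold-back time has elapsed."""
            now = time.monotonic()
            released = []
            while self.queue and self.queue[0][0] <= now:
                released.append(self.queue.popleft()[1])
            return released

    forwarder = DelayedForwarder(delay_ms=150)   # 150 ms: around the acceptability limit found
    forwarder.receive("mouse-move")
    time.sleep(0.2)
    print(forwarder.poll())                      # event is released only after the delay
    ```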

  19. Delay Estimator and Improved Proportionate Multi-Delay Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    E. Verteletskaya

    2012-04-01

    Full Text Available This paper pertains to speech and acoustic signal processing, and particularly to the determination of echo path delay and the operation of echo cancellers. To cancel long echoes, the number of weights in a conventional adaptive filter must be large. The length of the adaptive filter will directly affect both the degree of accuracy and the convergence speed of the adaptation process. We present a new adaptive structure which is capable of dealing with multiple dispersive echo paths. An adaptive filter according to the present invention includes means for storing an impulse response in a memory, the impulse response being indicative of the characteristics of a transmission line. It also includes a delay estimator for detecting ranges of samples within the impulse response having a relatively large distribution of echo energy. These ranges of samples are indicative of echoes on the transmission line. An adaptive filter has a plurality of weighted taps, each of the weighted taps having an associated tap weight value. A tap allocation/control circuit establishes the tap weight values in response to said detecting means so that only taps within the regions of relatively large distributions of echo energy are turned on. Thus, the convergence speed and the estimation accuracy of the adaptation process can be improved.
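
    The core idea, detecting where the echo energy lies and adapting only the taps in those regions, can be sketched as follows. The echo path, the block-energy detection rule, and the NLMS update below are illustrative choices, not the patent-style structure's exact circuit.

    ```python
    # Tap allocation for echo cancellation: estimate the echo-path impulse response
    # coarsely, find high-energy blocks, and run NLMS adaptation only on those taps.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 512                                       # full echo-path length in taps
    h = np.zeros(N)
    h[40:56] = rng.normal(scale=0.5, size=16)     # first dispersive echo region
    h[300:316] = rng.normal(scale=0.3, size=16)   # second, later echo region

    x = rng.normal(size=8000)                     # far-end (reference) signal
    d = np.convolve(x, h)[: len(x)] + rng.normal(scale=0.01, size=len(x))  # mic signal

    # "Delay estimator": cross-correlate mic and reference to get a coarse impulse
    # response, then mark blocks whose energy is relatively large.
    block = 32
    coarse = np.correlate(d, x, mode="full")[len(x) - 1 : len(x) - 1 + N]
    energy = np.array([np.sum(coarse[i:i + block] ** 2) for i in range(0, N, block)])
    active = np.repeat(energy > 0.1 * energy.max(), block)[:N]

    w = np.zeros(N)
    mu, eps = 0.5, 1e-6
    for n in range(N, len(x)):
        x_vec = x[n - N + 1 : n + 1][::-1]        # most recent N reference samples
        e = d[n] - w @ x_vec                      # residual echo
        update = mu * e * x_vec / (x_vec @ x_vec + eps)
        w += np.where(active, update, 0.0)        # adapt only the allocated taps

    print(f"misalignment: {np.linalg.norm(h - w) / np.linalg.norm(h):.3f}")
    ```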

  20. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  1. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  2. The role of stress and accent in the perception of speech rhythm

    NARCIS (Netherlands)

    Grover, C.N.; Terken, J.M.B.

    1995-01-01

    Modelling rhythmic characteristics of speech is expected to contribute to the acceptability of synthetic speech. However, before rules for the control of speech rhythm in synthetic speech can be developed, we need to know which properties of speech give rise to the perception of speech rhythm. An

  3. Phonemic Characteristics of Apraxia of Speech Resulting from Subcortical Hemorrhage

    Science.gov (United States)

    Peach, Richard K.; Tonkovich, John D.

    2004-01-01

    Reports describing subcortical apraxia of speech (AOS) have received little consideration in the development of recent speech processing models because the speech characteristics of patients with this diagnosis have not been described precisely. We describe a case of AOS with aphasia secondary to basal ganglia hemorrhage. Speech-language symptoms…

  4. Towards the Development of a Mexican Speech-to-Sign-Language Translator for the Deaf Community

    Directory of Open Access Journals (Sweden)

    Santiago-Omar Caballero-Morales

    2012-03-01

    Full Text Available A significant portion of the Mexican population is deaf. This disorder restricts their social interaction with people who do not have the disorder, and vice versa. In this paper we present our advances towards the development of a Mexican Speech-to-Sign-Language translator to assist hearing people in interacting with deaf people. The proposed design methodology considers limited resources for (1) the development of the Mexican Automatic Speech Recognition (ASR) system, which is the main module in the translator, and (2) the Mexican Sign Language (MSL) vocabulary available to represent the decoded speech. Speech-to-MSL translation was accomplished with an accuracy level over 97% for test speakers different from those selected for ASR training.
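
    Downstream of the ASR module, the translator must map decoded words onto MSL signs. A minimal sketch of that mapping stage is shown below; the gloss inventory and the fingerspelling fallback are hypothetical placeholders, not the authors' actual MSL vocabulary.

    ```python
    # Hypothetical word-to-sign mapping stage for a speech-to-sign translator.
    MSL_GLOSSES = {
        "hola": "HOLA.sign",
        "gracias": "GRACIAS.sign",
        "ayuda": "AYUDA.sign",
        "necesito": "NECESITAR.sign",
    }

    def to_sign_sequence(decoded_text: str) -> list:
        """Turn an ASR-decoded utterance into a sequence of sign animations to play."""
        signs = []
        for word in decoded_text.lower().split():
            if word in MSL_GLOSSES:
                signs.append(MSL_GLOSSES[word])
            else:
                # Fall back to fingerspelling unknown words letter by letter.
                signs.extend(f"LETRA_{ch.upper()}.sign" for ch in word if ch.isalpha())
        return signs

    print(to_sign_sequence("necesito ayuda"))   # -> ['NECESITAR.sign', 'AYUDA.sign']
    ```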

  5. Study of accent-based music speech protocol development for improving voice problems in stroke patients with mixed dysarthria.

    Science.gov (United States)

    Kim, Soo Ji; Jo, Uiri

    2013-01-01

    Based on the anatomical and functional commonality between singing and speech, various types of musical elements have been employed in music therapy research for speech rehabilitation. This study aimed to develop an accent-based music speech protocol to address voice problems of stroke patients with mixed dysarthria. Subjects were 6 stroke patients with mixed dysarthria, who received individual music therapy sessions. Each session was conducted for 30 minutes, and 12 sessions including pre- and post-test were administered for each patient. For examining the protocol efficacy, the measures of maximum phonation time (MPT), fundamental frequency (F0), average intensity (dB), jitter, shimmer, noise to harmonics ratio (NHR), and diadochokinesis (DDK) were compared between pre- and post-test and analyzed with a paired sample t-test. The results showed that the measures of MPT, F0, dB, and sequential motion rates (SMR) were significantly increased after administering the protocol. Also, there were statistically significant differences in the measures of shimmer and alternating motion rates (AMR) of the syllable /kʌ/ between pre- and post-test. The results indicated that the accent-based music speech protocol may improve speech motor coordination including respiration, phonation, articulation, resonance, and prosody of patients with dysarthria. This suggests the possibility of utilizing the music speech protocol to maximize immediate treatment effects in the course of a long-term treatment for patients with dysarthria.
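
    The pre/post comparisons above rely on paired-sample t-tests. For illustration, the sketch below runs the same comparison for one measure (maximum phonation time) on made-up values for six patients; the numbers are placeholders, not the study's data.

    ```python
    # Paired-sample t-test on pre- vs post-test measures (placeholder data).
    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical pre/post maximum phonation times in seconds (n = 6 patients).
    mpt_pre = np.array([6.1, 7.4, 5.2, 8.0, 6.8, 5.9])
    mpt_post = np.array([8.3, 9.0, 6.5, 9.6, 8.1, 7.2])

    t_stat, p_value = ttest_rel(mpt_post, mpt_pre)
    print(f"mean change = {np.mean(mpt_post - mpt_pre):.2f} s, "
          f"t = {t_stat:.2f}, p = {p_value:.3f}")
    ```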

  6. HAMLET treatment delays bladder cancer development.

    Science.gov (United States)

    Mossberg, Ann-Kristin; Hou, Yuchuan; Svensson, Majlis; Holmqvist, Bo; Svanborg, Catharina

    2010-04-01

    HAMLET is a protein-lipid complex that kills different types of cancer cells. Recently we observed a rapid reduction in human bladder cancer size after intravesical HAMLET treatment. In this study we evaluated the therapeutic effect of HAMLET in the mouse MB49 bladder carcinoma model. Bladder tumors were established by intravesical injection of MB49 cells into poly L-lysine treated bladders of C57BL/6 mice. Treatment groups received repeat intravesical HAMLET instillations and controls received alpha-lactalbumin or phosphate buffer. Effects of HAMLET on tumor size and putative apoptotic effects were analyzed in bladder tissue sections. Whole body imaging was used to study HAMLET distribution in tumor bearing mice compared to healthy bladder tissue. HAMLET caused a dose dependent decrease in MB49 cell viability in vitro. Five intravesical HAMLET instillations significantly decreased tumor size and delayed development in vivo compared to controls. TUNEL staining revealed selective apoptotic effects in tumor areas but not in adjacent healthy bladder tissue. On in vivo imaging Alexa-HAMLET was retained for more than 24 hours in the bladder of tumor bearing mice but not in tumor-free bladders or in tumor bearing mice that received Alexa-alpha-lactalbumin. Results show that HAMLET is active as a tumoricidal agent and suggest that topical HAMLET administration may delay bladder cancer development. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  7. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based...... on this model. The basic model used in this thesis is the harmonic model which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...
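
    One way to build a filter from the harmonic model is to fit sinusoids at multiples of the pitch to a noisy frame and keep only that harmonic part. The sketch below does this by least squares on a synthetic frame with the pitch assumed known; pitch estimation and the thesis's extensions for inharmonicity and non-stationarity are omitted.

    ```python
    # Harmonic-model enhancement of one voiced frame by least-squares fitting.
    import numpy as np

    fs, f0, n_harm = 8000, 150.0, 10
    t = np.arange(int(0.032 * fs)) / fs               # one 32 ms frame

    # Synthetic voiced frame: harmonics of f0 plus white noise.
    clean = sum((0.8 / k) * np.cos(2 * np.pi * k * f0 * t) for k in range(1, n_harm + 1))
    noisy = clean + np.random.default_rng(0).normal(scale=0.4, size=t.size)

    # Design matrix with a cosine and a sine column per harmonic.
    cols = []
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
    Z = np.column_stack(cols)

    amps, *_ = np.linalg.lstsq(Z, noisy, rcond=None)   # least-squares harmonic amplitudes
    enhanced = Z @ amps                                # resynthesized voiced component

    def snr(ref, sig):
        return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - sig) ** 2))

    print(f"SNR before: {snr(clean, noisy):.1f} dB, after: {snr(clean, enhanced):.1f} dB")
    ```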

  8. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  9. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    Science.gov (United States)

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
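
    The stimulus manipulation described above can be sketched with a short-time Fourier transform: compute local SNRs per time-frequency bin (here with oracle access to the separate speech and noise signals), zero every bin below the criterion, and resynthesize. The signals below are synthetic placeholders.

    ```python
    # Discarding time-frequency components whose local SNR falls below a criterion.
    import numpy as np
    from scipy.signal import stft, istft

    fs = 16000
    rng = np.random.default_rng(0)
    t = np.arange(fs) / fs
    speech = 0.5 * np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
    noise = rng.normal(scale=0.3, size=fs)

    _, _, S = stft(speech, fs, nperseg=512)
    _, _, N = stft(noise, fs, nperseg=512)

    criterion_db = 0.0                                 # discard bins below 0 dB local SNR
    local_snr_db = 10 * np.log10((np.abs(S) ** 2) / (np.abs(N) ** 2 + 1e-12) + 1e-12)
    S_kept = np.where(local_snr_db >= criterion_db, S, 0.0)

    _, speech_reduced = istft(S_kept, fs, nperseg=512)  # resynthesized target signal
    kept = np.mean(local_snr_db >= criterion_db) * 100
    print(f"{kept:.1f}% of time-frequency components kept, "
          f"{speech_reduced.size} samples resynthesized")
    ```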

  10. Reading Skills of Students with Speech Sound Disorders at Three Stages of Literacy Development

    Science.gov (United States)

    Skebo, Crysten M.; Lewis, Barbara A.; Freebairn, Lisa A.; Tag, Jessica; Ciesla, Allison Avrich; Stein, Catherine M.

    2013-01-01

    Purpose: The relationship between phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with…

  11. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the movie Up. Central to the analysis is the form of dialogue in Up that contains speech functions and speech roles, i.e., statement, offer, question, command, giving, and demanding. A total of 269 dialogues spoken by the actors were interpreted, and the use of speech functions and speech roles was identified.

  12. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    Science.gov (United States)

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note.

  13. Speech detection in noise and spatial unmasking in children with simultaneous versus sequential bilateral cochlear implants.

    Science.gov (United States)

    Chadha, Neil K; Papsin, Blake C; Jiwani, Salima; Gordon, Karen A

    2011-09-01

    To measure speech detection in noise performance for children with bilateral cochlear implants (BiCI), to compare performance in children with simultaneous implant versus those with sequential implant, and to compare performance to normal-hearing children. Prospective cohort study. Tertiary academic pediatric center. Children with early-onset bilateral deafness and 2-year BiCI experience, comprising the "sequential" group (>2 yr interimplantation delay, n = 12) and "simultaneous group" (no interimplantation delay, n = 10) and normal-hearing controls (n = 8). Thresholds to speech detection (at 0-degree azimuth) were measured with noise at 0-degree azimuth or ± 90-degree azimuth. Spatial unmasking (SU) as the noise condition changed from 0-degree azimuth to ± 90-degree azimuth and binaural summation advantage (BSA) of 2 over 1 CI. Speech detection in noise was significantly poorer than controls for both BiCI groups (p simultaneous group approached levels found in normal controls (7.2 ± 0.6 versus 8.6 ± 0.6 dB, p > 0.05) and was significantly better than that in the sequential group (3.9 ± 0.4 dB, p simultaneous group but, in the sequential group, was significantly better when noise was moved to the second rather than the first implanted ear (4.8 ± 0.5 versus 3.0 ± 0.4 dB, p sequential group's second rather than first CI. Children with simultaneously implanted BiCI demonstrated an advantage over children with sequential implant by using spatial cues to improve speech detection in noise.

  14. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300 some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  15. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  16. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  17. Dysfluencies in the speech of adults with intellectual disabilities and reported speech difficulties.

    Science.gov (United States)

    Coppens-Hofman, Marjolein C; Terband, Hayo R; Maassen, Ben A M; van Schrojenstein Lantman-De Valk, Henny M J; van Zaalen-op't Hof, Yvonne; Snik, Ad F M

    2013-01-01

    In individuals with an intellectual disability, speech dysfluencies are more common than in the general population. In clinical practice, these fluency disorders are generally diagnosed and treated as stuttering rather than cluttering. To characterise the type of dysfluencies in adults with intellectual disabilities and reported speech difficulties with an emphasis on manifestations of stuttering and cluttering, which distinction is to help optimise treatment aimed at improving fluency and intelligibility. The dysfluencies in the spontaneous speech of 28 adults (18-40 years; 16 men) with mild and moderate intellectual disabilities (IQs 40-70), who were characterised as poorly intelligible by their caregivers, were analysed using the speech norms for typically developing adults and children. The speakers were subsequently assigned to different diagnostic categories by relating their resulting dysfluency profiles to mean articulatory rate and articulatory rate variability. Twenty-two (75%) of the participants showed clinically significant dysfluencies, of which 21% were classified as cluttering, 29% as cluttering-stuttering and 25% as clear cluttering at normal articulatory rate. The characteristic pattern of stuttering did not occur. The dysfluencies in the speech of adults with intellectual disabilities and poor intelligibility show patterns that are specific for this population. Together, the results suggest that in this specific group of dysfluent speakers interventions should be aimed at cluttering rather than stuttering. The reader will be able to (1) describe patterns of dysfluencies in the speech of adults with intellectual disabilities that are specific for this group of people, (2) explain that a high rate of dysfluencies in speech is potentially a major determiner of poor intelligibility in adults with ID and (3) describe suggestions for intervention focusing on cluttering rather than stuttering in dysfluent speakers with ID. Copyright © 2013 Elsevier Inc

  18. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  19. Acquisition of speech rhythm in first language.

    Science.gov (United States)

    Polyanskaya, Leona; Ordin, Mikhail

    2015-09-01

    Analysis of English rhythm in speech produced by children and adults revealed that speech rhythm becomes increasingly more stress-timed as language acquisition progresses. Children reach the adult-like target by 11 to 12 years. The employed speech elicitation paradigm ensured that the sentences produced by adults and children at different ages were comparable in terms of lexical content, segmental composition, and phonotactic complexity. Detected differences between child and adult rhythm and between rhythm in child speech at various ages cannot be attributed to acquisition of phonotactic language features or vocabulary, and indicate the development of language-specific phonetic timing in the course of acquisition.

  20. Syntactic error modeling and scoring normalization in speech recognition: Error modeling and scoring normalization in the speech recognition task for adult literacy training

    Science.gov (United States)

    Olorenshaw, Lex; Trawick, David

    1991-01-01

    The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was provided. In continuous speech, the system achieved above 80 pct. correct acceptance of words, while correctly rejecting over 80 pct. of incorrectly pronounced words.
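
    The abstract describes accepting or rejecting words by combining scoring normalization with a decision threshold. As a rough, hedged illustration of that general idea (not the paper's actual system), the Python sketch below normalizes a per-word acoustic score against a background-model score and word duration, then thresholds the result; all scores, field names and the threshold value are invented.

    from dataclasses import dataclass

    @dataclass
    class WordScore:
        word: str
        target_loglik: float      # log-likelihood of the audio under the expected-word model
        background_loglik: float  # log-likelihood under a generic "filler" model
        n_frames: int             # number of acoustic frames in the word

    def normalized_score(ws: WordScore) -> float:
        """Per-frame log-likelihood ratio; larger means the audio matches the expected word better."""
        return (ws.target_loglik - ws.background_loglik) / ws.n_frames

    def accept(ws: WordScore, threshold: float = -0.5) -> bool:
        """Accept the word as correctly pronounced if its normalized score clears the threshold."""
        return normalized_score(ws) >= threshold

    if __name__ == "__main__":
        scores = [
            WordScore("reading", target_loglik=-450.0, background_loglik=-380.0, n_frames=95),
            WordScore("library", target_loglik=-300.0, background_loglik=-330.0, n_frames=110),
        ]
        for ws in scores:
            verdict = "accepted" if accept(ws) else "rejected"
            print(f"{ws.word}: {verdict} (score={normalized_score(ws):.2f})")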

  1. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering ... evaluation tools that can help identify and diagnose speech disorders: Denver Articulation Screening Examination Goldman-Fristoe Test of ...

  2. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  3. Talk in Blended-Space Speech Communities: An Exploration of Discursive Practices of a Professional Development Group

    Science.gov (United States)

    Garvin, Tabitha Ann

    2011-01-01

    This study is an exploration of alternative teacher professional development. While using symbolic interactionism for a research lens, it characterizes the discursive practices commonly found in formal, informal, and blended-space speech communities based on the talk within a leadership-development program comprised of five female, church-based…

  4. Dual silent communication system development based on subvocal speech and Raspberry Pi

    Directory of Open Access Journals (Sweden)

    José Daniel Ramírez-Corzo

    2016-09-01

    Additionally, this article describes the implementation of the subvocal speech signal recording system. The average classification accuracy was 72.5%, obtained over 50 words per class (200 signals in total). Finally, it was demonstrated that a silent communication system based on subvocal speech signals can be implemented on the Raspberry Pi.

  5. The Effect of Picture Exchange Communication System and Speech Therapy on Communication Development of 4-8 Years Old Autistic Children

    Directory of Open Access Journals (Sweden)

    Zahra Pour-Ismaili

    2011-01-01

    Full Text Available Objective: This study compares the effects of speech therapy and the picture exchange communication system (PECS) on the communication development of 4-8 year old autistic children. Materials & Methods: This is an experimental comparison study. Ten subjects were assigned to PECS and speech therapy groups using available sampling on the basis of inclusion and exclusion criteria. Both groups were matched according to age and the developmental indices of the Niusha scale. Dependent variables were listening, receptive language, expressive language, cognition, speech and social communication. The intervention was applied to both groups in the same way, delivered in 40-minute sessions three times a week for 3 months. The parameters were evaluated with the Niusha developmental scale before and after the intervention. The results were analyzed clinically and statistically with the Wilcoxon rank-sum and Wilcoxon signed-rank tests. Results: Post-test comparison between the two groups revealed that members of the PECS group made greater progress in listening, receptive language, cognition and social communication skills than the speech therapy group, but these differences were not statistically significant and the rank-sum statistic fell between the critical values. Conclusion: Considering the results, it can be concluded that PECS is an effective strategy for training non-verbal autistic children. Moreover, it can be used as a supplementary teaching method alongside other therapeutic methods such as speech therapy.

  6. Lexical competition in nonnative speech comprehension.

    Science.gov (United States)

    FitzPatrick, Ian; Indefrey, Peter

    2010-06-01

    Electrophysiological studies consistently find N400 effects of semantic incongruity in nonnative (L2) language comprehension. These N400 effects are often delayed compared with native (L1) comprehension, suggesting that semantic integration in one's second language occurs later than in one's first language. In this study, we investigated whether such a delay could be attributed to (1) intralingual lexical competition and/or (2) interlingual lexical competition. We recorded EEG from Dutch-English bilinguals who listened to English (L2) sentences in which the sentence-final word was (a) semantically fitting, (b) semantically incongruent, or semantically incongruent but initially congruent due to sharing initial phonemes with either (c) the most probable sentence completion within the L2 or (d) the L1 translation equivalent of the most probable sentence completion. We found an N400 effect in each of the semantically incongruent conditions. This N400 effect was significantly delayed to L2 words but not to L1 translation equivalents that were initially congruent with the sentence context. Taken together, these findings firstly demonstrate that semantic integration in nonnative listening can start based on word-initial phonemes (i.e., before a single lexical candidate could have been selected based on the input) and secondly suggest that spuriously elicited L1 lexical candidates are not available for semantic integration in L2 speech comprehension.

  7. The NCHLT speech corpus of the South African languages

    CSIR Research Space (South Africa)

    Barnard, E

    2014-05-01

    Full Text Available The NCHLT speech corpus contains wide-band speech from approximately 200 speakers per language, in each of the eleven official languages of South Africa. We describe the design and development processes that were undertaken in order to develop...

  8. Development and validation of a parent-report measure for detection of cognitive delay in infancy.

    Science.gov (United States)

    Schafer, Graham; Genesoni, Lucia; Boden, Greg; Doll, Helen; Jones, Rosamond A K; Gray, Ron; Adams, Eleri; Jefferson, Ros

    2014-12-01

    To develop a brief, parent-completed instrument (ERIC - Early Report by Infant Caregivers) for detection of cognitive delay in 10- to 24-month-olds born preterm, or of low birthweight, or with perinatal complications, and to establish ERIC's diagnostic properties. Scores for ERIC were collected from the parents of 317 children meeting at least one inclusion criterion (preterm birth, low birthweight, or perinatal complications); cognitive delay was defined with reference to the Bayley Scales of Infant and Toddler Development-III cognitive scale. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated, and a truncated ERIC was developed. ERIC shows promise for the detection of cognitive delay in 10- to 24-month-old preterm infants and as a screen for cognitive delay. © 2014 Mac Keith Press.

  9. Home-based Early Intervention on Auditory and Speech Development in Mandarin-speaking Deaf Infants and Toddlers with Chronological Aged 7–24 Months

    Directory of Open Access Journals (Sweden)

    Ying Yang

    2015-01-01

    Conclusions: The data suggest that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and recovery time may be major factors in aural-verbal outcomes for hearing-impaired children. Auditory and speech development in hearing-impaired children may be especially critical during the first year of habilitation after fitting of the auxiliary device.

  10. Delayed embryonic development in the Indian short-nosed fruit bat, Cynopterus sphinx.

    Science.gov (United States)

    Meenakumari, Karukayil J; Krishna, Amitabh

    2005-01-01

    The unusual feature of the breeding cycle of Cynopterus sphinx at Varanasi is the significant variation in gestation length of the two successive pregnancies of the year. The aim of this study was to investigate whether the prolongation of the first pregnancy in C. sphinx is due to delayed embryonic development. The first (winter) pregnancy commences in late October and lasts until late March and has a gestation period of about 150 days. The second (summer) pregnancy commences in April and lasts until the end of July or early August with a gestation period of about 125 days. Changes in the size and weight of uterine cornua during the two successive pregnancies suggest retarded embryonic growth during November and December. Histological analysis during the period of retarded embryonic development in November and December showed a slow gastrulation process. The process of amniogenesis was particularly slow. When the embryos attained the early primitive streak stage, their developmental rate suddenly increased considerably. During the summer pregnancy, on the other hand, the process of gastrulation was much faster and proceeded quickly. A comparison of the pattern of embryonic development for 4 consecutive years consistently showed retarded or delayed embryonic development during November and December. The time of parturition and post-partum oestrus showed only a limited variation from 1 year to another. This suggests that delayed embryonic development in C. sphinx may function to synchronize parturition among females. The period of delayed embryonic development in this species clearly coincides with the period of fat deposition. The significance of this correlation warrants further investigation.

  11. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. No subject had cochlear nerve deficiency on magnetic resonance imaging, and all had used their cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many children with ANSD. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014

  12. Comparison of Tc-99m ECD brain SPECT between patients with delayed development and cerebral palsy

    International Nuclear Information System (INIS)

    Cho, I.; Chun, K.; Won, K.; Lee, H.; Jang, S.; Lee, J.

    2002-01-01

    Purpose: In a previous study, thalamic or cerebellar hypoperfusion was reported in patients with cerebral palsy. This study was performed to evaluate cerebral perfusion abnormalities using Tc-99m ECD brain SPECT in patients with delayed motor development. Methods: Nineteen patients (9 boys, 10 girls, mean age 25.5 months) with delayed development underwent brain SPECT after injection of 185∼370 MBq of Tc-99m ECD. Imaging was obtained between 30 minutes and 1 hour after injection. The patients were divided clinically as follows: patients with delayed development (n=5) and patients with cerebral palsy (n=14), who had delayed development and abnormal movement. The clinical subtypes of cerebral palsy were spastic quadriplegia (n=5), spastic diplegia (n=6) and spastic hemiplegia (n=3). In each group, the decrease in cerebral perfusion was evaluated visually as mild, moderate or severe, and quantification of cerebral perfusion after Lassen's correction was also obtained. Results: SPECT findings showed normal or mildly decreased thalamic perfusion in patients with delayed development and severely decreased thalamic or cerebellar perfusion in patients with spastic quadriplegia. In patients with spastic diplegia, a mild decrease in perfusion was observed in the thalamus. In the quantified data, thalamic perfusion was lowest in patients with spastic quadriplegia and highest in patients with delayed development, but there were no statistically significant differences. Conclusion: Brain SPECT with Tc-99m ECD has a role in the detection of perfusion abnormalities in patients with delayed development and cerebral palsy

  13. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  14. Teaching Picture Naming to Two Adolescents with Autism Spectrum Disorders Using Systematic Instruction and Speech-Generating Devices

    Science.gov (United States)

    Kagohara, Debora M.; van der Meer, Larah; Achmadi, Donna; Green, Vanessa A.; O'Reilly, Mark F.; Lancioni, Giulio E.; Sutherland, Dean; Lang, Russell; Marschik, Peter B.; Sigafoos, Jeff

    2012-01-01

    We evaluated an intervention aimed at teaching two adolescents with autism spectrum disorders (ASDs) to name pictures using speech-generating devices (SGDs). The effects of intervention were evaluated in two studies using multiple-probe across participants designs. Intervention--consisting of time delay, least-to-most prompting, and differential…

  15. Preschoolers Benefit from Visually Salient Speech Cues

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. It also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  16. Speech, language and swallowing in Huntington's Disease

    Directory of Open Access Journals (Sweden)

    Maryluz Camargo-Mendoza

    2017-04-01

    Full Text Available Huntington's disease (HD) has been described as a genetic condition caused by an expansion of the CAG (cytosine-adenine-guanine) trinucleotide repeat. Depending on the stage of the disease, people may have difficulties in speech, language and swallowing. The purpose of this paper is to describe these difficulties in detail, as well as to provide an account of the speech and language therapy approach to this condition. Regarding speech, characteristics typical of hyperkinetic dysarthria can be found, due to the underlying choreic movements. The speech of people with HD tends to show shorter sentences, with much simpler syntactic structures, and difficulties in tasks that require complex cognitive processing. Moreover, dysphagia may be present and progresses as the disease develops. A timely, comprehensive and effective speech-language intervention is essential to improve the quality of life of affected people and contribute to their communicative welfare.

  17. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  18. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults are also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  19. Tampa Bay International Business Summit Keynote Speech

    Science.gov (United States)

    Clary, Christina

    2011-01-01

    A keynote speech outlining the importance of collaboration and diversity in the workplace. The 20-minute speech describes NASA's challenges and accomplishments over the years and what lies ahead. Topics include: diversity and inclusion principles, international cooperation, Kennedy Space Center planning and development, opportunities for cooperation, and NASA's vision for exploration.

  20. Current Policies and New Directions for Speech-Language Pathology Assistants.

    Science.gov (United States)

    Paul-Brown, Diane; Goldberg, Lynette R

    2001-01-01

    This article provides an overview of current American Speech-Language-Hearing Association (ASHA) policies for the appropriate use and supervision of speech-language pathology assistants with an emphasis on the need to preserve the role of fully qualified speech-language pathologists in the service delivery system. Seven challenging issues surrounding the appropriate use of speech-language pathology assistants are considered. These include registering assistants and approving training programs; membership in ASHA; discrepancies between state requirements and ASHA policies; preparation for serving diverse multicultural, bilingual, and international populations; supervision considerations; funding and reimbursement for assistants; and perspectives on career-ladder/bachelor-level personnel. The formation of a National Leadership Council is proposed to develop a coordinated strategic plan for addressing these controversial and potentially divisive issues related to speech-language pathology assistants. This council would implement strategies for future development in the areas of professional education pertaining to assistant-level supervision, instruction of assistants, communication networks, policy development, research, and the dissemination/promotion of information regarding assistants.

  1. Speech recognition systems on the Cell Broadband Engine

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Jones, H; Vaidya, S; Perrone, M; Tydlitat, B; Nanda, A

    2007-04-20

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  2. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings in mainly English-speaking populations. In a web-based questionnaire 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They graded their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits described as lack of automatization of speech movements were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  3. Priorities of Dialogic Speech Teaching Methodology at Higher Non-Linguistic School

    Directory of Open Access Journals (Sweden)

    Vida Asanavičienė

    2011-04-01

    Full Text Available The article deals with a number of relevant methodological issues. First of all, the author analyses the psychological peculiarities of dialogic speech and states that dialogue is the product of at least two persons. In this view, dialogic speech, unlike monologic speech, happens impromptu and is not prepared in advance. Dialogic speech is mainly situational in character. The linguistic nature of dialogic speech, in the author's opinion, lies in the exchange of replies, which are coherent in structure and function. The author classifies dialogues by the number of replies and by communicative parameters. The basic goal of teaching dialogic speech is to develop the abilities and skills that enable learners to exchange replies. The author distinguishes two basic stages of teaching dialogic speech: 1. training the ability to exchange replies in communicative exercises; 2. developing skills through exercises of a creative nature, such as group dialogue, conversation or debate.

  4. Oral Health Characteristics and Dental Rehabilitation of Children with Global Developmental Delay

    Directory of Open Access Journals (Sweden)

    Saurabh Kumar

    2017-01-01

    Full Text Available Global developmental delay (GDD) is a chronic neurological disturbance which includes defects in one or more developmental domains. The developmental domain can be motor, cognitive, daily activities, speech or language, and social or personal development. The etiology for GDD can be prenatal, perinatal, or postnatal. It can be diagnosed early in childhood as the delay or absence of one or more developmental milestones. Hence the role of pedodontists and pediatricians becomes more crucial in identifying this condition. The diagnosis of GDD requires a detailed history including family history and environmental risk factors followed by physical and neurological examinations. Investigations for GDD include diagnostic laboratory tests, brain imaging, and other evidence-based evaluations. GDD affects multiple developmental domains that not only have direct bearing on maintenance of oral health, but also require additional behavior management techniques to deliver optimal dental care. This paper describes two different spectra of children with GDD. Since the severity of GDD can vary, this paper also discusses the different behavior management techniques that were applied to provide dental treatment in such children.

  5. Oral Health Characteristics and Dental Rehabilitation of Children with Global Developmental Delay.

    Science.gov (United States)

    Kumar, Saurabh; Pai, Deepika; Saran, Runki

    2017-01-01

    Global developmental delay (GDD) is a chronic neurological disturbance which includes defects in one or more developmental domains. The developmental domain can be motor, cognitive, daily activities, speech or language, and social or personal development. The etiology for GDD can be prenatal, perinatal, or postnatal. It can be diagnosed early in childhood as the delay or absence of one or more developmental milestones. Hence the role of pedodontists and pediatricians becomes more crucial in identifying this condition. The diagnosis of GDD requires a detailed history including family history and environmental risk factors followed by physical and neurological examinations. Investigations for GDD include diagnostic laboratory tests, brain imaging, and other evidence-based evaluations. GDD affects multiple developmental domains that not only have direct bearing on maintenance of oral health, but also require additional behavior management techniques to deliver optimal dental care. This paper describes two different spectra of children with GDD. Since the severity of GDD can vary, this paper also discusses the different behavior management techniques that were applied to provide dental treatment in such children.

  6. Comparative efficacy of the picture exchange communication system (PECS) versus a speech-generating device: effects on social-communicative skills and speech development.

    Science.gov (United States)

    Boesch, Miriam C; Wendt, Oliver; Subramanian, Anu; Hsu, Ning

    2013-09-01

    The Picture Exchange Communication System (PECS) and a speech-generating device (SGD) were compared in a study with a multiple baseline, alternating treatment design. The effectiveness of these methods in increasing social-communicative behavior and natural speech production were assessed with three elementary school-aged children with severe autism who demonstrated extremely limited functional communication skills. Results for social-communicative behavior were mixed for all participants in both treatment conditions. Relatively little difference was observed between PECS and SGD conditions. Although findings were inconclusive, data patterns suggest that Phase II of the PECS training protocol is conducive to encouraging social-communicative behavior. Data for speech outcomes did not reveal any increases across participants, and no differences between treatment conditions were observed.

  7. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    Science.gov (United States)

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
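
    For readers unfamiliar with the simulation technique mentioned above, the following Python sketch implements a generic noise-band (channel) vocoder: the signal is split into frequency bands, each band's temporal envelope is used to modulate band-limited noise, and the modulated bands are summed. The number of bands, filter order and band edges are arbitrary choices made here for illustration, not parameters taken from the study.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
        """Return a noise-band vocoded copy of a mono float signal `speech` sampled at `fs` Hz."""
        rng = np.random.default_rng(0)
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # logarithmically spaced band edges
        out = np.zeros_like(speech)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(sos, speech)
            envelope = np.abs(hilbert(band))             # temporal envelope of the analysis band
            carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))  # band-limited noise carrier
            out += envelope * carrier
        out *= np.sqrt(np.mean(speech ** 2) / (np.mean(out ** 2) + 1e-12))  # match input RMS level
        return out

    if __name__ == "__main__":
        fs = 16000
        t = np.arange(0, 1.0, 1.0 / fs)
        demo = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # stand-in signal
        vocoded = noise_vocode(demo, fs, n_bands=6)
        print(vocoded.shape, round(float(vocoded.std()), 3))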

  8. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli did only show a McGurk effect when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  9. Cortical oscillations and entrainment in speech processing during working memory load

    DEFF Research Database (Denmark)

    Hjortkjær, Jens; Märcher-Rørsted, Jonatan; Fuglsang, Søren A

    2018-01-01

    Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we...... developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task....... The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels...

  10. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, or 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest that vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  11. Cleft Palate. Foundations of Speech Pathology Series.

    Science.gov (United States)

    Rutherford, David; Westlake, Harold

    Designed to provide an essential core of information, this book treats normal and abnormal development, structure, and function of the lips and palate and their relationships to cleft lip and cleft palate speech. Problems of personal and social adjustment, hearing, and speech in cleft lip or cleft palate individuals are discussed. Nasal resonance…

  12. Storytelling as an approach to evaluate the child's level of speech development

    Directory of Open Access Journals (Sweden)

    Ljubica Marjanovič Umek

    2004-05-01

    Full Text Available Both in developmental psychology and in linguistics, the child's storytelling is an interesting topic of research from the point of view of evaluating the child's level of speech development, especially of its pragmatic component, and from the point of view of teaching and learning in the preschool period. In the present study, children's storytelling in different situational contexts was analyzed and evaluated: with a picture book without any text, after listening to a text from a picture book, and after a suggested story beginning (i.e., with the introductory sentence given to them). The sample included children of three age groups, approximately 4, 6 and 8 years; each age group had approximately the same numbers of boys and girls. A total of over 300 stories were collected, which were subsequently analyzed and evaluated using a set of story developmental level criteria. Two key criteria were used: story coherence and cohesion. Comparisons by age and gender, as well as by context of storytelling, show significant developmental differences in story content and structure for different age groups, and the important role of storytelling context. Differences in storytelling between boys and girls did not prove statistically significant. The findings also suggest new options and approaches that might be considered for further stimulation of speech development within preschool and primary school curricula.

  13. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    ... most kids with speech and/or language disorders. Speech Disorders, Language Disorders, and Feeding Disorders. A speech ...

  14. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  15. Developmental apraxia of speech in children. Quantitative assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  16. The Beginnings of Danish Speech Perception

    DEFF Research Database (Denmark)

    Østerbye, Torkil

    Little is known about the perception of speech sounds by native Danish listeners. However, the Danish sound system differs in several interesting ways from the sound systems of other languages. For instance, Danish is characterized, among other features, by a rich vowel inventory and by different reductions of speech sounds evident in the pronunciation of the language. This book (originally a PhD thesis) consists of three studies based on the results of two experiments. The experiments were designed to provide knowledge of the perception of Danish speech sounds by Danish adults and infants, in the light of the rich and complex Danish sound system. The first two studies report on native adults’ perception of Danish speech sounds in quiet and noise. The third study examined the development of language-specific perception in native Danish infants at 6, 9 and 12 months of age. The book points...

  17. Inconsistency of speech in children with childhood apraxia of speech, phonological disorders, and typical speech

    Science.gov (United States)

    Iuzzini, Jenya

    There is a lack of agreement on the features used to differentiate Childhood Apraxia of Speech (CAS) from Phonological Disorders (PD). One criterion which has gained consensus is lexical inconsistency of speech (ASHA, 2007); however, no accepted measure of this feature has been defined. Although lexical assessment provides information about consistency of an item across repeated trials, it may not capture the magnitude of inconsistency within an item. In contrast, segmental analysis provides more extensive information about consistency of phoneme usage across multiple contexts and word-positions. The current research compared segmental and lexical inconsistency metrics in preschool-aged children with PD, CAS, and typical development (TD) to determine how inconsistency varies with age in typical and disordered speakers, and whether CAS and PD were differentiated equally well by both assessment levels. Whereas lexical and segmental analyses may be influenced by listener characteristics or speaker intelligibility, the acoustic signal is less vulnerable to these factors. In addition, the acoustic signal may reveal information which is not evident in the perceptual signal. A second focus of the current research was motivated by Blumstein et al.'s (1980) classic study on voice onset time (VOT) in adults with acquired apraxia of speech (AOS), which demonstrated a motor impairment underlying AOS. In the current study, VOT analyses were conducted to determine the relationship between age and group with the voicing distribution for bilabial and alveolar plosives. Findings revealed that 3-year-olds evidenced significantly higher inconsistency than 5-year-olds; segmental inconsistency approached 0% in 5-year-olds with TD, whereas it persisted in children with PD and CAS, suggesting that for children in this age range, inconsistency is a feature of speech disorder rather than typical development (Holm et al., 2007). Likewise, whereas segmental and lexical inconsistency were

  18. The World Report on Disability in relation to the development of speech-language pathology in Viet Nam.

    Science.gov (United States)

    Atherton, Marie; Dung, Nguyễn Thị Ngọc; Nhân, Võ Hoàng

    2013-02-01

    Wylie, McAllister, Davidson, and Marshall (2013) argue that recommendations made within the World Report on Disability provide an opportunity for speech-language pathologists to consider new ways of developing services for people with communication and swallowing disorders. They propose that current approaches to the delivery of speech-language pathology services are largely embedded within the medical model of impairment, thereby limiting the ability of services to meet the needs of people in a holistic manner. In this paper, the criticality of selecting an appropriate service delivery model is discussed within the context of a recently established post-graduate speech therapy education programme in Viet Nam. Driving forces for the implementation of the program will be explored, as will the factors that determined the choice of service delivery. Opportunities and challenges to the long-term viability of the program and the program's potential to meet the needs of persons with communication and swallowing disorders in Viet Nam will be considered.

  19. Modulation of ovarian steroidogenesis by adiponectin during delayed embryonic development of Cynopterus sphinx.

    Science.gov (United States)

    Anuradha; Krishna, Amitabh

    2014-09-01

    The aim of the present study was to evaluate the role of adiponectin in ovarian steroidogenesis during delayed embryonic development of Cynopterus sphinx. This study showed a significantly low circulating adiponectin level and a decline in expression of adiponectin receptor 1 (AdipoR1) in the ovary during the period of delayed embryonic development as compared with normal development. Adiponectin treatment in vivo during the period of delayed development significantly increased circulating progesterone and estradiol levels, together with increased expression of AdipoR1 in the ovary. The in vitro study confirmed the stimulatory effect of adiponectin on progesterone synthesis. Both in vivo and in vitro studies showed that the effects of adiponectin on ovarian steroidogenesis were mediated through increased expression of luteinizing hormone receptor, steroidogenic acute regulatory protein and the 3β-hydroxysteroid dehydrogenase enzyme. Adiponectin treatment may also promote progesterone synthesis by modulating ovarian angiogenesis, cell survival and the rate of apoptosis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  1. The role of candidate-gene CNTNAP2 in childhood apraxia of speech and specific language impairment.

    Science.gov (United States)

    Centanni, T M; Sanmann, J N; Green, J R; Iuzzini-Seigel, J; Bartlett, C; Sanger, W G; Hogan, T P

    2015-10-01

    Childhood apraxia of speech (CAS) is a debilitating pediatric speech disorder characterized by varying symptom profiles, comorbid deficits, and limited response to intervention. Specific Language Impairment (SLI) is an inherited pediatric language disorder characterized by delayed and/or disordered oral language skills including impaired semantics, syntax, and discourse. To date, the genes associated with CAS and SLI are not fully characterized. In the current study, we evaluated behavioral and genetic profiles of seven children with CAS and eight children with SLI, while ensuring all children were free of comorbid impairments. Deletions within CNTNAP2 were found in two children with CAS but not in any of the children with SLI. These children exhibited average to high performance on language and word reading assessments in spite of poor articulation scores. These findings suggest that genetic variation within CNTNAP2 may be related to speech production deficits. © 2015 Wiley Periodicals, Inc.

  2. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.

  3. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    Science.gov (United States)

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  4. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  5. Modern Tools in Patient-Centred Speech Therapy for Romanian Language

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2016-03-01

    Full Text Available The most common way to communicate with those around us is speech. Suffering from a speech disorder can have negative social effects: from leaving individuals with low confidence and morale to problems with social interaction and the ability to live independently as adults. Speech therapy intervention is a complex process with particular objectives such as: discovery and identification of the speech disorder and directing the therapy towards correction, recovery, compensation, adaptation and social integration of patients. Computer-based speech therapy systems are a real help for therapists by creating a special learning environment. The Romanian language is a phonetic one, with special linguistic particularities. This paper aims to present a few computer-based speech therapy systems developed for the treatment of various speech disorders specific to the Romanian language.

  6. Age-related changes to spectral voice characteristics affect judgments of prosodic, segmental, and talker attributes for child and adult speech

    Science.gov (United States)

    Dilley, Laura C.; Wieland, Elizabeth A.; Gamache, Jessica L.; McAuley, J. Devin; Redford, Melissa A.

    2013-01-01

    Purpose As children mature, changes in voice spectral characteristics covary with changes in speech, language, and behavior. Spectral characteristics were manipulated to alter the perceived ages of talkers’ voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Method Speech was modified by lowering formants and fundamental frequency, for 5-year-old children’s utterances, or raising them, for adult caregivers’ utterances. Next, participants differing in awareness of the manipulation (Exp. 1a) or amount of speech-language training (Exp. 1b) made judgments of prosodic, segmental, and talker attributes. Exp. 2 investigated the effects of spectral modification on intelligibility. Finally, in Exp. 3 trained analysts used formal prosody coding to assess prosodic characteristics of spectrally-modified and unmodified speech. Results Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Conclusions Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work. PMID:23275414
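
    One way to perform this kind of age-related voice manipulation is with Praat's "Change gender" command, accessible from Python through the parselmouth library; the sketch below is a hedged illustration only, since neither the tool nor the parameter values are taken from the study, and the file names, formant shift ratios and pitch targets are assumptions.

    import parselmouth
    from parselmouth.praat import call

    def shift_voice_age(path_in, path_out, formant_ratio, new_pitch_median_hz):
        """Shift formants and median f0 of a recording while leaving duration unchanged."""
        snd = parselmouth.Sound(path_in)
        modified = call(
            snd, "Change gender",
            75, 600,                 # pitch floor / ceiling (Hz) used for the analysis
            formant_ratio,           # values below 1 lower the formants, above 1 raise them
            new_pitch_median_hz,     # target median f0 (Hz)
            1.0,                     # pitch range factor (unchanged)
            1.0,                     # duration factor (unchanged)
        )
        modified.save(path_out, "WAV")

    # Hypothetical usage: make a child's recording sound older by lowering formants and f0,
    # or an adult's recording sound younger by raising them.
    # shift_voice_age("child_utterance.wav", "child_as_adult.wav", 0.85, 120)
    # shift_voice_age("adult_utterance.wav", "adult_as_child.wav", 1.15, 260)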

  7. Development and evaluation of the British English coordinate response measure speech-in-noise test as an occupational hearing assessment tool.

    Science.gov (United States)

    Semeraro, Hannah D; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian A

    2017-10-01

    The studies described in this article outline the design and development of a British English version of the coordinate response measure (CRM) speech-in-noise (SiN) test. Our interest in the CRM is as a SiN test with high face validity for occupational auditory fitness for duty (AFFD) assessment. Study 1 used the method of constant stimuli to measure and adjust the psychometric functions of each target word, producing a speech corpus with equal intelligibility. After ensuring all the target words had similar intelligibility, for Studies 2 and 3 the CRM was presented in an adaptive procedure in stationary speech-spectrum noise to measure speech reception thresholds and evaluate the test-retest reliability of the CRM SiN test. Studies 1 (n = 20) and 2 (n = 30) were completed by normal-hearing civilians. Study 3 (n = 22) was completed by hearing-impaired military personnel. The results display good test-retest reliability (95% confidence interval) for both normal-hearing listeners and listeners with hearing impairment. The British English CRM using stationary speech-spectrum noise is a "ready to use" SiN test, suitable for investigation as an AFFD assessment tool for military personnel.
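
    The abstract refers to an adaptive procedure for measuring speech reception thresholds (SRTs) but does not spell out its tracking rule. The Python sketch below shows one simple possibility, a one-up/one-down SNR staircase that converges on the 50%-correct point; the step size, starting SNR and the simulated listener are assumptions made purely for illustration.

    import math
    import random

    def simulated_listener(snr_db, true_srt_db=-6.0, slope=1.0):
        """Respond correctly with a probability given by a logistic psychometric function of SNR."""
        p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_srt_db)))
        return random.random() < p_correct

    def run_adaptive_track(n_trials=30, start_snr_db=4.0, step_db=2.0):
        """Run a one-up/one-down staircase and return the SRT estimate in dB SNR."""
        snr, reversal_snrs, last_direction = start_snr_db, [], None
        for _ in range(n_trials):
            correct = simulated_listener(snr)
            direction = "down" if correct else "up"        # harder after a hit, easier after a miss
            if last_direction is not None and direction != last_direction:
                reversal_snrs.append(snr)                  # record the SNR at each reversal
            last_direction = direction
            snr += -step_db if correct else step_db
        tail = reversal_snrs[-6:] if reversal_snrs else [snr]
        return sum(tail) / len(tail)                       # average the later reversals

    if __name__ == "__main__":
        random.seed(1)
        print(f"Estimated SRT: {run_adaptive_track():.1f} dB SNR")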

  8. Automatic Speech Acquisition and Recognition for Spacesuit Audio Systems

    Science.gov (United States)

    Ye, Sherry

    2015-01-01

    NASA has a widely recognized but unmet need for novel human-machine interface technologies that can facilitate communication during astronaut extravehicular activities (EVAs), when loud noises and strong reverberations inside spacesuits make communication challenging. WeVoice, Inc., has developed a multichannel signal-processing method for speech acquisition in noisy and reverberant environments that enables automatic speech recognition (ASR) technology inside spacesuits. The technology reduces noise by exploiting differences between the statistical nature of signals (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, ASR accuracy can be improved to the level at which crewmembers will find the speech interface useful. System components and features include beam forming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, and ASR decoding. Arithmetic complexity models were developed and will help designers of real-time ASR systems select proper tasks when confronted with constraints in computational resources. In Phase I of the project, WeVoice validated the technology. The company further refined the technology in Phase II and developed a prototype for testing and use by suited astronauts.
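
    The system described above exploits spatial and temporal differences between speech and noise across multiple microphones. As a generic illustration of that class of processing (not WeVoice's actual algorithm), the NumPy sketch below implements a classic delay-and-sum beamformer; the microphone geometry, steering direction and noise levels are invented.

    import numpy as np

    def delay_and_sum(mics, fs, mic_positions_m, source_angle_deg, c=343.0):
        """Align and average channel signals (rows of `mics`) for a far-field source at the given azimuth."""
        angle = np.deg2rad(source_angle_deg)
        to_source = np.array([np.cos(angle), np.sin(angle)])   # unit vector pointing toward the source
        lead_s = mic_positions_m @ to_source / c               # each mic leads the array origin by this many seconds
        n = mics.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        out = np.zeros(n)
        for channel, lead in zip(mics, lead_s):
            spectrum = np.fft.rfft(channel) * np.exp(-2j * np.pi * freqs * lead)  # delay back into alignment
            out += np.fft.irfft(spectrum, n)
        return out / mics.shape[0]

    if __name__ == "__main__":
        fs, n = 16000, 16000
        positions = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0], [0.15, 0.0]])  # 4-mic line array (metres)
        t = np.arange(n) / fs
        target = np.sin(2 * np.pi * 400 * t)
        rng = np.random.default_rng(0)
        mics = np.stack([target + 0.5 * rng.standard_normal(n) for _ in positions])
        # Broadside source (90 degrees): no inter-channel delay in this toy case,
        # so the beamformer simply averages away the uncorrelated noise.
        enhanced = delay_and_sum(mics, fs, positions, source_angle_deg=90.0)
        print("per-mic noise rms:", round(float(np.std(mics[0] - target)), 3),
              "beamformed noise rms:", round(float(np.std(enhanced - target)), 3))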

  9. Audiovisual integration in children listening to spectrally degraded speech.

    Science.gov (United States)

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
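
    Noise vocoding of the kind used here (dividing speech into frequency bands, extracting each band's amplitude envelope, and re-imposing it on band-limited noise, with the number of bands controlling spectral degradation) can be sketched briefly. The code below is a simplified illustration assuming SciPy/NumPy, Butterworth analysis filters, and rectification plus low-pass filtering for envelope extraction; band edges and filter orders are arbitrary choices, not those of the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_bands=8, lo=100.0, hi=7000.0):
    """Replace spectral detail with band-limited noise, keeping per-band envelopes.
    lo and hi must both lie below the Nyquist frequency fs / 2."""
    edges = np.geomspace(lo, hi, n_bands + 1)                    # log-spaced band edges
    env_sos = butter(2, 30.0, btype="low", fs=fs, output="sos")  # 30 Hz envelope smoother
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        envelope = sosfiltfilt(env_sos, np.abs(band))            # rectify + low-pass
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += envelope * carrier                                # fewer bands = more degraded
    return out / np.max(np.abs(out))                             # normalise to +/- 1
```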

  10. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    Full Text Available The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance.The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience.The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  11. Kisspeptin regulates ovarian steroidogenesis during delayed embryonic development in the fruit bat, Cynopterus sphinx.

    Science.gov (United States)

    Anuradha; Krishna, Amitabh

    2017-11-01

    Cynopterus sphinx, a fruit bat, undergoes delayed embryonic development during the winter months, a period that corresponds to low levels of progesterone and estradiol synthesis by the ovary. Kisspeptins (KPs) are a group of neuropeptide hormones that act via G-protein coupled receptor 54 (GPR54) to stimulate hypothalamic secretion of Gonadotropin-releasing hormone, thereby regulating ovarian steroidogenesis, folliculogenesis, and ovulation. GPR54 is also expressed in the ovary, suggesting a direct role for KPs in ovarian steroidogenesis. The aim of present study was to determine if a low serum level of KP is responsible for reduced progesterone and estradiol levels during the period of delayed embryonic development in C. sphinx. Indeed, low serum KP abundance corresponded to reduced expression of GPR54 in ovarian luteal cells during the period of delayed development compared to normal development. In vitro and in vivo treatment with KP increased GPR54 abundance, via Extracellular signal regulated kinase and its downstream mediators, leading to increased progesterone synthesis in the ovary during delayed embryonic development. KP treatment also increased cholesterol uptake and elevated expression of Luteinizing hormone receptor and Steroid acute regulatory protein in the ovary, suggesting that elevation in circulating KP during delayed embryonic development may reactivate luteal activity. KPs may also enhance cell survival (BCL-2, reduced Caspase 3 activity) and angiogenesis (Vascular endothelium growth factor) during this period. The findings of this study thus demonstrate a regulatory role for KPs in the maintenance of luteal steroidogenesis during pregnancy in C. sphinx. © 2017 Wiley Periodicals, Inc.

  12. The Effects of Macroglossia on Speech: A Case Study

    Science.gov (United States)

    Mekonnen, Abebayehu Messele

    2012-01-01

    This article presents a case study of speech production in a 14-year-old Amharic-speaking boy. The boy had developed secondary macroglossia, related to a disturbance of growth hormones, following a history of normal speech development. Perceptual analysis combined with acoustic analysis and static palatography is used to investigate the specific…

  13. Measures to Evaluate the Effects of DBS on Speech Production

    Science.gov (United States)

    Weismer, Gary; Yunusova, Yana; Bunton, Kate

    2011-01-01

    The purpose of this paper is to review and evaluate measures of speech production that could be used to document effects of Deep Brain Stimulation (DBS) on speech performance, especially in persons with Parkinson disease (PD). A small set of evaluative criteria for these measures is presented first, followed by consideration of several speech physiology and speech acoustic measures that have been studied frequently and reported on in the literature on normal speech production, and speech production affected by neuromotor disorders (dysarthria). Each measure is reviewed and evaluated against the evaluative criteria. Embedded within this review and evaluation is a presentation of new data relating speech motions to speech intelligibility measures in speakers with PD, amyotrophic lateral sclerosis (ALS), and control speakers (CS). These data are used to support the conclusion that at the present time the slope of second formant transitions (F2 slope), an acoustic measure, is well suited to make inferences to speech motion and to predict speech intelligibility. The use of other measures should not be ruled out, however, and we encourage further development of evaluative criteria for speech measures designed to probe the effects of DBS or any treatment with potential effects on speech production and communication skills. PMID:24932066
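
    The F2-slope measure endorsed here is, in essence, the rate of change of the second formant across a vowel transition. A rough way to compute it, given a formant track, is a straight-line fit of F2 against time over the transition interval; the sketch below assumes NumPy and an F2 track already extracted by a formant tracker, with times in seconds and frequencies in Hz.

```python
import numpy as np

def f2_slope(times_s, f2_hz):
    """Least-squares slope of an F2 transition, in Hz per second."""
    slope, _intercept = np.polyfit(times_s, f2_hz, deg=1)
    return slope

# Hypothetical transition: F2 rising from 1200 Hz to 1800 Hz over 60 ms.
t = np.linspace(0.0, 0.06, 13)
f2 = np.linspace(1200.0, 1800.0, 13)
print(f2_slope(t, f2))   # ~10000 Hz/s; shallower slopes are associated with lower intelligibility
```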

  14. Asymmetric coupling between gestures and speech during reasoning

    NARCIS (Netherlands)

    Hoekstra, Lisette

    2017-01-01

    When children learn, insights displayed in gestures typically precede insights displayed in speech. In this study, we investigated how this leading role of gestures in cognitive development is evident in (and emerges from) the dynamic coupling between gestures and speech during one task. We

  15. Toward a Natural Speech Understanding System

    Science.gov (United States)

    1989-10-01

    Miyawaki et al. (1975) investigated the /ra/-/la/ continuum with English and Japanese speakers. In order to evaluate some of the claims of the learning theory of speech recognition, a computer model was developed.

  16. Clear Speech - Mere Speech? How segmental and prosodic speech reduction shape the impression that speakers create on listeners

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2017-01-01

    This study asked whether variation in the degree of reduction also has a systematic effect on the attributes we ascribe to the speaker who produces the speech signal. A perception experiment was carried out for German in which 46 listeners judged whether or not speakers showing 3 different combinations of segmental and prosodic reduction levels (unreduced, moderately reduced, strongly reduced) are appropriately described by 13 physical, social, and cognitive attributes. The experiment shows that clear speech is not mere speech, and less clear speech is not just reduced either. Rather, results revealed a complex interplay of reduction levels and perceived speaker attributes in which moderate reduction can make a better impression on listeners than no reduction. In addition to its relevance in reduction models and theories, this interplay is instructive for various fields of speech application from social robotics to charisma...

  17. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  18. Neurodevelopmental delay in children exposed in utero to hyperemesis gravidarum.

    Science.gov (United States)

    Fejzo, Marlena S; Magtira, Aromalyn; Schoenberg, Frederic Paik; Macgibbon, Kimber; Mullin, Patrick M

    2015-06-01

    The purpose of this study is to determine the frequency of emotional, behavioral, and learning disorders in children exposed in utero to hyperemesis gravidarum (HG) and to identify prognostic factors for these disorders. Neurodevelopmental outcomes of 312 children from 203 mothers with HG were compared to neurodevelopmental outcomes from 169 children from 89 unaffected mothers. Then the clinical profiles of patients with HG and a normal child outcome were compared to the clinical profiles of patients with HG and a child with neurodevelopmental delay to identify prognostic factors. Binary responses were analyzed using either a Chi-square or Fisher Exact test and continuous responses were analyzed using a t-test. Children exposed in utero to HG have a 3.28-fold increase in odds of a neurodevelopmental diagnosis including attention disorders, learning delay, sensory disorders, and speech and language delay. We found no evidence for increased risk of 13 emotional, behavioral, and learning disorders, including autism, intellectual impairment, and obsessive-compulsive disorder. However, the study was not sufficiently powered to detect rare conditions. Medications, treatments, and preterm birth were not associated with an increased risk for neurodevelopmental delay. Women with HG are at a significantly increased risk of having a child with neurodevelopmental delay. Common antiemetic treatments were not linked to neurodevelopmental delay, but early symptoms may play a role. There is an urgent need to address whether aggressive treatment that includes vitamin and nutrient supplementation in women with early symptoms of severe nausea of pregnancy decreases the risk of neurodevelopmental delay. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. The Apraxia of Speech Rating Scale: a tool for diagnosis and description of apraxia of speech.

    Science.gov (United States)

    Strand, Edythe A; Duffy, Joseph R; Clark, Heather M; Josephs, Keith

    2014-01-01

    The purpose of this report is to describe an initial version of the Apraxia of Speech Rating Scale (ASRS), a scale designed to quantify the presence or absence, relative frequency, and severity of characteristics frequently associated with apraxia of speech (AOS). In this paper we report intra-judge and inter-judge reliability, as well as indices of validity, for the ASRS which was completed for 133 adult participants with a neurodegenerative speech or language disorder, 56 of whom had AOS. The overall inter-judge ICC among three clinicians was 0.94 for the total ASRS score and 0.91 for the number of AOS characteristics identified as present. Intra-judge ICC measures were high, ranging from 0.91 to 0.98. Validity was demonstrated on the basis of strong correlations with independent clinical diagnosis, as well as strong correlations of ASRS scores with independent clinical judgments of AOS severity. Results suggest that the ASRS is a potentially useful tool for documenting the presence and severity of characteristics of AOS. At this point in its development it has good potential for broader clinical use and for better subject description in AOS research. Learning outcomes: (1) the reader will be able to explain characteristics of apraxia of speech; (2) the reader will be able to demonstrate use of a rating scale to document the presence and severity of speech characteristics; (3) the reader will be able to explain the reliability and validity of the ASRS. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  1. Differences between the production of [s] and [ʃ] in the speech of adults, typically developing children, and children with speech sound disorders: An ultrasound study.

    Science.gov (United States)

    Francisco, Danira Tavares; Wertzner, Haydée Fiszbein

    2017-01-01

    This study describes the criteria that are used in ultrasound to measure the differences between the tongue contours that produce [s] and [ʃ] sounds in the speech of adults, typically developing children (TDC), and children with speech sound disorder (SSD) with the phonological process of palatal fronting. Overlapping images of the tongue contours that resulted from 35 subjects producing the [s] and [ʃ] sounds were analysed to select 11 spokes on the radial grid that were spread over the tongue contour. The difference was calculated between the mean contour of the [s] and [ʃ] sounds for each spoke. A cluster analysis produced groups with some consistency in the pattern of articulation across subjects and differentiated adults and TDC to some extent and children with SSD with a high level of success. Children with SSD were less likely to show differentiation of the tongue contours between the articulation of [s] and [ʃ].

  2. Luteal cell steroidogenesis in relation to delayed embryonic development in the Indian short-nosed fruit bat, Cynopterus sphinx.

    Science.gov (United States)

    Meenakumari, Karukayil J; Banerjee, Arnab; Krishna, Amitabh

    2009-01-01

    The primary aim of this study was to determine the possible cause of slow or delayed embryonic development in Cynopterus sphinx by investigating morphological and steroidogenic changes in the corpus luteum (CL) and circulating hormone concentrations during two pregnancies of a year. This species showed delayed post-implantational embryonic development during gastrulation of the first pregnancy. Morphological features of the CL showed normal luteinization during both pregnancies. The CL did not change significantly in luteal cell size during the delay period of the first pregnancy as compared with the second pregnancy. The circulating progesterone and 17beta-estradiol concentrations were significantly lower during the period of delayed embryonic development as compared with the same stage of embryonic development during the second pregnancy. We also showed a marked decline in the activity of 3beta-hydroxysteroid dehydrogenase, P450 side chain cleavage enzyme, and steroidogenic acute regulatory peptide in the CL during the delay period. This may cause low circulating progesterone and estradiol synthesis and consequently delay embryonic development. What causes the decrease in steroidogenic factors in the CL during the period of delayed development in C. sphinx is under investigation.

  3. Idaho's Three-Tiered System for Speech-Language Paratherapist Training and Utilization.

    Science.gov (United States)

    Longhurst, Thomas M.

    1997-01-01

    Discusses the development and current implementation of Idaho's three-tiered system of speech-language paratherapists. Support personnel providing speech-language services to learners with special communication needs in educational settings must obtain one of three certification levels: (1) speech-language aide, (2) associate degree…

  4. DEVELOPING VISUAL NOVEL GAME WITH SPEECH-RECOGNITION INTERACTIVITY TO ENHANCE STUDENTS’ MASTERY ON ENGLISH EXPRESSIONS

    Directory of Open Access Journals (Sweden)

    Elizabeth Anggraeni Amalo

    2017-11-01

    Full Text Available The teaching of English expressions has traditionally been done through conversation samples in the form of written texts, audio recordings, and videos. Meanwhile, the development of computer-aided learning technology has made autonomous language learning possible. Games, as computer-aided learning products, can serve as a medium for delivering educational content such as language teaching and learning. The visual novel is a conversational game genre well suited to English-expressions material. Unlike other, click-based visual novel games, the visual novel game in this research implements speech recognition as the interaction trigger. Hence, this paper elaborates how visual novel games can be used to deliver English expressions with speech-recognition commands for interaction. The research used the Research and Development (R&D) method with an experimental design, using control and experimental groups to measure effectiveness in enhancing students' mastery of English expressions. ANOVA was used to test for significant differences between the control and experimental groups. The results of this development and experiment are expected to benefit English teaching and learning, especially of English expressions.
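
    A speech-recognition interaction trigger of the kind described here can be prototyped with a general-purpose recognition library. The sketch below is a hypothetical illustration using the SpeechRecognition Python package and its free Google web recognizer: the game listens for the learner's utterance and advances only when the expected English expression is heard. It is not the authors' implementation, and the target phrase is a placeholder.

```python
import speech_recognition as sr

def expression_matches(expected_phrase):
    """Listen once and report whether the learner produced the expected expression."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        heard = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return False                      # nothing intelligible was recognised
    return expected_phrase.lower() in heard

# The visual novel would branch on the result, for example:
if expression_matches("nice to meet you"):
    print("Advance to the next scene.")
else:
    print("Prompt the learner to try the expression again.")
```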

  5. Schizophrenia alters intra-network functional connectivity in the caudate for detecting speech under informational speech masking conditions.

    Science.gov (United States)

    Zheng, Yingjun; Wu, Chao; Li, Juanhua; Li, Ruikeng; Peng, Hongjun; She, Shenglin; Ning, Yuping; Li, Liang

    2018-04-04

    Speech recognition under noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits of various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants (healthy controls). Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with the speech-detection performance under the speech-masking conditions. Compared to controls, patients showed altered spatial activity pattern and decreased intra-network FC in the caudate. In people with schizophrenia, the declined speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its functions of suppressing masking-speech signals.

  6. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models (log-logistic, lognormal, and Weibull, each with fixed and with random parameters) are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, the Weibull model with gamma heterogeneity is also considered to compare model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating the subway incident delay. First, the results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption, and crashes involving a casualty. Vehicle failure has the least impact on the increment of subway operation incident delay. According to these results, several possible measures, such as the use of short-distance and wireless communication technology (e.g., Wi-Fi and ZigBee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.
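
    Fitting a parametric AFT model of the kind compared here can be done with standard survival-analysis libraries. The sketch below assumes the lifelines package, a dataframe with an incident-delay duration column and an event indicator, and a few illustrative rows; the data and column names are hypothetical, not the Hong Kong dataset, and random-parameter (mixed) AFT variants would need more specialised tooling.

```python
import pandas as pd
from lifelines import LogLogisticAFTFitter

# Hypothetical incident records: delay in minutes, whether the delay was fully
# observed, and one binary cause indicator (all values are illustrative only).
df = pd.DataFrame({
    "delay_min":   [12, 45, 8, 90, 30, 22, 60, 15, 50, 25],
    "observed":    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "power_cable": [0, 1, 0, 1, 0, 0, 1, 0, 0, 1],
})

aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="delay_min", event_col="observed")
print(aft.summary)   # positive covariate coefficients lengthen the expected delay
```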

  7. Automated speech quality monitoring tool based on perceptual evaluation

    OpenAIRE

    Vozňák, Miroslav; Rozhon, Jan

    2010-01-01

    The paper deals with a speech quality monitoring tool that we developed in accordance with PESQ (Perceptual Evaluation of Speech Quality); the tool runs automatically and calculates the MOS (Mean Opinion Score). Results are stored in a database and used in a research project investigating how meteorological conditions influence speech quality in a GSM network. The meteorological station, which is located on our university campus, provides information about temperature,...
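
    Computing a PESQ-based MOS estimate for a monitored call, as the tool described here does, can be sketched with a community implementation of the ITU-T P.862 algorithm. The snippet below assumes the `pesq` Python package and 16-kHz reference/degraded recordings of the same utterance; the file names are placeholders, and this is not the authors' tool.

```python
from scipy.io import wavfile
from pesq import pesq   # community implementation of ITU-T P.862

# Reference: the clean prompt injected into the network under test.
# Degraded: the same prompt recorded at the far end of the GSM/VoIP path.
fs_ref, ref = wavfile.read("reference_16k.wav")
fs_deg, deg = wavfile.read("degraded_16k.wav")
assert fs_ref == fs_deg == 16000

mos_lqo = pesq(fs_ref, ref, deg, "wb")   # wide-band mode; use "nb" for narrow-band
print(f"PESQ MOS-LQO: {mos_lqo:.2f}")    # e.g. stored alongside the weather data
```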

  8. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  9. Performance Assessment of Dynaspeak Speech Recognition System on Inflight Databases

    National Research Council Canada - National Science Library

    Barry, Timothy

    2004-01-01

    .... To aid in the assessment of various commercially available speech recognition systems, several aircraft speech databases have been developed at the Air Force Research Laboratory's Human Effectiveness Directorate...

  10. Melatonin regulates delayed embryonic development in the short-nosed fruit bat, Cynopterus sphinx.

    Science.gov (United States)

    Banerjee, Arnab; Meenakumari, K J; Udin, S; Krishna, A

    2009-12-01

    The aim of the present study was to evaluate the seasonal variation in serum melatonin levels and their relationship to the changes in the serum progesterone level, ovarian steroidogenesis, and embryonic development during two successive pregnancies of Cynopterus sphinx. Circulating melatonin concentrations showed two peaks; one coincided with the period of low progesterone synthesis and delayed embryonic development, whereas the second peak coincided with regressing corpus luteum. This finding suggests that increased serum melatonin level during November-December may be responsible for delayed embryonic development by suppressing progesterone synthesis. The study showed increased melatonin receptors (MTNR1A and MTNR1B) in the corpus luteum and in the utero-embryonic unit during the period of delayed embryonic development. The in vitro study showed that a high dose of melatonin suppressed progesterone synthesis, whereas a lower dose of melatonin increased progesterone synthesis by the ovary. The effects of melatonin on ovarian steroidogenesis are mediated through changes in the expression of peripheral-type benzodiazepine receptor, P450 side chain cleavage enzyme, and LH receptor proteins. This study further showed a suppressive impact of melatonin on the progesterone receptor (PGR) in the utero-embryonic unit; this effect might contribute to delayed embryonic development in C. sphinx. The results of the present study thus suggest that a high circulating melatonin level has a dual contribution in retarding embryonic development in C. sphinx by impairing progesterone synthesis as well as by inhibiting progesterone action by reducing expression of PGR in the utero-embryonic unit.

  11. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986

  12. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Directory of Open Access Journals (Sweden)

    Wendy Doubé

    2018-04-01

    Full Text Available Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  13. Prolactin modulates luteal activity in the short-nosed fruit bat, Cynopterus sphinx during delayed embryonic development.

    Science.gov (United States)

    Anuradha; Krishna, Amitabh

    2017-07-01

    The aim of this study was to evaluate the role of prolactin as a modulator of luteal steroidogenesis during the period of delayed embryonic development in Cynopterus sphinx. A marked decline in circulating prolactin levels was noted during the months of November through December coinciding with the period of decreased serum progesterone and delayed embryonic development. The seasonal changes in serum prolactin levels correlated positively with circulating progesterone (P) level, but inversely with circulating melatonin level during first pregnancy showing delayed development in Cynopterus sphinx. The results also showed decreased expression of prolactin receptor-short form (PRL-RS) both in the corpus luteum and in the utero-embryonic unit during the period of delayed embryonic development. Bats treated in vivo with prolactin during the period of delayed development showed significant increase in serum progesterone and estradiol levels together with significant increase in the expression of PRL-RS, luteinizing hormone receptor (LH-R), steroidogenic acute receptor protein (STAR) and 3β-hydroxysteroid dehydrogenase (3β-HSD) in the ovary. Prolactin stimulated ovarian angiogenesis (vascular endothelial growth factor) and cell survival (B-cell lymphoma 2) in vivo. Significant increases in ovarian progesterone production and the expression of prolactin-receptor, LH-R, STAR and 3β-HSD proteins were noted following the exposure of LH or prolactin in vitro during the delayed period. In conclusion, short-day associated increased melatonin level may be responsible for decreased prolactin release during November-December. The decline in prolactin level might play a role in suppressing P and estradiol-17β (E2) estradiol levels thereby causing delayed embryonic development in C. sphinx. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Children with autism spectrum disorders who do not develop phrase speech in the preschool years.

    Science.gov (United States)

    Norrelgen, Fritjof; Fernell, Elisabeth; Eriksson, Mats; Hedvall, Åsa; Persson, Clara; Sjölin, Maria; Gillberg, Christopher; Kjellmer, Liselotte

    2015-11-01

    There is uncertainty about the proportion of children with autism spectrum disorders who do not develop phrase speech during the preschool years. The main purpose of this study was to examine this ratio in a population-based community sample of children. The cohort consisted of 165 children (141 boys, 24 girls) with autism spectrum disorders aged 4-6 years followed longitudinally over 2 years during which time they had received intervention at a specialized autism center. In this study, data collected at the 2-year follow-up were used. Three categories of expressive language were defined: nonverbal, minimally verbal, and phrase speech. Data from the Vineland Adaptive Behavior Scales-II were used to classify expressive language. A secondary objective of the study was to analyze factors that might be linked to verbal ability, namely, child age, cognitive level, autism subtype and severity of core autism symptoms, developmental regression, epilepsy or other medical conditions, and intensity of intervention. The proportion of children who met the criteria for nonverbal, minimally verbal, and phrase speech were 15%, 10%, and 75%, respectively. The single most important factor linked to expressive language was the child's cognitive level, and all children classified as being nonverbal or minimally verbal had intellectual disability. © The Author(s) 2014.

  15. Rule-Based Storytelling Text-to-Speech (TTS) Synthesis

    Directory of Open Access Journals (Sweden)

    Ramli Izzad

    2016-01-01

    Full Text Available In recent years, various real-life applications such as talking books, gadgets, and humanoid robots have drawn attention to research in expressive speech synthesis. Speech synthesis is widely used in many applications, but there is a growing need for expressive speech synthesis, especially for communication and robotics. In this paper, global and local rules are developed to convert neutral speech to storytelling-style speech for the Malay language. To generate the rules, modifications of prosodic parameters such as pitch, intensity, duration, tempo, and pauses are considered. These modifications are derived from prosodic analysis of a story collected from an experienced female and an experienced male storyteller. The global and local rules are applied at the sentence level and synthesized using HNM. Subjective tests are conducted to evaluate the quality of the synthesized storytelling speech for both rule sets, based on naturalness, intelligibility, and similarity to the original storytelling speech. The results show that the global rules give better results than the local rules

  16. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  17. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  18. Speech Rate Entrainment in Children and Adults With and Without Autism Spectrum Disorder.

    Science.gov (United States)

    Wynn, Camille J; Borrie, Stephanie A; Sellers, Tyra P

    2018-05-03

    Conversational entrainment, a phenomenon whereby people modify their behaviors to match their communication partner, has been evidenced as critical to successful conversation. It is plausible that deficits in entrainment contribute to the conversational breakdowns and social difficulties exhibited by people with autism spectrum disorder (ASD). This study examined speech rate entrainment in children and adult populations with and without ASD. Sixty participants including typically developing children, children with ASD, typically developed adults, and adults with ASD participated in a quasi-conversational paradigm with a pseudoconfederate. The confederate's speech rate was digitally manipulated to create slow and fast speech rate conditions. Typically developed adults entrained their speech rate in the quasi-conversational paradigm, using a faster rate during the fast speech rate conditions and a slower rate during the slow speech rate conditions. This entrainment pattern was not evident in adults with ASD or in children populations. Findings suggest that speech rate entrainment is a developmentally acquired skill and offers preliminary evidence of speech rate entrainment deficits in adults with ASD. Impairments in this area may contribute to the conversational breakdowns and social difficulties experienced by this population. Future work is needed to advance this area of inquiry.

  19. Neural networks engaged in short-term memory rehearsal are disrupted by irrelevant speech in human subjects.

    Science.gov (United States)

    Kopp, Franziska; Schröger, Erich; Lipka, Sigrid

    2004-01-02

    Rehearsal mechanisms in human short-term memory are increasingly understood in the light of both behavioural and neuroanatomical findings. However, little is known about the cooperation of participating brain structures and how such cooperations are affected when memory performance is disrupted. In this paper we use EEG coherence as a measure of synchronization to investigate rehearsal processes and their disruption by irrelevant speech in a delayed serial recall paradigm. Fronto-central and fronto-parietal theta (4-7.5 Hz), beta (13-20 Hz), and gamma (35-47 Hz) synchronizations are shown to be involved in our short-term memory task. Moreover, the impairment in serial recall due to irrelevant speech was preceded by a reduction of gamma band coherence. Results suggest that the irrelevant speech effect has its neural basis in the disruption of left-lateralized fronto-central networks. This stresses the importance of gamma band activity for short-term memory operations.
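
    EEG coherence of the kind reported here (fronto-central or fronto-parietal synchronization within theta, beta, and gamma bands) is typically the magnitude-squared coherence between two channels, averaged across the band. A minimal sketch using SciPy is shown below; the band edges follow the abstract, but the sampling rate, epoch length, and windowing parameters are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band):
    """Mean magnitude-squared coherence between channels x and y within `band` (Hz)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=fs * 2)   # 2-second Welch segments
    lo, hi = band
    return cxy[(f >= lo) & (f <= hi)].mean()

fs = 250                                    # hypothetical sampling rate
frontal = np.random.randn(fs * 30)          # placeholder 30-s retention-interval epochs
parietal = np.random.randn(fs * 30)
for name, band in {"theta": (4, 7.5), "beta": (13, 20), "gamma": (35, 47)}.items():
    print(name, band_coherence(frontal, parietal, fs, band))
```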

  20. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  1. [Effect of speech estimation on social anxiety].

    Science.gov (United States)

    Shirotsuki, Kentaro; Sasagawa, Satoko; Nomura, Shinobu

    2009-02-01

    This study investigates the effect of speech estimation on social anxiety to further understanding of this characteristic of Social Anxiety Disorder (SAD). In the first study, we developed the Speech Estimation Scale (SES) to assess negative estimation before giving a speech which has been reported to be the most fearful social situation in SAD. Undergraduate students (n = 306) completed a set of questionnaires, which consisted of the Short Fear of Negative Evaluation Scale (SFNE), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS), and the SES. Exploratory factor analysis showed an adequate one-factor structure with eight items. Further analysis indicated that the SES had good reliability and validity. In the second study, undergraduate students (n = 315) completed the SFNE, SIAS, SPS, SES, and the Self-reported Depression Scale (SDS). The results of path analysis showed that fear of negative evaluation from others (FNE) predicted social anxiety, and speech estimation mediated the relationship between FNE and social anxiety. These results suggest that speech estimation might maintain SAD symptoms, and could be used as a specific target for cognitive intervention in SAD.

  2. Automated analysis of free speech predicts psychosis onset in high-risk youths

    Science.gov (United States)

    Bedi, Gillinder; Carrillo, Facundo; Cecchi, Guillermo A; Slezak, Diego Fernández; Sigman, Mariano; Mota, Natália B; Ribeiro, Sidarta; Javitt, Daniel C; Copelli, Mauro; Corcoran, Cheryl M

    2015-01-01

    Background/Objectives: Psychiatry lacks the objective clinical tests routinely used in other specializations. Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals. AIMS: In this proof-of-principle study, our aim was to test automated speech analyses combined with Machine Learning to predict later psychosis onset in youths at clinical high-risk (CHR) for psychosis. Methods: Thirty-four CHR youths (11 females) had baseline interviews and were assessed quarterly for up to 2.5 years; five transitioned to psychosis. Using automated analysis, transcripts of interviews were evaluated for semantic and syntactic features predicting later psychosis onset. Speech features were fed into a convex hull classification algorithm with leave-one-subject-out cross-validation to assess their predictive value for psychosis outcome. The canonical correlation between the speech features and prodromal symptom ratings was computed. Results: Derived speech features included a Latent Semantic Analysis measure of semantic coherence and two syntactic markers of speech complexity: maximum phrase length and use of determiners (e.g., which). These speech features predicted later psychosis development with 100% accuracy, outperforming classification from clinical interviews. Speech features were significantly correlated with prodromal symptoms. Conclusions: Findings support the utility of automated speech analysis to measure subtle, clinically relevant mental state changes in emergent psychosis. Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry. PMID:27336038
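
    The semantic-coherence feature described here is commonly computed by embedding consecutive sentences of a transcript in a latent semantic space and taking the cosine similarity between neighbours; unusually low minimum or mean similarity flags derailed speech. The sketch below is a generic approximation using scikit-learn's TF-IDF plus truncated SVD as a stand-in for Latent Semantic Analysis; it is not the authors' exact pipeline, and the transcript is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def semantic_coherence(sentences, n_components=2):
    """Cosine similarity between consecutive sentences in an LSA-style space."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    lsa = TruncatedSVD(n_components=n_components).fit_transform(tfidf)
    sims = [cosine_similarity(lsa[i:i + 1], lsa[i + 1:i + 2])[0, 0]
            for i in range(len(sentences) - 1)]
    return min(sims), sum(sims) / len(sims)   # minimum and mean first-order coherence

transcript = ["I went to see my sister yesterday.",
              "She lives near the park with her dog.",
              "The dog barks at the mailman every morning."]
print(semantic_coherence(transcript))
```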

  3. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  4. Measuring Speech Comprehensibility in Students with Down Syndrome

    Science.gov (United States)

    Yoder, Paul J.; Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based…

  5. Rhetorical Analysis as Introductory Speech: Jumpstarting Student Engagement

    Science.gov (United States)

    Malone, Marc P.

    2012-01-01

    When students enter the basic public speaking classroom, they are asked to develop an introductory speech. This assignment typically focuses on a speech of self-introduction for which there are several pedagogical underpinnings: it provides an immediate and relatively stress-free speaking…

  6. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  7. Out-of-synchrony speech entrainment in developmental dyslexia.

    Science.gov (United States)

    Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel

    2016-08-01

    Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.

  8. An analysis of the masking of speech by competing speech using self-report data.

    Science.gov (United States)

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  9. Optimizing acoustical conditions for speech intelligibility in classrooms

    Science.gov (United States)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with

  10. A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription

    Science.gov (United States)

    Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.

    2012-01-01

    The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…
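
    The "underlying hexadecimal values" referred to here are simply the Unicode code points of the phonetic symbols, which any programming language can expose directly. As a rough illustration (not the authors' procedure), the snippet below maps an IPA transcription to its code points so that downstream analysis can operate on numeric values rather than glyphs; the example transcription is invented.

```python
def transcription_codepoints(transcription):
    """Return (symbol, hex code point) pairs for each character of an IPA string."""
    return [(ch, hex(ord(ch))) for ch in transcription]

# Example: a hypothetical transcription containing a glottal stop (U+0294).
print(transcription_codepoints("ʔapa"))
# [('ʔ', '0x294'), ('a', '0x61'), ('p', '0x70'), ('a', '0x61')]
```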

  11. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  12. Speech and swallowing outcomes in buccal mucosa carcinoma

    Directory of Open Access Journals (Sweden)

    Sunila John

    2011-01-01

    Full Text Available Buccal carcinoma is one of the most common malignant neoplasms among all oral cancers in India. Understanding the role of speech language pathologists (SLPs in the domains of evaluation and management strategies of this condition is limited, especially in the Indian context. This is a case report of a young adult with recurrent squamous cell carcinoma of the buccal mucosa with no deleterious habits usually associated with buccal mucosa carcinoma. Following composite resection, pectoralis major myocutaneous flap reconstruction, he developed severe oral dysphagia and demonstrated unintelligible speech. This case report focuses on the issues of swallowing and speech deficits in buccal mucosa carcinoma that need to be addressed by SLPs, and the outcomes of speech and swallowing rehabilitation and prognostic issues.

  13. Detection of cardiac activity changes from human speech

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav; Mikulec, Martin; Mehic, Miralem

    2015-05-01

    This article describes the detection of changes in blood pressure and pulse from human speech. Indicators of increased physical activity include pulse rate and systolic and diastolic blood pressure. There are many methods for measuring and indicating these parameters, but the measurements must be carried out with devices that are not used in everyday life, and in most cases blood pressure and pulse are measured only after health problems or other adverse feelings have appeared. Research teams are therefore trying to design and implement methods that fit into ordinary human activities. The main objective of this proposal is to reduce the delay between the onset of adverse blood pressure and the appearance of warning signs and symptoms. Speaking is a common and frequent human activity, and it is known that the function of the vocal tract can be affected by changes in heart activity; speech can therefore be a useful signal for detecting physiological changes. A method for detecting human physiological changes by speech processing and artificial neural network classification is described. In this experiment, changes in pulse and blood pressure were induced by physical exercise. The measured subjects were ten healthy volunteers of both sexes, none of whom was a professional athlete. The experiment was divided into phases before, during, and after physical training; pulse, systolic pressure, and diastolic pressure were measured and voice activity was recorded after each phase. The results describe a method for detecting increased cardiac activity from human speech using an artificial neural network.
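
    The classification pipeline outlined in this abstract (acoustic features extracted from speech and fed to an artificial neural network that labels recordings as rest versus elevated cardiac activity) can be sketched generically. The code below is an illustrative stand-in assuming librosa for MFCC features and scikit-learn's multilayer perceptron; it is not the authors' network or feature set, and the file list is hypothetical.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path):
    """Mean MFCC vector summarising one speech recording."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Hypothetical labelled recordings: 0 = rest, 1 = elevated cardiac activity.
paths = ["rest_01.wav", "rest_02.wav", "exercise_01.wav", "exercise_02.wav"]
labels = [0, 0, 1, 1]
X = np.vstack([mfcc_features(p) for p in paths])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
print(clf.predict(mfcc_features("new_recording.wav").reshape(1, -1)))
```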

  14. Specific acoustic models for spontaneous and dictated style in indonesian speech recognition

    Science.gov (United States)

    Vista, C. B.; Satriawan, C. H.; Lestari, D. P.; Widyantoro, D. H.

    2018-03-01

    The performance of an automatic speech recognition system is affected by differences in speech style between the data the model is originally trained upon and incoming speech to be recognized. In this paper, the usage of GMM-HMM acoustic models for specific speech styles is investigated. We develop two systems for the experiments; the first employs a speech style classifier to predict the speech style of incoming speech, either spontaneous or dictated, then decodes this speech using an acoustic model specifically trained for that speech style. The second system uses both acoustic models to recognise incoming speech and decides upon a final result by calculating a confidence score of decoding. Results show that training specific acoustic models for spontaneous and dictated speech styles confers a slight recognition advantage as compared to a baseline model trained on a mixture of spontaneous and dictated training data. In addition, the speech style classifier approach of the first system produced slightly more accurate results than the confidence scoring employed in the second system.
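    The decode-with-both strategy of the second system can be illustrated with a likelihood comparison. In this sketch, each speaking style is represented by a small Gaussian HMM trained on synthetic feature frames (stand-ins for MFCCs), and an utterance is assigned to the style whose model scores it higher, a rough proxy for the decoder confidence score used in the paper.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Toy stand-ins for MFCC frame sequences from the two speaking styles.
spontaneous_frames = rng.normal(0.0, 1.0, size=(500, 13))
dictated_frames = rng.normal(0.8, 1.3, size=(500, 13))

# One small HMM per speaking style (a stand-in for full GMM-HMM acoustic models).
spontaneous_hmm = GaussianHMM(n_components=4, covariance_type="diag",
                              random_state=0).fit(spontaneous_frames)
dictated_hmm = GaussianHMM(n_components=4, covariance_type="diag",
                           random_state=0).fit(dictated_frames)

def pick_style(frames):
    """Score the utterance with both style models and keep the one with the
    higher log-likelihood (a proxy for confidence-based selection)."""
    scores = {"spontaneous": spontaneous_hmm.score(frames),
              "dictated": dictated_hmm.score(frames)}
    return max(scores, key=scores.get)

print(pick_style(rng.normal(0.8, 1.3, size=(200, 13))))  # likely "dictated"
```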

  15. Grammatical realization of Russian etiquette speech genres: The ...

    African Journals Online (AJOL)

    The article is devoted to the application of a grammatical approach in teaching Russian etiquette speech to foreigners. In the practice of teaching Russian as a foreign language, the issues related to the development of etiquette speech genres arise at all stages of language learning, beginning with the first ...

  16. Freedom of Speech and Adolescent Public School Students

    Science.gov (United States)

    Hussain, Murad

    2008-01-01

    Some legal cases on the freedom of speech of adolescent public school students are discussed. It is suggested that schools, social scientists and psychologists should build a social consensus on the extent to which freedom of speech can be allowed for abusive students so as not to affect the development of other students.

  17. Part-of-speech effects on text-to-speech synthesis

    CSIR Research Space (South Africa)

    Schlunz, GI

    2010-11-01

    Full Text Available One of the goals of text-to-speech (TTS) systems is to produce natural-sounding synthesised speech. Towards this end various natural language processing (NLP) tasks are performed to model the prosodic aspects of the TTS voice. One of the fundamental...

  18. A diphone-based speech-synthesis system for British English

    NARCIS (Netherlands)

    Pijper, de J.R.

    1987-01-01

    This article describes a keyboard-to-speech system for British English synthetic speech based on diphones. It concentrates on the development and composition of the diphone inventory and briefly describes a computer program which makes it possible to quickly concatenate diphones and synthesise

  19. Predicting Prosody from Text for Text-to-Speech Synthesis

    CERN Document Server

    Rao, K Sreenivasa

    2012-01-01

    Predicting Prosody from Text for Text-to-Speech Synthesis covers the specific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.

  20. 75 FR 26701 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-05-12

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... proposed compensation rates for Interstate TRS, Speech-to-Speech Services (STS), Captioned Telephone... costs reported in the data submitted to NECA by VRS providers. In this regard, document DA 10-761 also...

  1. Dysarthria in Mandarin-Speaking Children with Cerebral Palsy: Speech Subsystem Profiles

    Science.gov (United States)

    Chen, Li-Mei; Hustad, Katherine C.; Kent, Ray D.; Lin, Yu Ching

    2018-01-01

    Purpose: This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences. Method: Data from 6 children…

  2. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  3. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  4. The Relationship between Socio-Economic Status and Lexical Development

    Science.gov (United States)

    Black, Esther; Peppe, Sue; Gibbon, Fiona

    2008-01-01

    The British Picture Vocabulary Scale, second edition (BPVS-II), a measure of receptive vocabulary, is widely used by speech and language therapists and researchers into speech and language disorders, as an indicator of language delay, but it has frequently been suggested that receptive vocabulary may be more associated with socio-economic status.…

  5. Precursors to language in preterm infants: speech perception abilities in the first year of life.

    Science.gov (United States)

    Bosch, Laura

    2011-01-01

    Language development in infants born very preterm is often compromised. Poor language skills have been described in preschoolers and differences between preterms and full terms, relative to early vocabulary size and morphosyntactical complexity, have also been identified. However, very few data are available concerning early speech perception abilities and their predictive value for later language outcomes. An overview of the results obtained in a prospective study exploring the link between early speech perception abilities and lexical development in the second year of life in a population of very preterm infants (≤32 gestation weeks) is presented. Specifically, behavioral measures relative to (a) native-language recognition and discrimination from a rhythmically distant and a rhythmically close nonfamiliar language, and (b) monosyllabic word-form segmentation, were obtained and compared to data from full-term infants. Expressive vocabulary at two test ages (12 and 18 months, corrected age for gestation) was measured using the MacArthur Communicative Development Inventory. Behavioral results indicated that differences between preterm and control groups were present, but only evident when task demands were high in terms of language processing, selective attention to relevant information and memory load. When responses could be based on acquired knowledge from accumulated linguistic experience, between-group differences were no longer observed. Critically, while preterm infants responded satisfactorily to the native-language recognition and discrimination tasks, they clearly differed from full-term infants in the more challenging activity of extracting and retaining word-form units from fluent speech, a fundamental ability for starting to build a lexicon. Correlations between results from the language discrimination tasks and expressive vocabulary measures could not be systematically established. However, attention time to novel words in the word segmentation

  6. Co-Working: Parents' Conception of Roles in Supporting Their Children's Speech and Language Development

    Science.gov (United States)

    Davies, Karen E.; Marshall, Julie; Brown, Laura J. E.; Goldbart, Juliet

    2017-01-01

    Speech and language therapists' (SLTs) roles include enabling parents to provide intervention. We know little about how parents understand their role during speech and language intervention or whether these change during involvement with SLTs. The theory of conceptual change, applied to parents as adult learners, is used as a framework for…

  7. 75 FR 54040 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-09-03

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...; speech-to-speech (STS); pay-per-call (900) calls; types of calls; and equal access to interexchange... of a report, due April 16, 2011, addressing whether it is necessary for the waivers to remain in...

  8. Speech pathology in ancient India--a review of Sanskrit literature.

    Science.gov (United States)

    Savithri, S R

    1987-12-01

    This paper aims at highlighting the knowledge of the Sanskrit scholars of ancient times in the field of speech and language pathology. The information collected here is mainly from the Sanskrit texts written between 2000 B.C. and 1633 A.D. Some aspects of speech and language that have been dealt with in this review have been elaborately described in the original Sanskrit texts. The present paper, however, being limited in its scope, reviews only the essential facts, but not the details. The purpose is only to give a glimpse of the knowledge that the Sanskrit scholars of those times possessed. In brief, this paper is a review of Sanskrit literature for information on the origin and development of speech and language, speech production, normality of speech and language, and disorders of speech and language and their treatment.

  9. Understanding the nature of apraxia of speech: Theory, analysis, and treatment

    Directory of Open Access Journals (Sweden)

    Kirrie J. Ballard

    2010-08-01

    Full Text Available Researchers have interpreted the behaviours of individuals with acquired apraxia of speech (AOS) as impairment of linguistic phonological processing, motor control, or both. Acoustic, kinematic, and perceptual studies of speech in more recent years have led to significant advances in our understanding of the disorder and wide acceptance that it affects phonetic-motoric planning of speech. However, newly developed methods for studying nonspeech motor control are providing new insights, indicating that the motor control impairment of AOS extends beyond speech and is manifest in nonspeech movements of the oral structures. We present the most recent developments in theory and methods to examine and define the nature of AOS. Theories of the disorder are then related to existing treatment approaches and the efficacy of these approaches is examined. Directions for development of new treatments are posited. It is proposed that treatment programmes driven by a principled account of how the motor system learns to produce skilled actions will provide the most efficient and effective framework for treating motor-based speech disorders. In turn, well-controlled and theoretically motivated studies of treatment efficacy promise to stimulate further development of theoretical accounts and contribute to our understanding of AOS.

  10. Statistical Learning, Syllable Processing, and Speech Production in Healthy Hearing and Hearing-Impaired Preschool Children: A Mismatch Negativity Study.

    Science.gov (United States)

    Studer-Eichenberger, Esther; Studer-Eichenberger, Felix; Koenig, Thomas

    2016-01-01

    The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes in the N2 and early parts of the late discriminative negativity components specifically, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production, as the lone difference found in speech production studies was a mild delay in regulating speech intensity. In addition to previously reported deficits of sound-feature discriminations, the present study results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.

  11. Delayed access to bilateral input alters cortical organization in children with asymmetric hearing

    Directory of Open Access Journals (Sweden)

    Melissa Jane Polonenko

    2018-01-01

    Full Text Available Bilateral hearing in early development protects auditory cortices from reorganizing to prefer the better ear. Yet, such protection could be disrupted by mismatched bilateral input in children with asymmetric hearing who require electric stimulation of the auditory nerve from a cochlear implant in their deaf ear and amplified acoustic sound from a hearing aid in their better ear (bimodal hearing. Cortical responses to bimodal stimulation were measured by electroencephalography in 34 bimodal users and 16 age-matched peers with normal hearing, and compared with the same measures previously reported for 28 age-matched bilateral implant users. Both auditory cortices increasingly favoured the better ear with delay to implanting the deaf ear; the time course mirrored that occurring with delay to bilateral implantation in unilateral implant users. Preference for the implanted ear tended to occur with ongoing implant use when hearing was poor in the non-implanted ear. Speech perception deteriorated with longer deprivation and poorer access to high-frequencies. Thus, cortical preference develops in children with asymmetric hearing but can be avoided by early provision of balanced bimodal stimulation. Although electric and acoustic stimulation differ, these inputs can work sympathetically when used bilaterally given sufficient hearing in the non-implanted ear.

  12. Perceptual statistical learning over one week in child speech production.

    Science.gov (United States)

    Richtsmeier, Peter T; Goffman, Lisa

    2017-07-01

    What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  14. Australian children with cleft palate achieve age-appropriate speech by 5 years of age.

    Science.gov (United States)

    Chacon, Antonia; Parkin, Melissa; Broome, Kate; Purcell, Alison

    2017-12-01

    Children with cleft palate demonstrate atypical speech sound development, which can influence their intelligibility, literacy and learning. There is limited documentation regarding how speech sound errors change over time in cleft palate speech and the effect that these errors have upon mono- versus polysyllabic word production. The objective of this study was to examine the phonetic and phonological speech skills of children with cleft palate at ages 3 and 5. A cross-sectional observational design was used. Eligible participants were aged 3 or 5 years with a repaired cleft palate. The Diagnostic Evaluation of Articulation and Phonology (DEAP) Articulation subtest and a non-standardised list of mono- and polysyllabic words were administered once for each child. The Profile of Phonology (PROPH) was used to analyse each child's speech. N = 51 children with cleft palate participated in the study. Three-year-old children with cleft palate produced significantly more speech errors than their typically-developing peers, but no difference was apparent at 5 years. The 5-year-olds demonstrated greater phonetic and phonological accuracy than the 3-year-old children. Polysyllabic words were more affected by errors than monosyllables in the 3-year-old group only. Children with cleft palate are prone to phonetic and phonological speech errors in their preschool years. Most of these speech errors approximate those of typically-developing children by 5 years. At 3 years, word shape has an influence upon phonological speech accuracy. Speech pathology intervention is indicated to support the intelligibility of these children from their earliest stages of development. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    Science.gov (United States)

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

    Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  16. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Full Text Available Dereverberation is required in various speech processing applications such as handsfree telephony and voice-controlled systems, especially when signals are applied that are recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high-computational load.

  17. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  18. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  19. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
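    The following is a numerical sketch of MAP spectral amplitude estimation in the spirit of the abstract, not the paper's closed-form estimators: complex Gaussian noise is assumed (so the noisy amplitude is Rician-distributed given the clean amplitude), the prior is a simple parametric super-Gaussian of the kind mentioned, and the maximiser is found by grid search. The shape parameters nu and mu are placeholder values.

```python
import numpy as np
from scipy.special import i0e

def map_amplitude(r, noise_var, speech_scale, nu=0.6, mu=1.5, grid=2000):
    """MAP estimate of one clean speech spectral amplitude.

    r            : observed noisy DFT amplitude in this bin
    noise_var    : variance of the complex noise DFT coefficient
    speech_scale : scale of the speech amplitude prior (e.g. sqrt of speech variance)
    nu, mu       : shape parameters of the super-Gaussian prior (assumed values)
    """
    sigma2 = noise_var / 2.0                      # per-component noise variance
    a = np.linspace(1e-6, 3.0 * max(r, speech_scale), grid)
    # Log Rician likelihood of r given amplitude a, written with the scaled
    # Bessel function i0e for numerical stability: log I0(x) = x + log(i0e(x)).
    x = r * a / sigma2
    log_lik = -(r ** 2 + a ** 2) / (2.0 * sigma2) + x + np.log(i0e(x))
    # Super-Gaussian prior p(a) proportional to a**nu * exp(-mu * a / speech_scale).
    log_prior = nu * np.log(a) - mu * a / speech_scale
    return a[np.argmax(log_lik + log_prior)]

print(map_amplitude(r=1.2, noise_var=0.5, speech_scale=1.0))
```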

  20. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets) was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
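    A sketch of the retiming manipulation under simplifying assumptions: anchor points (for example syllable onsets or envelope peaks, supplied in seconds) are mapped onto an isochronous grid by time-stretching each inter-anchor segment to a fixed period. The leading and trailing portions of the signal are ignored here, and the anchor times in the usage comment are hypothetical.

```python
import numpy as np
import librosa

def isochronize(y, sr, anchors_s, period_s=0.4):
    """Retime speech so successive anchor points land on an isochronous grid.

    y         : waveform
    sr        : sample rate
    anchors_s : anchor times in seconds (e.g. syllable onsets or envelope peaks)
    period_s  : target inter-anchor period (0.4 s roughly matches the 2.5 Hz condition)
    """
    pieces = []
    for start, end in zip(anchors_s[:-1], anchors_s[1:]):
        segment = y[int(start * sr):int(end * sr)]
        rate = (end - start) / period_s          # >1 compresses, <1 expands
        pieces.append(librosa.effects.time_stretch(segment, rate=rate))
    return np.concatenate(pieces)

# Usage sketch with hypothetical anchor times:
# y, sr = librosa.load("utterance.wav", sr=16000)
# y_iso = isochronize(y, sr, anchors_s=[0.12, 0.55, 0.93, 1.40], period_s=0.4)
```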

  1. Short-Term and Working Memory Impairments in Early-Implanted, Long-Term Cochlear Implant Users Are Independent of Audibility and Speech Production.

    Science.gov (United States)

    AuBuchon, Angela M; Pisoni, David B; Kronenberger, William G

    2015-01-01

    To determine whether early-implanted, long-term cochlear implant (CI) users display delays in verbal short-term and working memory capacity when processes related to audibility and speech production are eliminated. Twenty-three long-term CI users and 23 normal-hearing controls each completed forward and backward digit span tasks under testing conditions that differed in presentation modality (auditory or visual) and response output (spoken recall or manual pointing). Normal-hearing controls reproduced more lists of digits than the CI users, even when the test items were presented visually and the responses were made manually via touchscreen response. Short-term and working memory delays observed in CI users are not due to greater demands from peripheral sensory processes such as audibility or from overt speech-motor planning and response output organization. Instead, CI users are less efficient at encoding and maintaining phonological representations in verbal short-term memory using phonological and linguistic strategies during memory tasks.

  2. The Galker test of speech reception in noise

    DEFF Research Database (Denmark)

    Lauritsen, Maj-Britt Glenn; Söderström, Margareta; Kreiner, Svend

    2016-01-01

    PURPOSE: We tested "the Galker test", a speech reception in noise test developed for primary care for Danish preschool children, to explore if the children's ability to hear and understand speech was associated with gender, age, middle ear status, and the level of background noise. METHODS: The Galker test is a 35-item audio-visual, computerized word discrimination test in background noise. Included were 370 normally developed children attending day care centers. The children were examined with the Galker test, tympanometry, audiometry, and the Reynell test of verbal comprehension. Parents and daycare teachers completed questionnaires on the children's ability to hear and understand speech. As most of the variables were not assessed using interval scales, non-parametric statistics (Goodman-Kruskal's gamma) were used for analyzing associations with the Galker test score. For comparisons...

  3. Hidden Markov models in automatic speech recognition

    Science.gov (United States)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMM's designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
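    The core quantity behind recognition with distinct stochastic models per speech sound class is the sequence likelihood computed by the forward algorithm. A log-domain sketch for a discrete-observation HMM is given below; the toy model parameters are invented, and a real system would pick the class whose HMM yields the highest likelihood for the observed sequence.

```python
import numpy as np
from scipy.special import logsumexp

def sequence_log_likelihood(obs, log_pi, log_A, log_B):
    """Forward algorithm in the log domain for a discrete-observation HMM.

    obs    : sequence of symbol indices (e.g. vector-quantised speech frames)
    log_pi : (N,)  initial state log-probabilities
    log_A  : (N,N) state transition log-probabilities
    log_B  : (N,M) emission log-probabilities over M discrete symbols
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return logsumexp(alpha)

# Toy two-state model of one speech sound class; recognition would compare this
# likelihood against those from the models of the other sound classes.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.2, 0.8]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(sequence_log_likelihood([0, 1, 2, 2], log_pi, log_A, log_B))
```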

  4. Cortical activity patterns predict robust speech discrimination ability in noise

    Science.gov (United States)

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
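    The classifier described above can be sketched as template matching that slides each class-average spatiotemporal pattern over the single-trial response and keeps the best normalised match, so the stimulus onset time is never needed. The arrays below are synthetic stand-ins for recorded A1 activity, not the study's data.

```python
import numpy as np

def best_match(trial, template):
    """Highest normalised correlation between a template and any equally long
    window of the trial, so the stimulus onset time is not required."""
    n = template.shape[1]
    scores = []
    for lag in range(trial.shape[1] - n + 1):
        window = trial[:, lag:lag + n]
        scores.append(np.sum(window * template) /
                      (np.linalg.norm(window) * np.linalg.norm(template) + 1e-12))
    return max(scores)

def classify_trial(trial, templates):
    """Pick the speech sound whose average spatiotemporal pattern fits best."""
    return max(templates, key=lambda sound: best_match(trial, templates[sound]))

# Toy example: 8 recording sites x 40 time bins per template, 60-bin trial.
rng = np.random.default_rng(0)
templates = {"/d/": rng.normal(size=(8, 40)), "/t/": rng.normal(size=(8, 40))}
trial = np.concatenate([rng.normal(size=(8, 10)), templates["/t/"],
                        rng.normal(size=(8, 10))], axis=1)
trial = trial + 0.3 * rng.normal(size=trial.shape)
print(classify_trial(trial, templates))  # expected: "/t/"
```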

  5. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  6. Development and Evaluation of a Speech Recognition Test for Persian Speaking Adults

    Directory of Open Access Journals (Sweden)

    Mohammad Mosleh

    2001-05-01

    Full Text Available Method and Materials: This research was carried out for the development and evaluation of 25 phonemically balanced word lists for Persian-speaking adults in two separate stages: development and evaluation. In the first stage, in order to balance the lists phonemically, the frequency of occurrence of each of the 29 phonemes (6 vowels and 23 consonants) of the Persian language in adult speech was determined. This section showed some significant differences between some phonemes' frequencies. Then, all Persian monosyllabic words were extracted from the Mo'in Persian dictionary. The semantically difficult words were rejected and the appropriate words chosen according to the judgment of 5 adult native speakers of Persian with a high school diploma. Twelve open-set 25-word lists were prepared. The lists were recorded on magnetic tape in an audio studio by a professional speaker of IRIB. In the second stage, in order to evaluate the test's validity and reliability, 60 normal-hearing adults (30 male, 30 female) were randomly selected and evaluated in a test-retest design. Findings: 1- Normal-hearing adults obtained scores of 92-100 for each list at their MCL through test-retest. 2- No significant difference was observed a/ in test-retest scores for each list (P>0.05), b/ between the lists at test or retest scores (P>0.05), or c/ between sexes (P>0.05). Conclusion: This research is reliable and valid; the lists are phonemically balanced and equal in difficulty, and valuable for the evaluation of speech recognition in Persian-speaking adults.
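    Phonemic balancing of candidate lists can be checked programmatically. The sketch below, with a hypothetical toy lexicon and target distribution, compares a list's phoneme frequencies with target frequencies (such as those estimated from adult speech in the first stage) using total variation distance.

```python
from collections import Counter

def phoneme_distribution(words, lexicon):
    """Relative phoneme frequencies of a word list, given phonemic transcriptions."""
    counts = Counter(p for w in words for p in lexicon[w])
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def imbalance(list_dist, target_dist):
    """Total variation distance between a candidate list and the target
    phoneme frequencies (e.g. estimated from adult conversational speech)."""
    phones = set(list_dist) | set(target_dist)
    return 0.5 * sum(abs(list_dist.get(p, 0.0) - target_dist.get(p, 0.0))
                     for p in phones)

# Toy lexicon with hypothetical transcriptions; lower imbalance = better balanced list.
lexicon = {"sib": ["s", "i", "b"], "nan": ["n", "a", "n"], "gol": ["g", "o", "l"]}
target = {"s": 0.1, "i": 0.2, "b": 0.1, "n": 0.2, "a": 0.2, "g": 0.05, "o": 0.1, "l": 0.05}
print(imbalance(phoneme_distribution(["sib", "nan"], lexicon), target))
```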

  7. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling eLee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.
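    One common way to quantify a temporal integration window from such synchrony judgments is to fit a Gaussian to the proportion of "synchronous" responses across SOAs and read off its width; whether the authors used exactly this fit is an assumption here, and the response proportions below are toy values.

```python
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([-360, -300, -240, -180, -120, -60, 0,
                   60, 120, 180, 240, 300, 360], dtype=float)
p_sync = np.array([0.05, 0.10, 0.20, 0.40, 0.70, 0.90, 0.95,
                   0.90, 0.75, 0.50, 0.30, 0.15, 0.08])  # toy judgments

def gaussian(x, amp, centre, width):
    return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

# A narrower fitted width would indicate a narrower temporal integration window.
(amp, centre, width), _ = curve_fit(gaussian, soa_ms, p_sync, p0=[1.0, 0.0, 150.0])
print(f"window centre = {centre:.0f} ms, width (SD) = {width:.0f} ms")
```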

  8. Collecting and evaluating speech recognition corpora for 11 South African languages

    CSIR Research Space (South Africa)

    Badenhorst, J

    2011-08-01

    Full Text Available In addition, speech-based access to information may empower illiterate or semi-literate people, 98% of whom live in the developing world. SDSs can play a useful role in a wide range of applications. Of particular importance in Africa are applications... speech (i.e. appropriate for the recognition task in terms of the language used, the profile of the speakers, speaking style, etc.) This speech generally needs to be curated and transcribed prior to the development of ASR systems, and for most...

  9. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

    The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test, a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms, was used for this study. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a representation of inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
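    A threshold like the one used here is often estimated by fitting a psychometric function to the proportion of "two sounds" responses across the fixed inter-stimulus intervals and taking the 50% point. The sketch below assumes a logistic function and toy response rates; it is not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

isi_ms = np.array([20.0, 50.0, 100.0, 150.0, 200.0, 300.0])   # inter-stimulus intervals
p_two = np.array([0.10, 0.20, 0.45, 0.75, 0.90, 1.00])        # toy "two sounds" rates

def logistic(x, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# The fitted threshold is the ISI at which "two sounds" is reported half the time.
(threshold, slope), _ = curve_fit(logistic, isi_ms, p_two, p0=[100.0, 0.05])
print(f"gap detection threshold = {threshold:.0f} ms")
```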

  10. DEVELOPMENT OF AUTOMATED SPEECH RECOGNITION SYSTEM FOR EGYPTIAN ARABIC PHONE CONVERSATIONS

    Directory of Open Access Journals (Sweden)

    A. N. Romanenko

    2016-07-01

    Full Text Available The paper deals with the description of several speech recognition systems for Egyptian Colloquial Arabic. The research is based on the CALLHOME Egyptian corpus. A description of both systems is given: a classic one, based on hidden Markov and Gaussian mixture models, and a state-of-the-art one, with deep neural network acoustic models. We have demonstrated the contribution from the usage of speaker-dependent bottleneck features; for their extraction, three extractors based on neural networks were trained. For their training, three datasets in several languages were used: Russian, English and different Arabic dialects. We have studied the possibility of applying a small Modern Standard Arabic (MSA) corpus to derive phonetic transcriptions. The experiments have shown that application of the extractor obtained on the basis of the Russian dataset enables a significant increase in the quality of Arabic speech recognition. We have also found that the usage of phonetic transcriptions based on Modern Standard Arabic decreases recognition quality. Nevertheless, system operation results remain applicable in practice. In addition, we have carried out a study of the application of the obtained models to the keyword search problem. The systems obtained demonstrate good results as compared to those published before. Some ways to improve speech recognition are offered.
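    A bottleneck feature extractor of the kind mentioned can be sketched as a feed-forward network with a narrow hidden layer trained on a frame classification task; after training, the narrow layer's activations are used as features. Layer sizes, targets and data below are placeholders, not the authors' architecture.

```python
import numpy as np
import tensorflow as tf

n_frames, input_dim, n_targets = 1000, 440, 40   # placeholder sizes
x = np.random.randn(n_frames, input_dim).astype("float32")
y = np.random.randint(0, n_targets, size=n_frames)

inputs = tf.keras.Input(shape=(input_dim,))
h1 = tf.keras.layers.Dense(1024, activation="relu")(inputs)
bottleneck = tf.keras.layers.Dense(40, activation="relu", name="bottleneck")(h1)
h2 = tf.keras.layers.Dense(1024, activation="relu")(bottleneck)
outputs = tf.keras.layers.Dense(n_targets, activation="softmax")(h2)

# Train on a frame classification task (e.g. context-dependent phone targets).
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=2, batch_size=64, verbose=0)

# After training, the narrow layer's activations serve as bottleneck features.
extractor = tf.keras.Model(inputs, bottleneck)
features = extractor.predict(x, verbose=0)
print(features.shape)  # (1000, 40)
```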

  11. Filled pause refinement based on the pronunciation probability for lecture speech.

    Directory of Open Access Journals (Sweden)

    Yan-Hua Long

    Full Text Available Nowadays, although automatic speech recognition has become quite proficient in recognizing or transcribing well-prepared fluent speech, the transcription of speech that contains many disfluencies remains problematic, such as spontaneous conversational and lecture speech. Filled pauses (FPs) are the most frequently occurring disfluencies in this type of speech. Most recent studies have shown that FPs are widely believed to increase the error rates for state-of-the-art speech transcription, primarily because most FPs are not well annotated or provided in training data transcriptions and because of the similarities in acoustic characteristics between FPs and some common non-content words. To enhance the speech transcription system, we propose a new automatic refinement approach to detect FPs in British English lecture speech transcription. This approach combines the pronunciation probabilities for each word in the dictionary and acoustic language model scores for FP refinement through a modified speech recognition forced-alignment framework. We evaluate the proposed approach on the Reith Lectures speech transcription task, in which only imperfect training transcriptions are available. Successful results are achieved for both the development and evaluation datasets. Acoustic models trained on different styles of speech genres have been investigated with respect to FP refinement. To further validate the effectiveness of the proposed approach, speech transcription performance has also been examined using systems built on training data transcriptions with and without FP refinement.

  12. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  13. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  14. Interfacing COTS Speech Recognition and Synthesis Software to a Lotus Notes Military Command and Control Database

    Science.gov (United States)

    Carr, Oliver

    2002-10-01

    Speech recognition and synthesis technologies have become commercially viable over recent years. Two current market leading products in speech recognition technology are Dragon NaturallySpeaking and IBM ViaVoice. This report describes the development of speech user interfaces incorporating these products with Lotus Notes and Java applications. These interfaces enable data entry using speech recognition and allow warnings and instructions to be issued via speech synthesis. The development of a military vocabulary to improve user interaction is discussed. The report also describes an evaluation in terms of speed of the various speech user interfaces developed using Dragon NaturallySpeaking and IBM ViaVoice with a Lotus Notes Command and Control Support System Log database.

  15. Interaction Between Syndromic and Non-Syndromic Factors Affecting Speech and Language Development in Treacher-Collins Syndrome

    Directory of Open Access Journals (Sweden)

    Marziyeh Poorjavad

    2011-09-01

    Full Text Available Background: Treacher-Collins syndrome is a congenital craniofacial disorder with multiple anomalies. This syndrome affects the maxilla, mandible, eyes, middle and outer ears, and soft palate. Conductive hearing loss due to deformities of the middle and external ears is prevalent. The characteristics of this syndrome include multiple and serious threats to normal communication development in children. In this study, the speech and language features of a Persian-speaking child with this syndrome are presented. Case: The case was an 8-year-old girl with Treacher-Collins syndrome and bilateral moderate conductive hearing loss due to an atretic canal. In language and speech assessments, moderate hypernasality, numerous compensatory errors and morphosyntactic deficits were observed. There were 13 phonemes that were incorrectly produced in at least one position. Besides, she used 22 types of phonological processes that are abnormal or that disappear before the age of three in typically developing Persian-speaking children. Conclusion: Moderate hearing loss, velopharyngeal incompetency, malocclusion and dental anomalies, attention deficit/hyperactivity disorder (ADHD) and environmental factors resulted in severe speech and language disorders in this case. These disorders affected her academic performance as well. Moderate hypernasality, numerous compensatory errors, and excessive and abnormal use of phonological processes have not been reported as prevalent characteristics of Treacher-Collins syndrome in other sources.

  16. Capitalising on North American speech resources for the development of a South African English large vocabulary speech recognition system

    CSIR Research Space (South Africa)

    Kamper, H

    2014-11-01

    Full Text Available The NCHLT speech...

  17. Parenting Practices and Associations with Development Delays among Young Children in Dominican Republic.

    Science.gov (United States)

    Uwemedimo, Omolara Thomas; Howlader, Afrin; Pierret, Giselina

    According to the World Health Organization, >200 million children in low- and middle-income countries experience developmental delays. However, household structure and parenting practices have been minimally explored as potential correlates of developmental delay in low- and middle-income countries, despite their potential as areas for intervention. The objective of the study was to examine associations of developmental delays with use of World Health Organization-recommended parenting practices among a clinic-based cohort of children aged 6-60 months in La Romana, Dominican Republic. This study was conducted among 74 caregiver-child pairs attending the growth-monitoring clinic at Hospital Francisco Gonzalvo in June 2015. The Malawi Developmental Assessment Tool was adapted and performed on each child to assess socioadaptive, fine motor, gross motor, and language development. The IMCI Household Level Survey Questionnaire was used to assess parenting practices. Fisher's exact test was used to determine associations significant at P < .05. A proportion of the children had a delay in at least 1 developmental domain. Most caregivers used scolding (43.2%) or spanking (44%) for child discipline. Children who were disciplined by spanking and scolding were more likely to have language delay (P = .007) and socioadaptive delay (P = .077), respectively. On regression analysis, children with younger primary caregivers had 7 times higher odds of language delay (adjusted odds ratio [AOR]: 7.35, 95% confidence interval [CI]: 1.52-35.61) and 4 times greater odds of any delay (AOR: 4.72, 95% CI: 1.01-22.22). In addition, children punished by spanking had 5 times higher odds of having language delay (AOR: 5.04, 95% CI: 1.13-22.39). Parenting practices such as harsh punishment and lack of positive parental reinforcement were found to have strong associations with language and socioadaptive delays. Likewise, delays were also more common among children with younger caregivers. Copyright © 2017 Icahn
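    Adjusted odds ratios such as those reported above are typically obtained from a multivariable logistic regression. The sketch below uses statsmodels on simulated data, with variable names chosen to mirror the abstract; the simulated outcome and coefficients are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 74  # same cohort size as the study, but the data here are simulated
df = pd.DataFrame({
    "young_caregiver": rng.integers(0, 2, n),
    "spanking": rng.integers(0, 2, n),
})
# Simulate an outcome loosely related to the predictors (illustration only).
logit = -1.0 + 1.2 * df["young_caregiver"] + 0.9 * df["spanking"]
df["language_delay"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Fit the logistic model and exponentiate coefficients to get adjusted odds ratios.
X = sm.add_constant(df[["young_caregiver", "spanking"]].astype(float))
fit = sm.Logit(df["language_delay"].astype(float), X).fit(disp=False)
summary = pd.concat([np.exp(fit.params).rename("AOR"),
                     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                    axis=1)
print(summary)
```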

  18. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  19. Computer-Mediated Input, Output and Feedback in the Development of L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell

    2015-01-01

    This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…

  20. THE DIRECTIVE SPEECH ACTS USED IN ENGLISH SPEAKING CLASS

    Directory of Open Access Journals (Sweden)

    Muhammad Khatib Bayanuddin

    2016-12-01

    Full Text Available This research presents an analysis of the directive speech acts used in the English speaking class of third-semester students of the English study program of IAIN STS Jambi. The aims of this research are to describe the types of directive speech acts and the politeness strategies found in the English speaking class. This research used a descriptive qualitative method, which was used to describe clearly the types and politeness strategies of directive speech acts based on the data from the English speaking class. The results showed that several types and politeness strategies of directive speech acts occur in the English speaking class: requestives, questions, requirements, prohibitives, permissives, and advisories as types, as well as on-record indirect strategies (prediction statements, strong obligation statements, possibility statements, weaker obligation statements, volitional statements), direct strategies (imperatives, performatives), and nonsentential strategies as politeness strategies. It is hoped that the findings of this research will add to knowledge of linguistics, especially directive speech acts, and can be developed in future research. Key words: directive speech acts, types, politeness strategies.

  1. Computational neural modeling of speech motor control in childhood apraxia of speech (CAS).

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.; Guenther, F.H.; Brumberg, J.

    2009-01-01

    PURPOSE: Childhood apraxia of speech (CAS) has been associated with a wide variety of diagnostic descriptions and has been shown to involve different symptoms during successive stages of development. In the present study, the authors attempted to associate the symptoms of CAS in a particular

  2. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    Science.gov (United States)

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  3. Relationship between individual differences in speech processing and cognitive functions.

    Science.gov (United States)

    Ou, Jinghua; Law, Sam-Po; Fung, Roxana

    2015-12-01

    A growing body of research has suggested that cognitive abilities may play a role in individual differences in speech processing. The present study took advantage of a widespread linguistic phenomenon of sound change to systematically assess the relationships between speech processing and various components of attention and working memory in the auditory and visual modalities among typically developed Cantonese-speaking individuals. The individual variations in speech processing are captured in an ongoing sound change-tone merging in Hong Kong Cantonese, in which typically developed native speakers are reported to lose the distinctions between some tonal contrasts in perception and/or production. Three groups of participants were recruited, with a first group of good perception and production, a second group of good perception but poor production, and a third group of good production but poor perception. Our findings revealed that modality-independent abilities of attentional switching/control and working memory might contribute to individual differences in patterns of speech perception and production as well as discrimination latencies among typically developed speakers. The findings not only have the potential to generalize to speech processing in other languages, but also broaden our understanding of the omnipresent phenomenon of language change in all languages.

  4. Systematic Studies of Modified Vocalization: The Effect of Speech Rate on Speech Production Measures during Metronome-Paced Speech in Persons Who Stutter

    Science.gov (United States)

    Davidow, Jason H.

    2014-01-01

    Background: Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control…

  5. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Language and Speech Improvement for Kindergarten and First Grade. A Supplementary Handbook.

    Science.gov (United States)

    Cole, Roberta; And Others

    The 16-unit language and speech improvement handbook for kindergarten and first grade students contains an introductory section which includes a discussion of the child's developmental speech and language characteristics, a sound development chart, a speech and hearing language screening test, the Henja articulation test, and a general outline of…

  7. Treatment of Children with Speech Oral Placement Disorders (OPDs): A Paradigm Emerges

    Science.gov (United States)

    Bahr, Diane; Rosenfeld-Johnson, Sara

    2010-01-01

    Epidemiological research was used to develop the Speech Disorders Classification System (SDCS). The SDCS is an important speech diagnostic paradigm in the field of speech-language pathology. This paradigm could be expanded and refined to also address treatment while meeting the standards of evidence-based practice. The article assists that process…

  8. Delayed puberty in girls

    Science.gov (United States)

    ... sexual development - girls; Pubertal delay - girls; Constitutional delayed puberty ... In most cases of delayed puberty, growth changes just begin later than usual, sometimes called a late bloomer. Once puberty begins, it progresses normally. This pattern runs ...

  9. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  10. Freedom of Speech: Its Exercise and Its Interpretation

    Science.gov (United States)

    Turner, David A.

    2010-01-01

    Professor Roy Harris (2009) criticises me for ignoring freedom of speech in order to focus on "soft" issues, such as game theory, decision theory and chaos theory. In this response, I accept most of his arguments relating to freedom of speech, but argue that, in order to develop better systems of education, we need to pay more attention to the…

  11. Indonesian Text-To-Speech System Using Diphone Concatenative Synthesis

    Directory of Open Access Journals (Sweden)

    Sutarman

    2015-02-01

    Full Text Available In this paper, we describe the design and development of a database of Indonesian diphones for speech synthesis, using segments of recorded speech to convert text to speech and save the output as an audio file such as WAV or MP3. Designing and developing the Indonesian diphone database involved several steps. First, the diphone database was built: a list of sample words containing the diphones was created, prioritizing diphones located in the middle of a word and otherwise at the beginning or end; the sample words were recorded and segmented; and the diphones were created with the tool Diphone Studio 1.3. Second, the system was developed using Microsoft Visual Delphi 6.0, including the conversion of input numbers, acronyms, words, and sentences into diphone representations. Two kinds of conversion are applied in the Indonesian text-to-speech system: one converts the text to be spoken into phonemes, and the other converts the phonemes into speech. The method used in this research is diphone concatenative synthesis, in which recorded sound segments are collected; every segment consists of a diphone (two phonemes). This synthesizer can produce speech with a high level of naturalness. The Indonesian text-to-speech system can differentiate specific phonemes, as in ‘Beda’ and ‘Bedak’, but samples of other specific words still need to be added to the system. The system can also handle texts with abbreviations, and there is a facility to add such words.
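
    The diphone concatenation step described in this record can be illustrated with a minimal Python sketch. The diphone file names, sampling rate, and the simple linear cross-fade at segment boundaries are assumptions for illustration only; the record's own tooling (Diphone Studio 1.3, Microsoft Visual Delphi 6.0) is not reproduced here.

        # Minimal diphone-concatenation sketch (illustrative only).
        # Assumes a directory of pre-segmented mono diphone recordings named
        # like "b-e.wav", "e-d.wav", "d-a.wav", all sharing one sampling rate.
        import numpy as np
        from scipy.io import wavfile

        def load_diphone(name, directory="diphones"):
            rate, samples = wavfile.read(f"{directory}/{name}.wav")
            return rate, samples.astype(np.float32)

        def concatenate_diphones(names, overlap_ms=10):
            """Join diphone waveforms with a short linear cross-fade at each boundary."""
            rate, out = load_diphone(names[0])
            for name in names[1:]:
                rate_next, nxt = load_diphone(name)
                assert rate_next == rate, "all diphones must share one sampling rate"
                n = int(rate * overlap_ms / 1000)
                blended = out[-n:] * np.linspace(1.0, 0.0, n) + nxt[:n] * np.linspace(0.0, 1.0, n)
                out = np.concatenate([out[:-n], blended, nxt[n:]])
            return rate, out

        # Hypothetical usage: synthesize the word "beda" from three diphones.
        rate, samples = concatenate_diphones(["b-e", "e-d", "d-a"])
        wavfile.write("beda.wav", rate, samples.astype(np.int16))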

  12. Free Speech Yearbook 1978.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The 17 articles in this collection deal with theoretical and practical freedom of speech issues. The topics include: freedom of speech in Marquette Park, Illinois; Nazis in Skokie, Illinois; freedom of expression in the Confederate States of America; Robert M. LaFollette's arguments for free speech and the rights of Congress; the United States…

  13. Lexical and Phonological Development in Children with Childhood Apraxia of Speech--A Commentary on Stoel-Gammon's "Relationships between Lexical and Phonological Development in Young Children"

    Science.gov (United States)

    Velleman, Shelley L.

    2011-01-01

    Although not the focus of her article, phonological development in young children with speech sound disorders of various types is highly germane to Stoel-Gammon's discussion (this issue) for at least two primary reasons. Most obvious is that typical processes and milestones of phonological development are the standards and benchmarks against which…

  14. Surgical and radiological effects upon the development of speech after total laryngectomy

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1981-01-01

    The data presented examine the relationship between postlaryngectomy communication method and both the extent of total laryngectomy and the use of radiation therapy. The expectations of speech therapy providers were also examined. The author interviewed 60 laryngectomy patients who were six months to 3 1/2 years postsurgery. Surgeries were grouped into four categories and correlated with communication method. The relationship was statistically significant, with the most apparent deterrent effect exhibited only for the most extreme surgical excisions. There was no relationship with the use of radiation therapy. In many cases speech therapy providers' expectations were not supported by the data.

  15. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  16. Prosodic influences on speech production in children with specific language impairment and speech deficits: kinematic, acoustic, and transcription evidence.

    Science.gov (United States)

    Goffman, L

    1999-12-01

    It is often hypothesized that young children's difficulties with producing weak-strong (iambic) prosodic forms arise from perceptual or linguistically based production factors. A third possible contributor to errors in the iambic form may be biological constraints, or biases, of the motor system. In the present study, 7 children with specific language impairment (SLI) and speech deficits were matched to same-age peers. Multiple levels of analysis, including kinematic (modulation and stability of movement), acoustic, and transcription, were applied to children's productions of iambic (weak-strong) and trochaic (strong-weak) prosodic forms. Findings suggest that a motor bias toward producing unmodulated rhythmic articulatory movements, similar to that observed in canonical babbling, contributes to children's acquisition of metrical forms. Children with SLI and speech deficits show less mature segmental and speech motor systems, as well as decreased modulation of movement in later-developing iambic forms. Further, components of prosodic and segmental acquisition develop independently and at different rates.

  17. Oral breathing and speech disorders in children

    Directory of Open Access Journals (Sweden)

    Silvia F. Hitos

    2013-07-01

    Conclusion: Mouth breathing can affect speech development, socialization, and school performance. Early detection of mouth breathing is essential to prevent and minimize its negative effects on the overall development of individuals.

  18. Speech and language abilities of children with the familial form of 22q11.2 deletion syndrome

    Directory of Open Access Journals (Sweden)

    Rakonjac Marijana

    2016-01-01

    Full Text Available The 22q11.2 Deletion Syndrome (22q11.2DS), which encompasses Shprintzen syndrome, DiGeorge and velocardiofacial syndrome, is the most common microdeletion syndrome in humans, with an estimated incidence of approximately 1 in 4,000 live births. After Down syndrome, it is the second most common genetic syndrome associated with congenital heart malformations. The mode of inheritance of 22q11.2DS is autosomal dominant. In approximately 72-94% of cases the deletion has occurred de novo, while in 6 to 28% of patients the deletion was inherited from a parent. As part of a multidisciplinary study, we examined the speech and language abilities of members of two families with the inherited form of 22q11.2DS. The presence of the 22q11.2 microdeletion was revealed by fluorescence in situ hybridization (FISH) and/or multiplex ligation-dependent probe amplification (MLPA). In one family we detected a 1.5 Mb 22q11.2 microdeletion, while in the other family we found a 3 Mb microdeletion. Patients from both families showed delays in cognitive, socio-emotional, speech and language development. Furthermore, we found considerable variability in the phenotypic characteristics of 22q11.2DS and in the degree of speech-language pathology, not only between different families with the 22q11.2 deletion but also among members of the same family. In addition, we detected no correlation between the phenotype and the size of the 22q11.2 microdeletion.

  19. Role of adiponectin in delayed embryonic development of the short-nosed fruit bat, Cynopterus sphinx.

    Science.gov (United States)

    Anuradha; Krishna, Amitabh

    2014-12-01

    The aim of this study was to evaluate the role of adiponectin in the delayed embryonic development of Cynopterus sphinx. Adiponectin receptor (ADIPOR1) abundance was first observed to be lower during the delayed versus non-delayed periods of utero-embryonic unit development. The effects of adiponectin treatment on embryonic development were then evaluated during the period of delayed development. Exogenous treatment increased the in vivo rate of embryonic development, as indicated by an increase in weight, ADIPOR1 levels in the utero-embryonic unit, and histological changes in embryonic development. Treatment with adiponectin during embryonic diapause showed a significant increase in circulating progesterone and estradiol concentrations, and in production of their receptors in the utero-embryonic unit. The adiponectin-induced increase in estradiol synthesis was correlated with increased cell survival (BCL2 protein levels) and cell proliferation (PCNA protein levels) in the utero-embryonic unit, suggesting an indirect effect of adiponectin via estradiol synthesis by the ovary. An in vitro study further confirmed the in vivo findings that adiponectin treatment increases PCNA levels together with increased uptake of glucose by increasing the abundance of glucose transporter 8 (GLUT8) in the utero-embryonic unit. The in vitro study also revealed that adiponectin, together with estradiol but not alone, significantly increased ADIPOR1 protein levels. Thus, adiponectin works in concert with estradiol to increase glucose transport to the utero-embryonic unit and promote cell proliferation, which together accelerate embryonic development. © 2014 Wiley Periodicals, Inc.

  20. The attitudes of family physicians toward a child with delayed growth and development.

    Science.gov (United States)

    Aker, Servet; Şahin, Mustafa Kürşat; Kınalı, Ömer; Şimşek Karadağ, Elif; Korkmaz, Tuğba

    2017-09-01

    Aim: The purpose of this study was to assess the attitude of family physicians toward a child with delayed growth and development. Primary healthcare professionals play a key role in monitoring growth and development, the best indicator of the child's health status. If delayed growth and development can be detected early, then it is usually possible to restore functioning. This descriptive study was performed in Samsun, Turkey, in May and June 2015. In total, 325 family physicians were included. The study consisted of two parts. In the first session of the research, the story of an 18-month-old child with delayed growth and development was presented using visual materials. An interview between the child's mother and a member of primary healthcare staff was then enacted by two of the authors using role-playing. Subsequently, participants were given the opportunity to ask the mother and member of primary healthcare staff questions about the case. During the sessions, two observers observed the participants, took notes and compared these after the presentation. In the second part of the study, the participants were asked to complete a questionnaire consisting of three open-ended questions. Findings: When asking questions of the mother, family physicians generally used accusatory and judgmental language. One of the questions most commonly put to the mother was 'Do you think you are a good mother?' Family physicians were keen to provide instruction for the patient and relatives. Family physicians to a large extent thought that the problem of a child with delayed growth and development can be resolved through education. Family physicians' manner of establishing relations with the patient and relatives is inappropriate. We therefore think that they should receive on-going in-service training on the subject.

  1. Multisensory integration of speech sounds with letters vs. visual speech : only visual speech induces the mismatch negativity

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Keetels, M.N.; Vroomen, J.H.M.

    2018-01-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect.

  2. Speech Research

    Science.gov (United States)

    Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.

  3. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  4. FOXP2 and the neuroanatomy of speech and language.

    Science.gov (United States)

    Vargha-Khadem, Faraneh; Gadian, David G; Copp, Andrew; Mishkin, Mortimer

    2005-02-01

    That speech and language are innate capacities of the human brain has long been widely accepted, but only recently has an entry point into the genetic basis of these remarkable faculties been found. The discovery of a mutation in FOXP2 in a family with a speech and language disorder has enabled neuroscientists to trace the neural expression of this gene during embryological development, track the effects of this gene mutation on brain structure and function, and so begin to decipher that part of our neural inheritance that culminates in articulate speech.

  5. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
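
    As a worked illustration of the 3.5-Bark criterion discussed above, the short Python sketch below converts formant frequencies to the Bark scale using Traunmüller's (1990) approximation and checks whether two formants fall within the hypothesized integration bandwidth. The formant values are invented for illustration.

        # Check whether two formants lie within the hypothesized 3.5-Bark
        # integration bandwidth, using Traunmüller's (1990) Hz-to-Bark approximation.
        def hz_to_bark(f_hz: float) -> float:
            return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

        def within_integration_band(f1_hz: float, f2_hz: float, limit_bark: float = 3.5) -> bool:
            return abs(hz_to_bark(f1_hz) - hz_to_bark(f2_hz)) <= limit_bark

        # Hypothetical /u/-like formants: F1 = 300 Hz, F2 = 700 Hz.
        f1, f2 = 300.0, 700.0
        print(hz_to_bark(f1), hz_to_bark(f2))    # roughly 3.0 and 6.5 Bark
        print(within_integration_band(f1, f2))   # True: separation is about 3.5 Bark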

  6. [Clinical characteristics and speech therapy of lingua-apical articulation disorder].

    Science.gov (United States)

    Zhang, Feng-hua; Jin, Xing-ming; Zhang, Yi-wen; Wu, Hong; Jiang, Fan; Shen, Xiao-ming

    2006-03-01

    To explore the clinical characteristics and speech therapy of 62 children with lingua-apical articulation disorder. The Peabody Picture Vocabulary Test (PPVT), the Gesell development scales (Gesell), the Wechsler Intelligence Scale for Preschool Children (WPPSI) and a speech test were administered to 62 children aged 3 to 8 years with lingua-apical articulation disorder. The PPVT was used to measure receptive vocabulary skills. The Gesell and WPPSI were used to assess cognitive and non-verbal ability. The speech test was adopted to assess speech development. The children received speech therapy and auxiliary oral-motor functional training once or twice a week. First the target sound was identified according to speech development milestones, then the method of speech localization was used to clarify the correct articulation placement and manner. For children with oral motor dysfunction, it was also necessary to modify food characteristics and administer oral-motor functional training. The 62 cases with apical articulation disorder were classified into four groups. The combined pattern of articulation disorder was the most common (40 cases, 64.5%), the next was apico-dental disorder (15 cases, 24.2%), the third was palatal disorder (4 cases, 6.5%) and the last was linguo-alveolar disorder (3 cases, 4.8%). Substitution errors involving velars were the most common (95.2%), followed by omission errors (30.6%) and absence of aspiration (12.9%). Oral motor dysfunction was found in some children, with problems such as disordered joint movement of tongue and head, unstable jaw, weak tongue strength and poor coordination of tongue movement. Some children had feeding problems such as a preference for soft food, keeping food in the mouth, eating slowly, and poor chewing. After 5 to 18 therapy sessions, the effective rate of speech therapy reached 82.3%. The lingua-apical articulation disorders can be classified into four groups. The combined pattern of the

  7. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Authors and affiliations: Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). 1: CLCG, University of Groningen (NL); 2: HU University of Applied Sciences Utrecht (NL); 3: St. Marienhospital - Vechta, Geriatric Clinic Vechta.

  8. Automated recognition of helium speech. Phase I: Investigation of microprocessor based analysis/synthesis system

    Science.gov (United States)

    Jelinek, H. J.

    1986-01-01

    This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of this project is to develop a method for correcting helium speech, as experienced in diver-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA computer based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess the expected performance of a self-contained real-time system using an identical algorithm. The Final Report details the work carried out to produce the prototype system. The four major project tasks were: (1) a signal processing scheme for converting helium speech to normal-sounding speech was generated; (2) the signal processing scheme was simulated on a general-purpose (LAMBDA) computer, actual helium speech was supplied to the simulation, and the converted speech was generated; (3) an IBM-PC based 14-bit data input/output board was designed and built; and (4) a bibliography of references on speech processing was generated.
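
    The record above does not spell out the correction algorithm, so the following Python sketch shows only a generic frequency-compression approach of the kind commonly used to unscramble helium speech: each short-time spectrum is warped downward by a fixed factor and the signal is resynthesized. The warp factor, STFT settings, and file names are assumptions for illustration, not the report's actual design.

        # Generic helium-speech "unscrambling" sketch: compress the frequency axis
        # of each short-time spectrum by a fixed factor, then resynthesize.
        # Illustrative only; the Phase I report's actual algorithm is not reproduced.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import stft, istft

        def compress_spectrum(frame, factor):
            """Warp one complex spectrum so energy at frequency f appears at f/factor."""
            bins = np.arange(len(frame))
            src = bins * factor  # sample the original spectrum at factor * target bin
            real = np.interp(src, bins, frame.real, right=0.0)
            imag = np.interp(src, bins, frame.imag, right=0.0)
            return real + 1j * imag

        def correct_helium_speech(x, rate, factor=1.5, nperseg=512):
            _, _, Z = stft(x, fs=rate, nperseg=nperseg)
            Zc = np.apply_along_axis(compress_spectrum, 0, Z, factor)
            _, y = istft(Zc, fs=rate, nperseg=nperseg)
            return y

        # Hypothetical input recording and output file names.
        rate, x = wavfile.read("helium_speech.wav")
        y = correct_helium_speech(x.astype(np.float32), rate)
        wavfile.write("corrected_speech.wav", rate, np.clip(y, -32768, 32767).astype(np.int16))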

  9. Cortical oscillations and entrainment in speech processing during working memory load.

    Science.gov (United States)

    Hjortkjaer, Jens; Märcher-Rørsted, Jonatan; Fuglsang, Søren A; Dau, Torsten

    2018-02-02

    Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels of background noise. Increasing WM load at higher n-back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and increased additionally with higher n-back level. The observed alpha-theta power changes are consistent with visual n-back paradigms suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low-frequency cortical activity. Increasing WM load (higher n-back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top-down influence of WM processing on cortical speech entrainment. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
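
    The entrainment measure described above (a linear mapping between the speech envelope and low-frequency cortical activity) can be sketched roughly in Python as below. The band limits, the 64 Hz sampling rate, the simulated signals, and the use of a simple peak lagged correlation in place of the authors' linear-mapping estimator are assumptions for illustration.

        # Rough sketch of cortical speech-envelope "entrainment": extract the speech
        # envelope, band-limit the EEG to low frequencies, and find the peak lagged
        # correlation between the two. Illustrative only; not the study's estimator.
        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        def speech_envelope(speech, fs, cutoff_hz=8.0):
            env = np.abs(hilbert(speech))                       # amplitude envelope
            b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
            return filtfilt(b, a, env)

        def lowpass_eeg(eeg, fs, cutoff_hz=8.0):
            b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
            return filtfilt(b, a, eeg)

        def peak_lagged_correlation(env, eeg, fs, max_lag_s=0.3):
            """Return (best correlation, lag in seconds); positive lag = EEG follows envelope."""
            env = (env - env.mean()) / env.std()
            eeg = (eeg - eeg.mean()) / eeg.std()
            max_lag = int(max_lag_s * fs)
            lags = list(range(-max_lag, max_lag + 1))
            corrs = []
            for lag in lags:
                if lag >= 0:
                    c = np.corrcoef(env[: len(env) - lag or None], eeg[lag:])[0, 1]
                else:
                    c = np.corrcoef(env[-lag:], eeg[: len(eeg) + lag])[0, 1]
                corrs.append(c)
            best = int(np.argmax(np.abs(corrs)))
            return corrs[best], lags[best] / fs

        # Hypothetical example with simulated signals sampled at 64 Hz: the "EEG"
        # is the speech envelope shifted by 100 ms plus noise.
        fs = 64
        t = np.arange(0, 60, 1 / fs)
        speech = np.random.randn(len(t))
        eeg = np.roll(speech_envelope(speech, fs), int(0.1 * fs)) + np.random.randn(len(t))
        r, lag = peak_lagged_correlation(speech_envelope(speech, fs), lowpass_eeg(eeg, fs), fs)
        print(f"peak correlation {r:.2f} at lag {lag * 1000:.0f} ms")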

  10. Development and testing of a decision aid for women considering delayed breast reconstruction.

    Science.gov (United States)

    Metcalfe, Kelly; Zhong, Toni; O'Neill, Anne C; McCready, David; Chan, Linda; Butler, Kate; Brennenstuhl, Sarah; Hofer, Stefan O P

    2018-03-01

    The decision to have post-mastectomy breast reconstruction (PMBR) is highly complex and many women feel ill equipped to make this decision. Decision aids have been advocated to promote patient involvement in decision-making by streamlining and standardizing communication between the patient and the health care professional. In this study, we report on the development and testing of a decision aid (DA) for breast cancer survivors considering delayed PMBR. The DA was developed and evaluated in three phases. The first phase included the development of the DA with input and review by practitioners and key stakeholders. The second phase involved pilot testing of the feasibility and acceptability of the DA with a convenience sample of women with delayed PMBR. The third phase involved a pretest/post-test evaluation of the DA for women who were making decisions about their PMBR options. The DA was developed using the Ottawa Decision Support Framework. In the second phase of the study, 21 women completed the acceptability survey, of whom 100% reported that they would recommend the DA to other women. In the third phase, decisional conflict decreased significantly (p < 0.001) and knowledge increased significantly (p < 0.001) from prior to using the DA to 1-2 weeks after using the DA. The DA is feasible and acceptable to women considering delayed PMBR. Furthermore, the DA is effective at reducing decisional conflict and increasing knowledge about delayed PMBR. The DA is an appropriate tool to be used in addition to standard care in women considering PMBR. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  11. Taking the Danish Speech Trainer from CALL to ICALL

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter

    2015-01-01

    Talebob (Speech Bob) is a newly developed interactive CALL-tool for training Danish speech with special regard to the pronunciation of highly idiomatic phrases. Talebob is currently being tested in primary schools in Nuuk, Hafnarfjörður and Tórshavn (where Danish is taught as an L2). The purpose...

  12. The Use of an Autonomous Pedagogical Agent and Automatic Speech Recognition for Teaching Sight Words to Students with Autism Spectrum Disorder

    Science.gov (United States)

    Saadatzi, Mohammad Nasser; Pennington, Robert C.; Welch, Karla C.; Graham, James H.; Scott, Renee E.

    2017-01-01

    In the current study, we examined the effects of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and constant time delay during the instruction of reading sight words aloud to young adults with autism spectrum disorder. We used a concurrent multiple baseline across participants design to…

  13. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2011-01-01

    In a sample of 46 children aged 4-7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants' speech, prosody, and voice were compared with data from 40 typically-developing children, 13…

  14. Computational Neural Modeling of Speech Motor Control in Childhood Apraxia of Speech (CAS)

    Science.gov (United States)

    Terband, Hayo; Maassen, Ben; Guenther, Frank H.; Brumberg, Jonathan

    2009-01-01

    Purpose: Childhood apraxia of speech (CAS) has been associated with a wide variety of diagnostic descriptions and has been shown to involve different symptoms during successive stages of development. In the present study, the authors attempted to associate the symptoms of CAS in a particular developmental stage with particular…

  15. Associations between speech features and phenotypic severity in Treacher Collins syndrome.

    Science.gov (United States)

    Asten, Pamela; Akre, Harriet; Persson, Christina

    2014-04-28

    Treacher Collins syndrome (TCS, OMIM 154500) is a rare congenital disorder of craniofacial development. Characteristic hypoplastic malformations of the ears, zygomatic arch, mandible and pharynx have been described in detail. However, reports on the impact of these malformations on speech are few. Exploring speech features and investigating if speech function is related to phenotypic severity are essential for optimizing follow-up and treatment. Articulation, nasal resonance, voice and intelligibility were examined in 19 individuals (5-74 years, median 34 years) divided into three groups comprising children 5-10 years (n = 4), adolescents 11-18 years (n = 4) and adults 29 years and older (n = 11). A speech composite score (0-6) was calculated to reflect the variability of speech deviations. TCS severity scores of phenotypic expression and total scores of Nordic Orofacial Test-Screening (NOT-S) measuring orofacial dysfunction were used in analyses of correlation with speech characteristics (speech composite scores). Children and adolescents presented with significantly higher speech composite scores (median 4, range 1-6) than adults (median 1, range 0-5). Nearly all children and adolescents (6/8) displayed speech deviations of articulation, nasal resonance and voice, while only three adults were identified with multiple speech aberrations. The variability of speech dysfunction in TCS was exhibited by individual combinations of speech deviations in 13/19 participants. The speech composite scores correlated with TCS severity scores and NOT-S total scores. Speech composite scores higher than 4 were associated with cleft palate. The percent of intelligible words in connected speech was significantly lower in children and adolescents (median 77%, range 31-99) than in adults (98%, range 93-100). Intelligibility of speech among the children was markedly inconsistent and clearly affecting the understandability. Multiple speech deviations were identified in

  16. The effects of bilingualism on children's perception of speech sounds

    NARCIS (Netherlands)

    Brasileiro, I.

    2009-01-01

    The general topic addressed by this dissertation is that of bilingualism, and more specifically, the topic of bilingual acquisition of speech sounds. The central question in this study is the following: does bilingualism affect children’s perceptual development of speech sounds? The term bilingual

  17. Bridging the Gap Between Speech and Language: Using Multimodal Treatment in a Child With Apraxia.

    Science.gov (United States)

    Tierney, Cheryl D; Pitterle, Kathleen; Kurtz, Marie; Nakhla, Mark; Todorow, Carlyn

    2016-09-01

    Childhood apraxia of speech is a neurologic speech sound disorder in which children have difficulty constructing words and sounds due to poor motor planning and coordination of the articulators required for speech sound production. We report the case of a 3-year-old boy strongly suspected to have childhood apraxia of speech at 18 months of age who used multimodal communication to facilitate language development throughout his work with a speech language pathologist. In 18 months of an intensive structured program, he exhibited atypical rapid improvement, progressing from having no intelligible speech to achieving age-appropriate articulation. We suspect that early introduction of sign language by family proved to be a highly effective form of language development, that when coupled with intensive oro-motor and speech sound therapy, resulted in rapid resolution of symptoms. Copyright © 2016 by the American Academy of Pediatrics.

  18. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    Science.gov (United States)

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…

  19. Improving the speech intelligibility in classrooms

    Science.gov (United States)

    Lam, Choi Ling Coriolanus

    One of the major acoustical concerns in classrooms is the establishment of effective verbal communication between teachers and students. Non-optimal acoustical conditions, resulting in reduced verbal communication, can cause two main problems. First, they can reduce learning efficiency. Second, they can also cause fatigue, stress, vocal strain and health problems, such as headaches and sore throats, among teachers who are forced to compensate for poor acoustical conditions by raising their voices. In addition, inadequate acoustical conditions can encourage the use of public address systems; improper use of such amplifiers or loudspeakers can impair students' hearing. The social costs of poor classroom acoustics, which impair children's learning, are large. This invisible problem has far-reaching implications for learning, but is easily solved. Much research has been carried out, and its findings on classroom acoustics have been accurately and concisely summarized. However, a number of challenging questions remain unanswered. Most objective indices of speech intelligibility are essentially based on studies of Western languages; although several studies of tonal languages such as Mandarin have been conducted, there is much less work on Cantonese. In this research, measurements were made in unoccupied rooms to investigate the acoustical parameters and characteristics of the classrooms. Speech intelligibility tests based on English, Mandarin and Cantonese, together with a survey, were carried out on students aged from 5 to 22 years. The study aims to investigate the differences in intelligibility between English, Mandarin and Cantonese in Hong Kong classrooms. The relationship between the speech transmission index (STI) and Phonetically Balanced (PB) word scores will be developed further, together with an empirical relationship between speech intelligibility in classrooms and the variations
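
    For readers unfamiliar with the STI mentioned above, the simplified Python sketch below shows the core idea of how modulation transfer values are mapped to a transmission index: each value is converted to an apparent signal-to-noise ratio, clipped to +/-15 dB, and rescaled to the range 0-1. The modulation values are invented, and the full standardized procedure (octave-band and modulation-frequency weightings, as in IEC 60268-16) is not reproduced.

        # Simplified Speech Transmission Index (STI) sketch: convert modulation
        # transfer values m to apparent SNRs, clip to +/-15 dB, rescale to [0, 1],
        # and average. Unweighted and illustrative only.
        import numpy as np

        def transmission_index(m):
            snr = 10.0 * np.log10(m / (1.0 - m))   # apparent signal-to-noise ratio, dB
            snr = np.clip(snr, -15.0, 15.0)
            return (snr + 15.0) / 30.0             # map -15..+15 dB onto 0..1

        def simplified_sti(modulation_values):
            return float(np.mean(transmission_index(np.asarray(modulation_values))))

        # Invented modulation transfer values for a hypothetical classroom measurement.
        m_values = [0.95, 0.90, 0.80, 0.70, 0.60, 0.55, 0.50]
        print(f"simplified STI ~ {simplified_sti(m_values):.2f}")   # higher = more intelligible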

  20. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-07-01

    Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6- to 9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6- to 9-month-old infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed up at the age of 14-16 months, using two measures of language development, the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audio-visually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.