WorldWideScience

Sample records for residual speech sound

  1. Phonetic Variability in Residual Speech Sound Disorders: Exploration of Subtypes

    Science.gov (United States)

    Preston, Jonathan L.; Koenig, Laura L.

    2011-01-01

    Purpose: To explore whether subgroups of children with residual speech sound disorders (R-SSDs) can be identified through multiple measures of token-to-token phonetic variability (changes from one spoken production to the next). Method: Children with R-SSDs were recorded during a rapid multisyllabic picture naming task and an oral diadochokinetic…

  2. Structural brain differences in school-age children with residual speech sound errors.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Mencl, W Einar; Frost, Stephen J; Hoeft, Fumiko; Fulbright, Robert K; Landi, Nicole; Grigorenko, Elena L; Seki, Ayumi; Felsenfeld, Susan; Pugh, Kenneth R

    2014-01-01

    The purpose of the study was to identify structural brain differences in school-age children with residual speech sound errors. Voxel based morphometry was used to compare gray and white matter volumes for 23 children with speech sound errors, ages 8;6-11;11, and 54 typically speaking children matched on age, oral language, and IQ. We hypothesized that regions associated with production and perception of speech sounds would differ between groups. Results indicated greater gray matter volumes for the speech sound error group relative to typically speaking controls in bilateral superior temporal gyrus. There was greater white matter volume in the corpus callosum for the speech sound error group, but less white matter volume in right lateral occipital gyrus. Results may indicate delays in neuronal pruning in critical speech regions or differences in the development of networks for speech perception and production. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Identifying Residual Speech Sound Disorders in Bilingual Children: A Japanese-English Case Study

    Science.gov (United States)

    Preston, Jonathan L.; Seki, Ayumi

    2011-01-01

    Purpose: To describe (a) the assessment of residual speech sound disorders (SSDs) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations and (b) how assessment of domains such as speech motor control and phonological awareness can provide a more complete…

  4. Speech Enhancement with Natural Sounding Residual Noise Based on Connected Time-Frequency Speech Presence Regions

    Directory of Open Access Journals (Sweden)

    Karsten Vandborg Sørensen

    2005-01-01

    We propose time-frequency domain methods for noise estimation and speech enhancement. A speech presence detection method is used to find connected time-frequency regions of speech presence. These regions are used by a noise estimation method, and both the speech presence decisions and the noise estimate are used in the speech enhancement method. Different attenuation rules are applied to regions with and without speech presence to achieve enhanced speech with natural sounding attenuated background noise. The proposed speech enhancement method has a computational complexity that makes it feasible for application in hearing aids. An informal listening test shows that the proposed speech enhancement method has significantly higher mean opinion scores than minimum mean-square error log-spectral amplitude (MMSE-LSA) and decision-directed MMSE-LSA.
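
    A minimal sketch of the pipeline this abstract describes: an energy-based speech presence decision over time frames, a noise estimate drawn only from the non-speech frames, and different attenuation rules in speech and non-speech regions. The detector, noise tracker, and attenuation rules below are deliberately crude stand-ins for the paper's methods, not a reimplementation of them:

```python
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, fs, floor_speech=0.3, floor_noise=0.1):
    """Toy region-dependent spectral attenuation (sketch, not the paper's method)."""
    f, t, X = stft(noisy, fs, nperseg=512)
    power = np.abs(X) ** 2

    # Crude speech-presence decision: frames well above the median frame energy.
    frame_energy = power.mean(axis=0)
    presence = frame_energy > 2.0 * np.median(frame_energy)

    # Noise PSD estimated only from frames judged to contain no speech.
    noise_psd = power[:, ~presence].mean(axis=1, keepdims=True)

    # Spectral-subtraction-style gain with a different rule per region:
    # keep speech regions above a gain floor, and attenuate non-speech
    # regions uniformly so the residual noise keeps a natural character.
    gain = np.maximum(1.0 - noise_psd / np.maximum(power, 1e-12), 0.0)
    gain[:, presence] = np.maximum(gain[:, presence], floor_speech)
    gain[:, ~presence] = np.minimum(gain[:, ~presence], floor_noise)

    _, clean = istft(X * gain, fs, nperseg=512)
    return clean
```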

  5. Articulatory Phonetics for Residual Speech Sound Disorders: A Focus on /r/

    Science.gov (United States)

    Boyce, Suzanne E.

    2016-01-01

    Effective treatment for children with Residual Speech Sound Disorders (RSSD) requires in-depth knowledge of articulatory phonetics, but this level of detail may not be provided as part of typical clinical coursework. Incorporating contemporary work in the basic articulatory sciences into clinical training becomes especially important at a time when new imaging technologies such as ultrasound continue to inform our clinical understanding of speech disorders. This is particularly the case for the speech sound most likely to persist among children with RSSD--the North American English rhotic sound, /r/. The goal of this paper is to review important information about articulatory phonetics as it affects children with RSSD who present with /r/ production difficulties. The data presented are largely drawn from ultrasound and Magnetic Resonance Imaging (MRI) studies. This information will be placed into a clinical context by comparing productions of typical adult speakers to successful vs. misarticulated productions of two children with persistent /r/ difficulties. PMID:26458201

  6. Perception of speech sounds in school-age children with speech sound disorders

    Science.gov (United States)

    Preston, Jonathan L.; Irwin, Julia R.; Turcios, Jacqueline

    2015-01-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System (Rvachew, 1994), which has been effectively used to assess preschoolers’ ability to perform goodness judgments, is explored for school-age children with residual speech errors (RSE). However, data suggest that this particular task may not be sensitive to perceptual differences in school-age children. The need for the development of clinical tools for assessment of speech perception in school-age children with RSE is highlighted, and clinical suggestions are provided. PMID:26458198

  7. Perception of Speech Sounds in School-Aged Children with Speech Sound Disorders.

    Science.gov (United States)

    Preston, Jonathan L; Irwin, Julia R; Turcios, Jacqueline

    2015-11-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSE is highlighted, and clinical suggestions are provided.

  8. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine the everyday speech intelligibility of children aged 3 to 6 years with primary speech sound disorders. The research problem concerns the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), the child's friends, other acquaintances, the child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  9. Computer-based speech therapy for childhood speech sound disorders.

    Science.gov (United States)

    Furlong, Lisa; Erickson, Shane; Morris, Meg E

    2017-07-01

    With the current worldwide workforce shortage of Speech-Language Pathologists, new and innovative ways of delivering therapy to children with speech sound disorders are needed. Computer-based speech therapy may be an effective and viable means of addressing service access issues for children with speech sound disorders. To evaluate the efficacy of computer-based speech therapy programs for children with speech sound disorders. Studies reporting the efficacy of computer-based speech therapy programs were identified via a systematic, computerised database search. Key study characteristics, results, main findings and details of computer-based speech therapy programs were extracted. The methodological quality was evaluated using a structured critical appraisal tool. Fourteen studies were identified, and a total of 11 computer-based speech therapy programs were evaluated. The results showed that computer-based speech therapy is associated with positive clinical changes for some children with speech sound disorders. There is a need for collaborative research between computer engineers and clinicians, particularly during the design and development of computer-based speech therapy programs. Evaluation using rigorous experimental designs is required to understand the benefits of computer-based speech therapy. The reader will be able to 1) discuss how computer-based speech therapy has the potential to improve service access for children with speech sound disorders, 2) explain the ways in which computer-based speech therapy programs may enhance traditional tabletop therapy and 3) compare the features of computer-based speech therapy programs designed for different client populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  11. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Mariève Corbeil

    2013-06-01

    Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  12. Speech perception in the presence of other sounds

    Science.gov (United States)

    Darwin, C. J.

    2005-04-01

    The human listener's remarkable ability to recognize speech when it is mixed with other sounds presents a challenge both to models of speech perception and to approaches to speech recognition. This talk will review some of the work on how human listeners can perceive speech in sound mixtures and will try to indicate areas that might be particularly fruitful for future research.

  13. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
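
    The language-agnostic scheme described here, acoustic features scored against class models with no language model, can be sketched as follows. `librosa` and `hmmlearn` are assumed third-party stand-ins, and the 13-MFCC front end and 5-state HMMs are illustrative choices, not the authors' exact feature set or topology:

```python
import numpy as np
import librosa                         # assumed available for MFCC extraction
from hmmlearn.hmm import GaussianHMM   # assumed available for HMM modeling

def mfcc_frames(wav_path):
    """13 MFCCs per frame, shape (n_frames, 13)."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_class_hmm(wav_paths, n_states=5):
    """Fit one HMM to all training segments of a class (LSS or NLSS)."""
    feats = [mfcc_frames(p) for p in wav_paths]
    lengths = [f.shape[0] for f in feats]
    model = GaussianHMM(n_components=n_states, covariance_type="diag")
    model.fit(np.vstack(feats), lengths)
    return model

def classify(segment_path, lss_hmm, nlss_hmm):
    """Label a segment by whichever class HMM gives the higher log-likelihood."""
    feats = mfcc_frames(segment_path)
    return "LSS" if lss_hmm.score(feats) > nlss_hmm.score(feats) else "NLSS"
```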

  14. Precision of working memory for speech sounds.

    Science.gov (United States)

    Joseph, Sabine; Iverson, Paul; Manohar, Sanjay; Fox, Zoe; Scott, Sophie K; Husain, Masud

    2015-01-01

    Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
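
    The mixture model described, continuous responses centred on the target versus categorical responses pulled to vowel prototypes, can be sketched with a simple maximum-likelihood fit. This toy version collapses the formant space to one dimension and uses fabricated illustrative data; the study's actual model and parameterization may differ:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_lik(params, responses, targets, prototypes):
    """Mixture: responses come from a continuous trace around the target
    (weight w) or from the nearest categorical vowel prototype (1 - w)."""
    w, sigma_c, sigma_k = params
    nearest = prototypes[np.argmin(
        np.abs(targets[:, None] - prototypes[None, :]), axis=1)]
    lik = (w * norm.pdf(responses, targets, sigma_c)
           + (1 - w) * norm.pdf(responses, nearest, sigma_k))
    return -np.sum(np.log(lik + 1e-300))

# Hypothetical 1-D formant-space data (arbitrary units).
rng = np.random.default_rng(0)
prototypes = np.array([0.0, 1.0, 2.0])
targets = rng.uniform(0, 2, 200)
nearest = prototypes[np.argmin(np.abs(targets[:, None] - prototypes), axis=1)]
responses = np.where(rng.random(200) < 0.7,
                     targets + rng.normal(0, 0.1, 200),   # continuous recall
                     nearest + rng.normal(0, 0.05, 200))  # categorical recall

fit = minimize(neg_log_lik, x0=[0.5, 0.2, 0.2],
               args=(responses, targets, prototypes),
               bounds=[(0.01, 0.99), (0.01, 2.0), (0.01, 2.0)])
w_hat, sigma_c_hat, sigma_k_hat = fit.x  # mixture weight and noise estimates
```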

  15. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine Signoret

    2011-07-01

    Although it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
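
    Detection performance in tasks like these is conventionally summarized with signal detection measures. A small sketch of a d′ computation from yes/no response counts (the counts are hypothetical, not the study's data), using the common log-linear correction for extreme rates:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Yes/no detection sensitivity, with a log-linear correction
    to avoid infinite z-scores when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for words presented near threshold.
print(d_prime(hits=62, misses=38, false_alarms=12, correct_rejections=88))
```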

  16. When speech sounds like music.

    Science.gov (United States)

    Falk, Simone; Rathcke, Tamara; Dalla Bella, Simone

    2014-08-01

    Repetition can boost memory and perception. However, repeating the same stimulus several times in immediate succession also induces intriguing perceptual transformations and illusions. Here, we investigate the Speech to Song Transformation (S2ST), a massed repetition effect in the auditory modality, which crosses the boundaries between language and music. In the S2ST, a phrase repeated several times shifts to being heard as sung. To better understand this unique cross-domain transformation, we examined the perceptual determinants of the S2ST, in particular the role of acoustics. In two experiments, the effects of two pitch properties and three rhythmic properties on the probability and speed of occurrence of the transformation were examined. Results showed that both pitch and rhythmic properties are key features fostering the transformation. However, some properties proved to be more conducive to the S2ST than others. Stable tonal targets that allowed for the perception of a musical melody led to the S2ST more often and more quickly than scalar intervals. Recurring durational contrasts arising from segmental grouping favoring a metrical interpretation of the stimulus also facilitated the S2ST. This was, however, not the case for a regular beat structure within and across repetitions. In addition, individual perceptual abilities predicted the likelihood of the S2ST. Overall, the study demonstrated that repetition enables listeners to reinterpret specific prosodic features of spoken utterances in terms of musical structures. The findings underline a tight link between language and music, but they also reveal important differences in communicative functions of prosodic structure in the two domains.

  17. Speech Sound Disorders in a Community Study of Preschool Children

    Science.gov (United States)

    McLeod, Sharynne; Harrison, Linda J.; McAllister, Lindy; McCormack, Jane

    2013-01-01

    Purpose: To undertake a community (nonclinical) study to describe the speech of preschool children who had been identified by parents/teachers as having difficulties "talking and making speech sounds" and compare the speech characteristics of those who had and had not accessed the services of a speech-language pathologist (SLP). Method:…

  18. Relationship between speech motor control and speech intelligibility in children with speech sound disorders.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Pukonen, Margit; Goshulak, Debra; Yu, Vickie Y; Kadis, Darren S; Kroll, Robert; Pang, Elizabeth W; De Nil, Luc F

    2013-01-01

    The current study was undertaken to investigate the impact of speech motor issues on the speech intelligibility of children with moderate to severe speech sound disorders (SSD) within the context of the PROMPT intervention approach. The word-level Children's Speech Intelligibility Measure (CSIM), the sentence-level Beginner's Intelligibility Test (BIT) and tests of speech motor control and articulation proficiency were administered to 12 children (3;11 to 6;7 years) before and after PROMPT therapy. PROMPT treatment was provided for 45 min twice a week for 8 weeks. Twenty-four naïve adult listeners aged 22-46 years judged the intelligibility of the words and sentences. For the CSIM, each time a recorded word was played to the listeners, they were asked to look at a list of 12 words (multiple-choice format) and circle the word they heard, while for the BIT sentences the listeners were asked to write down everything they heard. Words correctly circled (CSIM) or transcribed (BIT) were averaged across three naïve judges to calculate percentage speech intelligibility. Speech intelligibility at both the word and sentence level was significantly correlated with speech motor control, but not articulatory proficiency. Further, the severity of speech motor planning and sequencing issues may be a limiting factor in connected speech intelligibility, highlighting the need to target these issues early and directly in treatment. The reader will be able to: (1) outline the advantages and disadvantages of using word- and sentence-level speech intelligibility tests; (2) describe the impact of speech motor control and articulatory proficiency on speech intelligibility; and (3) describe how speech motor control and speech intelligibility data may provide critical information to aid treatment planning. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Speech and Language Skills of Parents of Children with Speech Sound Disorders

    Science.gov (United States)

    Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy J.; Miscimarra, Lara; Iyengar, Sudha K.; Taylor, H. Gerry

    2007-01-01

    Purpose: This study compared parents with histories of speech sound disorders (SSD) to parents without known histories on measures of speech sound production, phonological processing, language, reading, and spelling. Familial aggregation for speech and language disorders was also examined. Method: The participants were 147 parents of children with…

  20. Subtyping Children with Speech Sound Disorders by Endophenotypes

    Science.gov (United States)

    Lewis, Barbara A.; Avrich, Allison A.; Freebairn, Lisa A.; Taylor, H. Gerry; Iyengar, Sudha K.; Stein, Catherine M.

    2011-01-01

    Purpose: The present study examined associations of 5 endophenotypes (i.e., measurable skills that are closely associated with speech sound disorders and are useful in detecting genetic influences on speech sound production), oral motor skills, phonological memory, phonological awareness, vocabulary, and speeded naming, with 3 clinical criteria…

  1. Dimensions of Early Speech Sound Disorders: A Factor Analytic Study

    Science.gov (United States)

    Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy J.; Stein, Catherine M.; Shriberg, Lawrence D.; Iyengar, Sudha K.; Taylor, H. Gerry

    2006-01-01

    The goal of this study was to classify children with speech sound disorders (SSD) empirically, using factor analytic techniques. Participants were 3- to 7-year-olds enrolled in speech/language therapy (N=185). Factor analysis of an extensive battery of speech and language measures provided support for two distinct factors, representing the skill…

  2. How is harmonicity used in grouping speech sounds?

    Science.gov (United States)

    Darwin, Chris

    2003-04-01

    This paper asks how a common property of voiced speech sounds, harmonicity, is used by the auditory system to improve the perception of speech in the presence of simultaneous competing sounds. We present data from three different experimental paradigms concerned, respectively, with the combination of sounds across different ears, different frequency regions, and different times. The first set of experiments qualifies the conclusion that sounds from the same harmonic series fuse into a single object when presented to different ears. The second imposes limits on the ability of harmonicity to combine information across different frequency regions. The third demonstrates the utility of continuity of pitch (compared with vocal-tract size) in maintaining attention to a single sound source. Elucidating the mechanisms by which we segregate speech from background sounds requires proper consideration both of the structure of the speech signal and of the auditory system through which it passes.

  3. Evaluation of speech intelligibility in short-reverberant sound fields.

    Science.gov (United States)

    Shimokura, Ryota; Matsui, Toshie; Takaki, Yuya; Nishimura, Tadashi; Yamanaka, Toshiaki; Hosoi, Hiroshi

    2014-08-01

    The purpose of this study was to explore the differences in speech intelligibility in short-reverberant sound fields using deteriorated monosyllables. Generated using digital signal processing, deteriorated monosyllables can lack the redundancy of words, and thus may emphasize differences in sound fields in terms of speech clarity. Ten participants without any hearing disorders identified 100 monosyllables convolved with eight impulse responses measured in different short-reverberant sound fields (speech transmission index >0.6 and short reverberation times), and speech recognition scores were compared between normal and deteriorated monosyllables. Deterioration was produced using low-pass filtering (cutoff frequency = 1600 Hz). Speech recognition scores associated with the deteriorated monosyllables were lower than those for the normal monosyllables. In addition, scores were more varied among the different sound fields, although this result was not significant according to an analysis of variance. In contrast, the variation among sound fields was significant for the normal monosyllables. When comparing the intelligibility scores to the acoustic parameters calculated from the eight impulse responses, the speech recognition scores were the highest when the reverberant/direct sound energy ratio (R/D) was balanced. Although our deterioration procedure obscured differences in intelligibility score among the different sound fields, we have established that the R/D is a useful parameter for evaluating speech intelligibility in short-reverberant sound fields. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
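
    The reverberant/direct energy ratio highlighted here can be computed directly from a measured impulse response. A minimal sketch, assuming a conventional few-millisecond window around the direct sound (the abstract does not state the paper's exact window):

```python
import numpy as np

def reverberant_to_direct_ratio(ir, fs, direct_window_ms=5.0):
    """R/D in dB: energy after the direct-sound window over energy within it.
    The 5-ms window is a common convention, not the paper's stated value."""
    onset = int(np.argmax(np.abs(ir)))                 # direct-sound arrival
    split = onset + int(fs * direct_window_ms / 1000)
    direct_energy = np.sum(ir[:split] ** 2)            # direct (plus pre-arrival)
    reverb_energy = np.sum(ir[split:] ** 2)            # everything after
    return 10.0 * np.log10(reverb_energy / direct_energy)
```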

  4. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10-14 years diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor

  5. Auditory grouping in the perception of speech and complex sounds

    Science.gov (United States)

    Darwin, Chris; Rivenez, Marie

    2004-05-01

    This talk will give an overview of experimental work on auditory grouping in speech perception, including the use of grouping cues in the extraction of source-specific auditory information and the tracking of sound sources across time. Work on the perception of unattended speech sounds will be briefly reviewed, and some recent experiments will be described demonstrating the importance of pitch differences in allowing lexical processing of speech in the unattended ear. The relationship between auditory grouping and auditory continuity will also be discussed, together with recent experiments on the role of grouping in the perceptual continuity of complex sounds.

  6. The Articulatory Phonetics of /r/ for Residual Speech Errors.

    Science.gov (United States)

    Boyce, Suzanne E

    2015-11-01

    Effective treatment for children with residual speech errors (RSEs) requires in-depth knowledge of articulatory phonetics, but this level of detail may not be provided as part of typical clinical coursework. At a time when new imaging technologies such as ultrasound continue to inform our clinical understanding of speech disorders, incorporating contemporary work in the basic articulatory sciences into clinical training becomes especially important. This is particularly the case for the speech sound most likely to persist among children with RSEs-the North American English rhotic sound, /r/. The goal of this article is to review important information about articulatory phonetics as it affects children with RSE who present with /r/ production difficulties. The data presented are largely drawn from ultrasound and magnetic resonance imaging studies. This information will be placed in a clinical context by comparing productions of typical adult speakers to successful versus misarticulated productions of two children with persistent /r/ difficulties.

  7. Phonological Awareness and Types of Sound Errors in Preschoolers with Speech Sound Disorders

    Science.gov (United States)

    Preston, Jonathan; Edwards, Mary Louise

    2010-01-01

    Purpose: Some children with speech sound disorders (SSD) have difficulty with literacy-related skills, particularly phonological awareness (PA). This study investigates the PA skills of preschoolers with SSD by using a regression model to evaluate the degree to which PA can be concurrently predicted by types of speech sound errors. Method:…

  8. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean

    2018-03-14

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
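
    The spectral degradation described, a four-channel vocoder, replaces spectral detail with band envelopes. A minimal noise-vocoder sketch under common assumptions (log-spaced band edges, Hilbert envelopes); the study's channel boundaries and envelope smoothing may differ:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=4, lo=100.0, hi=7000.0):
    """Four-channel noise vocoder: band-split the input, extract each band's
    amplitude envelope, and use it to modulate band-limited noise.
    Assumes fs is well above 2 * hi (e.g. 16 kHz)."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo_f, hi_f in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo_f, hi_f], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                # amplitude envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * noise
    return out / np.max(np.abs(out))               # normalize to +/-1
```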

  10. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia.

    Science.gov (United States)

    Preston, Jonathan L; Leece, Megan C; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are erroneous. The additional feedback may assist children with childhood apraxia of speech (CAS) in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10-14 years diagnosed with CAS attended 16 h of speech therapy over a 2-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns and improved articulation of

  11. Linkage of Speech Sound Disorder to Reading Disability Loci

    Science.gov (United States)

    Smith, Shelley D.; Pennington, Bruce F.; Boada, Richard; Shriberg, Lawrence D.

    2005-01-01

    Background: Speech sound disorder (SSD) is a common childhood disorder characterized by developmentally inappropriate errors in speech production that greatly reduce intelligibility. SSD has been found to be associated with later reading disability (RD), and there is also evidence for both a cognitive and etiological overlap between the two…

  12. Non-speech oral motor treatment for children with developmental speech sound disorders.

    Science.gov (United States)

    Lee, Alice S-Y; Gibbon, Fiona E

    2015-03-25

    Children with developmental speech sound disorders have difficulties in producing the speech sounds of their native language. These speech difficulties could be due to structural, sensory or neurophysiological causes (e.g. hearing impairment), but more often the cause of the problem is unknown. One treatment approach used by speech-language therapists/pathologists is non-speech oral motor treatment (NSOMT). NSOMTs are non-speech activities that aim to stimulate or improve speech production and treat specific speech errors. Examples include exercises such as smiling, pursing, blowing into horns, blowing bubbles, and lip massage to target lip mobility for the production of speech sounds involving the lips, such as /p/, /b/, and /m/. The efficacy of this treatment approach is controversial, and evidence regarding the efficacy of NSOMTs needs to be examined. To assess the efficacy of non-speech oral motor treatment (NSOMT) in treating children with developmental speech sound disorders who have speech errors. In April 2014 we searched the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE (R) and Ovid MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Education Resources Information Center (ERIC), PsycINFO and 11 other databases. We also searched five trial and research registers, checked the reference lists of relevant titles identified by the search and contacted researchers to identify other possible published and unpublished studies. Randomised and quasi-randomised controlled trials that compared (1) NSOMT versus placebo or control; and (2) NSOMT as an adjunct to speech intervention versus speech intervention alone, for children aged three to 16 years with developmental speech sound disorders, as judged by a speech and language therapist. Individuals with an intellectual disability (e.g. Down syndrome) or a physical disability were not excluded. The Trials Search Co-ordinator of the Cochrane Developmental, Psychosocial and

  13. The Clinical Practice of Speech and Language Therapists with Children with Phonologically Based Speech Sound Disorders

    Science.gov (United States)

    Oliveira, Carla; Lousada, Marisa; Jesus, Luis M. T.

    2015-01-01

    Children with speech sound disorders (SSD) represent a large number of speech and language therapists' caseloads. The intervention with children who have SSD can involve different therapy approaches, and these may be articulatory or phonologically based. Some international studies reveal a widespread application of articulatory based approaches in…

  14. Between-Word Simplification Patterns in the Continuous Speech of Children with Speech Sound Disorders

    Science.gov (United States)

    Klein, Harriet B.; Liu-Shea, May

    2009-01-01

    Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…

  15. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/, a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  16. Integration of letters and speech sounds in the human brain

    NARCIS (Netherlands)

    van Atteveldt, Nienke; Formisano, Elia; Goebel, Rainer; Blomert, Leo

    2004-01-01

    Most people acquire literacy skills with remarkable ease, even though the human brain is not evolutionarily adapted to this relatively new cultural phenomenon. Associations between letters and speech sounds form the basis of reading in alphabetic scripts. We investigated the functional neuroanatomy

  17. Severe Speech Sound Disorders: An Integrated Multimodal Intervention

    Science.gov (United States)

    King, Amie M.; Hengst, Julie A.; DeThorne, Laura S.

    2013-01-01

    Purpose: This study introduces an integrated multimodal intervention (IMI) and examines its effectiveness for the treatment of persistent and severe speech sound disorders (SSD) in young children. The IMI is an activity-based intervention that focuses simultaneously on increasing the "quantity" of a child's meaningful productions of target words…

  18. What Influences Literacy Outcome in Children with Speech Sound Disorder?

    Science.gov (United States)

    Peterson, Robin L.; Pennington, Bruce F.; Shriberg, Lawrence D.; Boada, Richard

    2009-01-01

    Purpose: In this study, the authors evaluated literacy outcome in children with histories of speech sound disorder (SSD) who were characterized along 2 dimensions: broader language function and persistence of SSD. In previous studies, authors have demonstrated that each dimension relates to literacy but have not disentangled their effects.…

  19. Treatment Decisions for Children with Speech-Sound Disorders

    Science.gov (United States)

    Kamhi, Alan G.

    2006-01-01

    PURPOSE: In this article, I consider how research, clinical expertise, client values, a clinician's theoretical perspective, and service delivery considerations affect the decisions that clinicians make to treat children with speech-sound disorders (SSD). METHOD: After reviewing the research on phonological treatment, I discuss how a clinician's…

  20. Correlates of Phonological Awareness in Preschoolers with Speech Sound Disorders

    Science.gov (United States)

    Rvachew, Susan; Grawburg, Meghann

    2006-01-01

    Purpose: The purpose of this study was to examine the relationships among variables that may contribute to poor phonological awareness (PA) skills in preschool-aged children with speech sound disorders (SSD). Method: Ninety-five 4- and 5-year-old children with SSD were assessed during the spring of their prekindergarten year. Linear structural…

  1. Phonological Processing and Reading in Children with Speech Sound Disorders

    Science.gov (United States)

    Rvachew, Susan

    2007-01-01

    Purpose: To examine the relationship between phonological processing skills prior to kindergarten entry and reading skills at the end of 1st grade, in children with speech sound disorders (SSD). Method: The participants were 17 children with SSD and poor phonological processing skills (SSD-low PP), 16 children with SSD and good phonological…

  2. Intervention Efficacy and Intensity for Children with Speech Sound Disorder

    Science.gov (United States)

    Allen, Melissa M.

    2013-01-01

    Purpose: Clinicians do not have an evidence base they can use to recommend optimum intervention intensity for preschool children who present with speech sound disorder (SSD). This study examined the effect of dose frequency on phonological performance and the efficacy of the multiple oppositions approach. Method: Fifty-four preschool children with…

  3. Left Lateralized Enhancement of Orofacial Somatosensory Processing Due to Speech Sounds

    Science.gov (United States)

    Ito, Takayuki; Johns, Alexis R.; Ostry, David J.

    2013-01-01

    Purpose: Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory…

  4. Phonetic and phonemic acquisition : Normative data in English and Dutch speech sound development

    NARCIS (Netherlands)

    Priester, G. H.; Post, W. J.; Goorhuis-Brouwer, S. M.

    Objective: Comparison of normative data in English and Dutch speech sound development in young children. Research questions were: Which normative data are present concerning speech sound development in children between two and six years of age? How are the speech sounds examined? What are

  5. Correlates of Spelling Abilities in Children with Early Speech Sound Disorders.

    Science.gov (United States)

    Lewis, Barbara A.; Freebairn, Lisa A.; Taylor, H. Gerry

    2002-01-01

    Examines the correlates of spelling impairment in children with histories of early speech sound disorders. Reveals that children with preschool speech sound and language problems become poorer spellers at school age than did children with preschool speech sound disorders only. Concludes that familial aggregation of spelling disorders suggests a…

  6. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and the natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us: the acoustical signals from machines, whether carried through the air or as vibration on the machines themselves, can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose the problems of machines.
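
    The two steps named in this abstract, decomposition into IMFs followed by the Hilbert transform to obtain instantaneous frequency, look roughly like this on a toy signal. The third-party `PyEMD` package is assumed here as the EMD implementation; the method itself is independent of any particular library:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed third-party package implementing EMD

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
# Toy nonstationary signal: a chirp plus a low-frequency component.
x = (np.sin(2 * np.pi * (100 * t + 200 * t ** 2))
     + 0.5 * np.sin(2 * np.pi * 25 * t))

imfs = EMD().emd(x)                      # decompose into IMFs

for i, imf in enumerate(imfs):
    analytic = hilbert(imf)              # analytic signal of each IMF
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous freq in Hz
    print(f"IMF {i}: mean instantaneous frequency {inst_freq.mean():.1f} Hz")
```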

  7. Implications of diadochokinesia in children with speech sound disorder.

    Science.gov (United States)

    Wertzner, Haydée Fiszbein; Pagan-Neves, Luciana de Oliveira; Alves, Renata Ramos; Barrozo, Tatiane Faria

    2013-01-01

    To verify the performance of children with and without speech sound disorder in oral motor skills measured by oral diadochokinesia according to age and gender, and to compare the results of two different methods of analysis. Participants were 72 subjects aged from 5 years to 7 years and 11 months, divided into four subgroups according to the presence of speech sound disorder (Study Group and Control Group) and age (6 years and 5 months). Diadochokinesia skills were assessed by the repetition of the sequences 'pa', 'ta', 'ka' and 'pataka', measured both manually and by the software Motor Speech Profile®. Gender was statistically different for both groups, but it did not influence the number of sequences per second produced. Correlation between the number of sequences per second and age was observed for all sequences (except for 'ka') only for the control group children. Comparison between groups did not indicate differences between the number of sequences per second and age. Results presented strong agreement between the values of oral diadochokinesia measured manually and by MSP. This research demonstrated the importance of using different methods of analysis in the functional evaluation of oro-motor processing aspects of children with speech sound disorder and evidenced the oro-motor difficulties of children younger than eight years old.
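
    The measure used here, sequences per second, amounts to counting syllable pulses over time. A rough automatic sketch using envelope peak-picking; the thresholds are illustrative only, and clinical measurement (or the Motor Speech Profile) uses more careful criteria:

```python
import numpy as np
from scipy.signal import find_peaks

def ddk_rate(x, fs, min_syllable_gap_s=0.1):
    """Estimate diadochokinetic rate (syllables/second) by counting peaks
    in the smoothed amplitude envelope of a 'pa'/'ta'/'ka' recording.
    For 'pataka', divide the result by 3 to get sequences per second."""
    env = np.abs(x)
    win = int(0.02 * fs)                               # 20-ms moving average
    env = np.convolve(env, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(env,
                          height=0.3 * env.max(),      # ignore weak bumps
                          distance=int(min_syllable_gap_s * fs))
    return len(peaks) / (len(x) / fs)
```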

  8. Prevalence and Predictors of Persistent Speech Sound Disorder at Eight Years Old: Findings from a Population Cohort Study

    Science.gov (United States)

    Wren, Yvonne; Miller, Laura L.; Peters, Tim J.; Emond, Alan; Roulstone, Sue

    2016-01-01

    Purpose: The purpose of this study was to determine prevalence and predictors of persistent speech sound disorder (SSD) in children aged 8 years after disregarding children presenting solely with common clinical distortions (i.e., residual errors). Method: Data from the Avon Longitudinal Study of Parents and Children (Boyd et al., 2012) were used.…

  9. Morphosyntax and phonological awareness in children with speech sound disorders.

    Science.gov (United States)

    Mortimer, Jennifer; Rvachew, Susan

    2008-12-01

    The goals of the current study were to examine concurrent and longitudinal relationships of expressive morphosyntax and phonological awareness in a group of children with speech sound disorders. Tests of phonological awareness were administered to 38 children at the end of their prekindergarten and kindergarten years. Speech samples were elicited and analyzed to obtain a set of expressive morphosyntax variables. Finite verb morphology and inflectional suffix use by prekindergarten children were found to predict significant unique variance in change in phonological awareness a year later. These results are consistent with previous research showing finite verb morphology to be a sensitive indicator of language impairment in English.

  10. Polysyllable Speech Accuracy and Predictors of Later Literacy Development in Preschool Children with Speech Sound Disorders

    Science.gov (United States)

    Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen

    2017-01-01

    Purpose: The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables--words of three or more syllables--are important to consider because unlike…

  11. Audio signal recognition for speech, music, and environmental sounds

    Science.gov (United States)

    Ellis, Daniel P. W.

    2003-10-01

    Human listeners are very good at all kinds of sound detection and identification tasks, from understanding heavily accented speech to noticing a ringing phone underneath music playing at full blast. Efforts to duplicate these abilities on computer have been particularly intense in the area of speech recognition, and it is instructive to review which approaches have proved most powerful, and which major problems still remain. The features and models developed for speech have found applications in other audio recognition tasks, including musical signal analysis, and the problems of analyzing the general "ambient" audio that might be encountered by an auditorily endowed robot. This talk will briefly review statistical pattern recognition for audio signals, giving examples in several of these domains. Particular emphasis will be given to common aspects and lessons learned.

  12. A Pilot Investigation of Speech Sound Disorder Intervention Delivered by Telehealth to School-Age Children

    Directory of Open Access Journals (Sweden)

    Sue Grogan-Johnson

    2011-05-01

    This article describes a school-based telehealth service delivery model and reports the outcomes of school-age students with speech sound disorders in a rural Ohio school district. Speech therapy using computer-based speech sound intervention materials was provided either by live interactive videoconferencing (telehealth) or by conventional side-by-side intervention. Progress was measured using pre- and post-intervention scores on the Goldman-Fristoe Test of Articulation-2 (Goldman & Fristoe, 2002). Students in both service delivery models made significant improvements in speech sound production, with students in the telehealth condition demonstrating greater mastery of their Individual Education Plan (IEP) goals. Live interactive videoconferencing thus appears to be a viable method for delivering intervention for speech sound disorders to children in a rural, public school setting. Keywords: telehealth, telerehabilitation, videoconferencing, speech sound disorder, speech therapy, speech-language pathology; E-Helper

  13. Cross-modal enhancement of the MMN to speech-sounds indicates early and automatic integration of letters and speech-sounds

    NARCIS (Netherlands)

    Froyen, Dries; Van Atteveldt, Nienke; Bonte, Milene L; Blomert, Leo

    2008-01-01

    Recently brain imaging evidence indicated that letter/speech-sound integration, necessary for establishing fluent reading, takes place in auditory association areas and that the integration is influenced by stimulus onset asynchrony (SOA) between the letter and the speech-sound. In the present

  14. Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction.

    Science.gov (United States)

    Ricketts, Todd A; Hornsby, Benjamin W Y

    2005-05-01

    This brief report discusses the effect of digital noise reduction (DNR) processing on aided speech recognition and sound quality measures in 14 adults fitted with a commercial hearing aid. Measures of speech recognition and sound quality were obtained in two different speech-in-noise conditions (71 dBA speech, +6 dB SNR and 75 dBA speech, +1 dB SNR). The results revealed that the presence or absence of DNR processing did not impact speech recognition in noise (either positively or negatively). Paired comparisons of sound quality for the same speech in noise signals, however, revealed a strong preference for DNR processing. These data suggest that at least one implementation of DNR processing is capable of providing improved sound quality, for speech in noise, in the absence of improved speech recognition.
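
    The two test conditions (+6 and +1 dB SNR) imply scaling the noise against a fixed speech level. A small sketch of mixing speech and noise at a target SNR (RMS-based; calibration to the dBA presentation levels is a separate step handled by the test setup):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then return the mixture (as in the +6 and +1 dB SNR conditions)."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)
```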

  15. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  16. Speech-Language Pathologists' Assessment Practices for Children with Suspected Speech Sound Disorders: Results of a National Survey

    Science.gov (United States)

    Skahan, Sarah M.; Watson, Maggie; Lof, Gregory L.

    2007-01-01

    Purpose: This study examined assessment procedures used by speech-language pathologists (SLPs) when assessing children suspected of having speech sound disorders (SSD). This national survey also determined the information participants obtained from clients' speech samples, evaluation of non-native English speakers, and time spent on assessment.…

  17. Discrimination of speech and non-speech sounds following theta-burst stimulation of the motor cortex

    Directory of Open Access Journals (Sweden)

    Jack Charles Rogers

    2014-07-01

    Perceiving speech engages parts of the motor system involved in speech production. The role of the motor cortex in speech perception has been demonstrated using low-frequency repetitive transcranial magnetic stimulation (rTMS) to suppress motor excitability in the lip representation and disrupt discrimination of lip-articulated speech sounds (Möttönen & Watkins, 2009). Another form of rTMS, continuous theta-burst stimulation (cTBS), can produce longer-lasting disruptive effects following a brief train of stimulation. We investigated the effects of cTBS on motor excitability and discrimination of speech and non-speech sounds. cTBS was applied for 40 seconds over either the hand or the lip representation of motor cortex. Motor-evoked potentials recorded from the lip and hand muscles in response to single pulses of TMS revealed no measurable change in motor excitability due to cTBS. This failure to replicate previous findings may reflect the unreliability of measurements of motor excitability related to inter-individual variability. We also measured the effects of cTBS on a listener's ability to discriminate: (1) lip-articulated speech sounds from sounds not articulated by the lips ('ba' vs. 'da'); (2) two speech sounds not articulated by the lips ('ga' vs. 'da'); and (3) non-speech sounds produced by the hands ('claps' vs. 'clicks'). Discrimination of lip-articulated speech sounds was impaired between 20 and 35 minutes after cTBS over the lip motor representation. Specifically, discrimination of across-category ba–da sounds presented with an 800-ms inter-stimulus interval was reduced to chance level performance. This effect was absent for speech sounds that do not require the lips for articulation and non-speech sounds. Stimulation over the hand motor representation did not affect discrimination of speech or non-speech sounds. These findings show that stimulation of the lip motor representation disrupts discrimination of speech…

  18. Toward a Model of Pediatric Speech Sound Disorders (SSD) for Differential Diagnosis and Therapy Planning

    NARCIS (Netherlands)

    Terband, Hayo; Maassen, Ben; Maas, Edwin; van Lieshout, Pascal

    2016-01-01

    The classification and differentiation of pediatric speech sound disorders (SSD) is one of the main questions in the field of speech and language pathology. Terms for classifying childhood SSD and motor speech disorders (MSD) refer to speech production processes, and a variety of methods of…

  19. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome.

    Science.gov (United States)

    Engineer, Crystal T; Rahebi, Kimiya C; Borland, Michael S; Buell, Elizabeth P; Centanni, Tracy M; Fink, Melyssa K; Im, Kwok W; Wilson, Linda G; Kilgard, Michael P

    2015-11-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, responded slower, and were less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Does seeing an Asian face make speech sound more accented?

    Science.gov (United States)

    Zheng, Yi; Samuel, Arthur G

    2017-08-01

    Prior studies have reported that seeing an Asian face makes American English sound more accented. The current study investigates whether this effect is perceptual, or if it instead occurs at a later decision stage. We first replicated the finding that showing static Asian and Caucasian faces can shift people's reports about the accentedness of speech accompanying the pictures. When we changed the static pictures to dubbed videos, reducing the demand characteristics, the shift in reported accentedness largely disappeared. By including unambiguous items along with the original ambiguous items, we introduced a contrast bias and actually reversed the shift, with the Asian-face videos yielding lower judgments of accentedness than the Caucasian-face videos. By changing to a mixed rather than blocked design, so that the ethnicity of the videos varied from trial to trial, we eliminated the difference in accentedness rating. Finally, we tested participants' perception of accented speech using the selective adaptation paradigm. After establishing that an auditory-only accented adaptor shifted the perception of how accented test words are, we found that no such adaptation effect occurred when the adapting sounds relied on visual information (Asian vs. Caucasian videos) to influence the accentedness of an ambiguous auditory adaptor. Collectively, the results demonstrate that visual information can affect the interpretation, but not the perception, of accented speech.

  1. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia

    OpenAIRE

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and sy...

  2. The role of strength training in speech sound disorders.

    Science.gov (United States)

    Clark, Heather M

    2008-11-01

    Strengthening of the articulators is commonly used to help children improve sound production accuracy, even though the relationship between weakness and speech function remains unclear. Clinicians considering the use of strength training must weigh both the theoretical foundations and the evidence supporting this practice. Widely accepted principles of strength training are available to guide the evaluation of strength training programs. Training specificity requires that exercises closely match the targeted functional outcome. The exercises must overload the muscles beyond their typical use, and this overload must be systematically progressed over time. Finally, the strength training program must incorporate adequate time between exercise sessions to allow for recovery. The available research does not support the position that nonspeech oral motor exercises (NSOMEs) targeting increased strength are beneficial for improving speech accuracy. An example of a speech-based strengthening program is provided to illustrate how appropriate training principles could lead to more positive outcomes. A much larger body of research is needed to determine the conditions under which strength training is most appropriately applied in the treatment of childhood speech disorders.

  3. Attention fine-tunes auditory-motor processing of speech sounds.

    Science.gov (United States)

    Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E

    2014-03-12

    The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex contributes also to speech processing. For example, stimulation of the motor lip representation influences specifically discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation modulated specifically the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.

  4. Speech, Sound and Music Processing: Embracing Research in India

    DEFF Research Database (Denmark)

    The Computer Music Modeling and Retrieval (CMMR) 2011 conference was the 8th event of this international series, and the first that took place outside Europe. Since its beginnings in 2003, this conference has been co-organized by the Laboratoire de Mécanique et d'Acoustique in Marseille, France, and the Department of Architecture, Design and Media Technology (ad:mt), University of Aalborg, Esbjerg, Denmark, and has taken place in France, Italy, Spain, and Denmark. Historically, CMMR offers a cross-disciplinary overview of current music information retrieval and sound modeling activities and related topics, here spanning music information retrieval, sound analysis, synthesis and perception, and speech processing of Indian languages. The Indian focus provided many interesting topics related to the Raga, from a music theory point of view to the instruments and the specific ornamentation of Indian classical singing. Another particular…

  5. Acoustical Characteristics of Mastication Sounds: Application of Speech Analysis Techniques

    Science.gov (United States)

    Brochetti, Denise

    Food scientists have used acoustical methods to study characteristics of mastication sounds in relation to food texture. However, a model for analysis of the sounds has not been identified, and reliability of the methods has not been reported. Therefore, speech analysis techniques were applied to mastication sounds, and variation in measures of the sounds was examined. To meet these objectives, two experiments were conducted. In the first experiment, a digital sound spectrograph generated waveforms and wideband spectrograms of sounds by 3 adult subjects (1 male, 2 females) for initial chews of food samples differing in hardness and fracturability. Acoustical characteristics were described and compared. For all sounds, formants appeared in the spectrograms, and energy occurred across a 0 to 8000-Hz range of frequencies. Bursts characterized waveforms for peanut, almond, raw carrot, ginger snap, and hard candy. Duration and amplitude of the sounds varied with the subjects. In the second experiment, the spectrograph was used to measure the duration, amplitude, and formants of sounds for the initial 2 chews of cylindrical food samples (raw carrot, teething toast) differing in diameter (1.27, 1.90, 2.54 cm). Six adult subjects (3 males, 3 females) having normal occlusions and temporomandibular joints chewed the samples between the molar teeth and with the mouth open. Ten repetitions per subject were examined for each food sample. Analysis of estimates of variation indicated an inconsistent intrasubject variation in the acoustical measures. Food type and sample diameter also affected the estimates, indicating the variable nature of mastication. Generally, intrasubject variation was greater than intersubject variation. Analysis of ranks of the data indicated that the effect of sample diameter on the acoustical measures was inconsistent and depended on the subject and type of food. If inferences are to be made concerning food texture from acoustical measures of mastication…
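
    To make the measures concrete, the following hedged sketch (not the study's spectrograph software; the filename and amplitude threshold are illustrative) computes duration, peak amplitude, and a wideband spectrogram of a recorded chew with scipy.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import spectrogram

        rate, x = wavfile.read("chew.wav")           # hypothetical mono recording
        x = x.astype(float) / np.max(np.abs(x))      # normalize amplitude

        # Duration: span between first and last samples above a noise floor.
        above = np.where(np.abs(x) > 0.05)[0]
        duration_ms = 1000.0 * (above[-1] - above[0]) / rate
        peak_amplitude = np.abs(x).max()

        # Wideband spectrogram: a short (5 ms) window favors time resolution.
        f, t, Sxx = spectrogram(x, fs=rate, nperseg=int(0.005 * rate))
        print(f"duration: {duration_ms:.1f} ms, peak amplitude: {peak_amplitude:.2f}")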

  6. Preliteracy speech sound production skill and later literacy outcomes: a study using the Templin Archive.

    Science.gov (United States)

    Overby, Megan S; Trainin, Guy; Smit, Ann Bosma; Bernthal, John E; Nelson, Ron

    2012-01-01

    This archival study examined the relationship between the speech sound production skill of kindergarten children and literacy outcomes in Grades 1-3 in a data set where most children's vocabulary skills were within normal limits, speech therapy was not provided until 2nd grade, and phonological awareness instruction was discouraged at the time data were collected. Data were accessed from the Templin Archive (2004), and the speech sound production skill of 272 kindergartners was examined relative to literacy outcomes in 1st and 2nd grade (reading) and 3rd grade (spelling). Kindergartners in the 7th percentile for speech sound production skill scored more poorly in 1st- and 2nd-grade reading and 3rd-grade spelling than did kindergartners with average speech sound production skill; kindergartners in the 98th percentile achieved superior literacy skills compared to the mean. Phonological awareness mediated the effects of speech sound production skill on reading and spelling; vocabulary did not account for any unique variance. Speech sound disorders appear to be an overt manifestation of a complex interaction among variables influencing literacy skills, including nonlanguage cognition, vocabulary, letter knowledge, and phonological awareness. These interrelationships hold across the range of speech sound production skill, as children with superior speech sound production skill experience superior literacy outcomes.
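
    As a hedged illustration of the mediation logic reported here (synthetic data standing in for the archival scores), the indirect effect of kindergarten speech sound production on later reading via phonological awareness can be estimated as a product of regression coefficients:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 272                                       # matches the sample size
        speech = rng.normal(size=n)                   # speech sound production
        pa = 0.6 * speech + rng.normal(size=n)        # phonological awareness
        reading = 0.5 * pa + 0.05 * speech + rng.normal(size=n)

        a = sm.OLS(pa, sm.add_constant(speech)).fit().params[1]            # X -> M
        fit = sm.OLS(reading, sm.add_constant(np.column_stack([speech, pa]))).fit()
        c_prime, b = fit.params[1], fit.params[2]                          # direct, M -> Y
        print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")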

  7. Categorization of speech sounds by Norwegian/English bilinguals

    Science.gov (United States)

    Dypvik, Audny T.; Slawinski, Elzbieta B.

    2005-04-01

    Bilinguals who learned English late in life (late bilinguals), as opposed to those who learned English early in life (early bilinguals), differ in their perception of phonemic distinctions. Age of acquisition of a second language, as well as depth of immersion in English, influences perceptual differences in phonemic contrasts between monolinguals and bilinguals, with consequences for speech production. The phonemes /v/ and /w/ belong to the same category in Norwegian, rendering them perceptually indistinguishable to the native Norwegian listener. In English, /v/ and /w/ occupy two categories. Psychoacoustic testing on this phonemic distinction in the current study will compare the perceptual abilities of monolingual English and bilingual Norwegian/English listeners. Preliminary data indicate that Norwegian/English bilinguals demonstrate varying perceptual abilities for this phonemic distinction. A series of speech sounds has been generated by an articulatory synthesizer, the Tube Resonance Model, along a continuum between the postures of /v/ and /w/. They will be presented binaurally over headphones in an anechoic chamber at a sound pressure level of 75 dB. Differences in the perception of the categorical boundary between /v/ and /w/ among English monolinguals and Norwegian/English bilinguals will be further delineated.

  8. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  9. Auditory feedback perturbation in children with developmental speech sound disorders.

    Science.gov (United States)

    Terband, Hayo; van Brenk, Frits; van Doornik-van der Zee, Anniek

    2014-01-01

    Several studies indicate a close relation between auditory and speech motor functions in children with speech sound disorders (SSD). The aim of this study was to investigate the ability to compensate and adapt for perturbed auditory feedback in children with SSD compared to age-matched normally developing children. Seventeen normally developing children aged 4.1-8.7 years (mean=5.5, SD=1.4) and 11 children with SSD aged 3.9-7.5 years (mean=5.1, SD=1.0) participated in the study. Auditory feedback was perturbed by real-time shifting the first and second formant of the vowel /e/ during the production of CVC words in a five-step paradigm (practice/familiarization; start/baseline; ramp; hold; end/release). At the group level, the normally developing children were better able to compensate and adapt, adjusting their formant frequencies in the direction opposite to the perturbation, while the group of children with SSD followed (amplifying) the perturbation. However, large individual differences underlie these group results. Furthermore, strong correlations were found between the amount of compensation and performance on oral motor movement and non-word repetition tasks. Results suggested that while most children with SSD can detect incongruencies in auditory feedback and can adapt their target representations, they are unable to compensate for perturbed auditory feedback. These findings suggest that impaired auditory-motor integration may play a key role in SSD. The reader will be able to: (1) describe the potential role of auditory feedback control in developmental speech disorders (SSD); (2) identify the neural control subsystems involved in feedback based speech motor control; (3) describe the differences between compensation and adaptation for perturbed auditory feedback; (4) explain why auditory-motor integration may play a key role in SSD. Copyright © 2014 Elsevier Inc. All rights reserved.
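
    The five-step paradigm is easiest to see as a per-trial schedule of feedback shifts. The sketch below is illustrative only: the trial counts and maximum shift size are assumptions, not the study's parameters.

        import numpy as np

        def perturbation_schedule(n_practice=10, n_base=20, n_ramp=20,
                                  n_hold=40, n_end=20, max_shift_cents=300):
            """Formant shift (in cents) applied to auditory feedback on each trial."""
            return np.concatenate([
                np.zeros(n_practice + n_base),            # no alteration
                np.linspace(0, max_shift_cents, n_ramp),  # shift grows gradually
                np.full(n_hold, max_shift_cents),         # constant full shift
                np.zeros(n_end),                          # shift removed (release)
            ])

        shifts = perturbation_schedule()
        # Compensation = produced formants moving opposite in sign to `shifts`;
        # adaptation = that opposing change persisting into the release phase.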

  10. Initial consonant deletion in bilingual Spanish-English-speaking children with speech sound disorders.

    Science.gov (United States)

    Fabiano-Smith, Leah; Cuzner, Suzanne Lea

    2018-01-01

    The purpose of this study was to utilize a theoretical model of bilingual speech sound production as a framework for analyzing the speech of bilingual children with speech sound disorders. In order to distinguish speech difference from speech disorder, we examined between-language interaction on initial consonant deletion, an error pattern found cross-linguistically in the speech of children with speech sound disorders. Thirteen monolingual English-speaking and bilingual Spanish- and English-speaking preschoolers with speech sound disorders were audio-recorded during a single word picture-naming task and their recordings were phonetically transcribed. Initial consonant deletion errors were examined both quantitatively and qualitatively. An analysis of cross-linguistic effects and an analysis of phonemic complexity were performed. Monolingual English-speaking children exhibited initial consonant deletion at a significantly lower rate than bilingual children in their Spanish productions; however, no other quantitative differences were found across groups or languages. Qualitative differences yielded between-language interaction in the error patterns of bilingual children. Phonemic complexity appeared to play a role in initial consonant deletion. Evidence from the speech of bilingual children with speech sound disorders supports analysing bilingual speech using a cross-linguistic framework. Both theoretical and clinical implications are discussed.

  11. Multilingual Aspects of Speech Sound Disorders in Children. Communication Disorders across Languages

    Science.gov (United States)

    McLeod, Sharynne; Goldstein, Brian

    2012-01-01

    Multilingual Aspects of Speech Sound Disorders in Children explores both multilingual and multicultural aspects of children with speech sound disorders. The 30 chapters have been written by 44 authors from 16 different countries about 112 languages and dialects. The book is designed to translate research into clinical practice. It is divided into…

  12. Dynamic Assessment of Phonological Awareness for Children with Speech Sound Disorders

    Science.gov (United States)

    Gillam, Sandra Laing; Ford, Mikenzi Bentley

    2012-01-01

    The current study was designed to examine the relationships between performance on a nonverbal phoneme deletion task administered in a dynamic assessment format with performance on measures of phoneme deletion, word-level reading, and speech sound production that required verbal responses for school-age children with speech sound disorders (SSDs).…

  13. Neural Correlates of Phonological Processing in Speech Sound Disorder: A Functional Magnetic Resonance Imaging Study

    Science.gov (United States)

    Tkach, Jean A.; Chen, Xu; Freebairn, Lisa A.; Schmithorst, Vincent J.; Holland, Scott K.; Lewis, Barbara A.

    2011-01-01

    Speech sound disorders (SSD) are the largest group of communication disorders observed in children. One explanation for these disorders is that children with SSD fail to form stable phonological representations when acquiring the speech sound system of their language due to poor phonological memory (PM). The goal of this study was to examine PM in…

  14. The Prevalence of Stuttering, Voice, and Speech-Sound Disorders in Primary School Students in Australia

    Science.gov (United States)

    McKinnon, David H.; McLeod, Sharynne; Reilly, Sheena

    2007-01-01

    Purpose: The aims of this study were threefold: to report teachers' estimates of the prevalence of speech disorders (specifically, stuttering, voice, and speech-sound disorders); to consider correspondence between the prevalence of speech disorders and gender, grade level, and socioeconomic status; and to describe the level of support provided to…

  15. Minimal Pair Distinctions and Intelligibility in Preschool Children with and without Speech Sound Disorders

    Science.gov (United States)

    Hodge, Megan M.; Gotzke, Carrie L.

    2011-01-01

    Listeners' identification of young children's productions of minimally contrastive words and predictive relationships between accurately identified words and intelligibility scores obtained from a 100-word spontaneous speech sample were determined for 36 children with typically developing speech (TDS) and 36 children with speech sound disorders…

  16. Speech sound disorder at 4 years: prevalence, comorbidities, and predictors in a community cohort of children.

    Science.gov (United States)

    Eadie, Patricia; Morgan, Angela; Ukoumunne, Obioha C; Ttofari Eecen, Kyriaki; Wake, Melissa; Reilly, Sheena

    2015-06-01

    The epidemiology of preschool speech sound disorder is poorly understood. Our aims were to determine: the prevalence of idiopathic speech sound disorder; the comorbidity of speech sound disorder with language and pre-literacy difficulties; and the factors contributing to speech outcome at 4 years. One thousand four hundred and ninety-four participants from an Australian longitudinal cohort completed speech, language, and pre-literacy assessments at 4 years. Prevalence of speech sound disorder (SSD) was defined by standard score performance of ≤79 on a speech assessment. Logistic regression examined predictors of SSD within four domains: child and family; parent-reported speech; cognitive-linguistic; and parent-reported motor skills. At 4 years the prevalence of speech disorder in an Australian cohort was 3.4%. Comorbidity with SSD was 40.8% for language disorder and 20.8% for poor pre-literacy skills. Sex, maternal vocabulary, socio-economic status, and family history of speech and language difficulties predicted SSD, as did 2-year speech, language, and motor skills. Together these variables provided good discrimination of SSD (area under the curve=0.78). This is the first epidemiological study to demonstrate prevalence of SSD at 4 years of age that was consistent with previous clinical studies. Early detection of SSD at 4 years should focus on family variables and speech, language, and motor skills measured at 2 years. © 2014 Mac Keith Press.
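
    As a hedged sketch of how such discrimination is quantified (entirely synthetic data; the real predictors were family variables and 2-year speech, language, and motor measures), logistic regression and the area under the ROC curve can be computed as follows:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 1494                                       # cohort size in the study
        X = rng.normal(size=(n, 4))                    # stand-ins for predictors
        logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 3.3    # low base rate, like 3.4%
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated SSD outcome

        model = LogisticRegression().fit(X, y)
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"area under the curve = {auc:.2f}")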

  17. Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children with Histories of Speech Sound Disorders

    Science.gov (United States)

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2013-01-01

    Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…

  18. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds.

    Science.gov (United States)

    Engineer, Crystal T; Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P

    Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. International aspirations for speech-language pathologists' practice with multilingual children with speech sound disorders: development of a position paper.

    Science.gov (United States)

    McLeod, Sharynne; Verdon, Sarah; Bowen, Caroline

    2013-01-01

    A major challenge for the speech-language pathology profession in many cultures is to address the mismatch between the "linguistic homogeneity of the speech-language pathology profession and the linguistic diversity of its clientele" (Caesar & Kohler, 2007, p. 198). This paper outlines the development of the Multilingual Children with Speech Sound Disorders: Position Paper created to guide speech-language pathologists' (SLPs') facilitation of multilingual children's speech. An international expert panel was assembled comprising 57 researchers (SLPs, linguists, phoneticians, and speech scientists) with knowledge about multilingual children's speech, or children with speech sound disorders. Combined, they had worked in 33 countries and used 26 languages in professional practice. Fourteen panel members met for a one-day workshop to identify key points for inclusion in the position paper. Subsequently, 42 additional panel members participated online to contribute to drafts of the position paper. A thematic analysis was undertaken of the major areas of discussion using two data sources: (a) face-to-face workshop transcript (133 pages) and (b) online discussion artifacts (104 pages). Finally, a moderator with international expertise in working with children with speech sound disorders facilitated the incorporation of the panel's recommendations. The following themes were identified: definitions, scope, framework, evidence, challenges, practices, and consideration of a multilingual audience. The resulting position paper contains guidelines for providing services to multilingual children with speech sound disorders (http://www.csu.edu.au/research/multilingual-speech/position-paper). The paper is structured using the International Classification of Functioning, Disability and Health: Children and Youth Version (World Health Organization, 2007) and incorporates recommendations for (a) children and families, (b) SLPs' assessment and intervention, (c) SLPs' professional…

  20. Polysyllable Speech Accuracy and Predictors of Later Literacy Development in Preschool Children With Speech Sound Disorders.

    Science.gov (United States)

    Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen

    2017-07-12

    The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables (words of three or more syllables) are important to consider because, unlike monosyllables, polysyllables have been associated with phonological processing and literacy difficulties in school-aged children. They therefore have the potential to help identify preschoolers most at risk of future literacy difficulties. Participants were 93 preschool children with SSD from the Sound Start Study. Participants completed the Polysyllable Preschool Test (Baker, 2013) as well as phonological processing, receptive vocabulary, and print knowledge tasks. Cluster analysis was completed, and 2 clusters were identified: low polysyllable accuracy and moderate polysyllable accuracy. The clusters were significantly different based on 2 measures of phonological awareness and measures of receptive vocabulary, rapid naming, and digit span. The clusters were not significantly different on sound matching accuracy or letter, sound, or print concept knowledge. The participants' poor performance on print knowledge tasks suggested that as a group, they were at risk of literacy difficulties but that there was a cluster of participants at greater risk: those with both low polysyllable accuracy and poor phonological processing.
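
    A minimal sketch of the clustering step (simulated scores; the real measures were the test results named above): standardize the measures, then run k-means with k=2.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        # Columns stand in for polysyllable accuracy, phonological awareness,
        # receptive vocabulary, rapid naming, and digit span (93 participants).
        scores = rng.normal(size=(93, 5))

        z = StandardScaler().fit_transform(scores)
        labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(z)
        # `labels` assigns each child to the low or moderate accuracy cluster.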

  1. Sounding Black or White: priming identity and biracial speech.

    Science.gov (United States)

    Gaither, Sarah E; Cohen-Goldberg, Ariel M; Gidney, Calvin L; Maddox, Keith B; Gidney, Calvin L; Gidney, Calvin L

    2015-01-01

    Research has shown that priming one's racial identity can alter a biracial individual's social behavior, but can such priming also influence their speech? Language is often used as a marker of one's social group membership, and studies have shown that social context can affect the style of language that a person chooses to use, but this work has yet to be extended to the biracial population. Audio clips were extracted from a previous study involving biracial Black/White participants who had either their Black or White racial identity primed. Condition-blind coders rated Black-primed biracial participants as sounding significantly more Black and White-primed biracial participants as sounding significantly more White, both when listening to whole (Study 1a) and thin-sliced (Study 1b) clips. Further linguistic analyses (Studies 2a-c) were inconclusive regarding the features that differed between the two groups. Future directions regarding the need to investigate the intersections between social identity priming and language behavior with a biracial lens are discussed.

  2. A pilot exploration of speech sound disorder intervention delivered by telehealth to school-age children.

    Science.gov (United States)

    Grogan-Johnson, Susan; Gabel, Rodney M; Taylor, Jacquelyn; Rowan, Lynne E; Alvares, Robin; Schenker, Jason

    2011-01-01

    This article describes a school-based telehealth service delivery model and reports outcomes made by school-age students with speech sound disorders in a rural Ohio school district. Speech therapy using computer-based speech sound intervention materials was provided either by live interactive videoconferencing (telehealth), or conventional side-by-side intervention. Progress was measured using pre- and post-intervention scores on the Goldman Fristoe Test of Articulation-2 (Goldman & Fristoe, 2002). Students in both service delivery models made significant improvements in speech sound production, with students in the telehealth condition demonstrating greater mastery of their Individual Education Plan (IEP) goals. Live interactive videoconferencing thus appears to be a viable method for delivering intervention for speech sound disorders to children in a rural, public school setting.

  3. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    Science.gov (United States)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is not as flawless, nor as robust, as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported in which errors of up to 50% were found. The implications for VA and PA system performance verification will be discussed.
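
    For context, the core of the STI calculation the paper scrutinizes maps each measured modulation index to an apparent signal-to-noise ratio, clips it, rescales it to a transfer index, and averages. The sketch below is deliberately simplified: the full IEC 60268-16 procedure adds octave-band weighting and redundancy corrections omitted here.

        import numpy as np

        def sti_from_modulation(m):
            """m: modulation indices, e.g. 7 octave bands x 14 modulation rates."""
            m = np.clip(np.asarray(m, dtype=float), 1e-6, 1 - 1e-6)
            snr = 10 * np.log10(m / (1 - m))   # apparent SNR in dB
            snr = np.clip(snr, -15, 15)        # limit to the useful range
            ti = (snr + 15) / 30               # transfer index in [0, 1]
            return ti.mean()                   # unweighted average -> STI estimate

        # Example: a mildly degraded channel with modulation indices near 0.7.
        print(f"STI estimate: {sti_from_modulation(np.full((7, 14), 0.7)):.2f}")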

  4. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus)

    Science.gov (United States)

    Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.

    2017-01-01

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597
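
    A trading relation like the one tested here is typically quantified by fitting identification functions and comparing category boundaries. The sketch below uses invented response proportions purely to show the computation; a boundary shift between the two F1-onset conditions is the trading effect.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(vot, boundary, slope):
            """Proportion of 't' responses along the /d/-/t/ VOT continuum."""
            return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

        vot = np.linspace(0, 60, 7)  # voice onset time in ms
        p_t_low_f1 = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.97, 0.99])
        p_t_high_f1 = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.99, 1.00])

        (b_low, _), _ = curve_fit(logistic, vot, p_t_low_f1, p0=[30, 0.2])
        (b_high, _), _ = curve_fit(logistic, vot, p_t_high_f1, p0=[30, 0.2])
        print(f"boundary shift: {b_low - b_high:.1f} ms of VOT traded against F1 onset")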

  5. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  6. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

    Emily Myers

    2014-08-01

    Over the course of development, speech sounds that are contrastive in one's native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  7. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    Science.gov (United States)

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  8. Mommy is only happy! Dutch mothers' realisation of speech sounds in infant-directed speech expresses emotion, not didactic intent.

    Science.gov (United States)

    Benders, Titia

    2013-12-01

    Exaggeration of the vowel space in infant-directed speech (IDS) is well documented for English, but not consistently replicated in other languages or for other speech-sound contrasts. A second attested, but less discussed, pattern of change in IDS is an overall rise of the formant frequencies, which may reflect an affective speaking style. The present study investigates longitudinally how Dutch mothers change their corner vowels, voiceless fricatives, and pitch when speaking to their infant at 11 and 15 months of age. In comparison to adult-directed speech (ADS), Dutch IDS has a smaller vowel space, higher second and third formant frequencies in the vowels, and a higher spectral frequency in the fricatives. The formants of the vowels and spectral frequency of the fricatives are raised more strongly for infants at 11 than at 15 months, while the pitch is more extreme in IDS to 15-month olds. These results show that enhanced positive affect is the main factor influencing Dutch mothers' realisation of speech sounds in IDS, especially to younger infants. This study provides evidence that mothers' expression of emotion in IDS can influence the realisation of speech sounds, and that the loss or gain of speech clarity may be secondary effects of affect. Copyright © 2013 Elsevier Inc. All rights reserved.
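
    Vowel-space size of the kind compared here is commonly quantified as the area of the polygon spanned by the corner vowels in F1-F2 space. The sketch below uses made-up formant means purely to show the computation (shoelace formula).

        import numpy as np

        def vowel_space_area(f1, f2):
            """Polygon area (Hz^2) of corner vowels ordered around the space."""
            f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
            return 0.5 * abs(np.dot(f1, np.roll(f2, 1)) - np.dot(f2, np.roll(f1, 1)))

        # Corner vowels /i/, /a/, /u/ with illustrative (F1, F2) means in Hz:
        ads = vowel_space_area([300, 750, 350], [2300, 1300, 800])
        ids = vowel_space_area([330, 720, 380], [2250, 1350, 900])
        print(f"ADS area: {ads:.0f} Hz^2, IDS area: {ids:.0f} Hz^2 (smaller in IDS)")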

  9. The influence of (central) auditory processing disorder in speech sound disorders

    Directory of Open Access Journals (Sweden)

    Tatiane Faria Barrozo

    2016-02-01

    INTRODUCTION: Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. OBJECTIVE: To study phonological measures and (central) auditory processing of children with speech sound disorder. METHODS: Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. RESULTS: The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. CONCLUSION: The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder.

  10. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  11. Evidence-Based Practice for Children with Speech Sound Disorders: Part 1 Narrative Review

    Science.gov (United States)

    Baker, Elise; McLeod, Sharynne

    2011-01-01

    Purpose: This article provides a comprehensive narrative review of intervention studies for children with speech sound disorders (SSD). Its companion paper (Baker & McLeod, 2011) provides a tutorial and clinical example of how speech-language pathologists (SLPs) can engage in evidence-based practice (EBP) for this clinical population. Method:…

  12. Profile of Australian Preschool Children with Speech Sound Disorders at Risk for Literacy Difficulties

    Science.gov (United States)

    McLeod, Sharynne; Crowe, Kathryn; Masso, Sarah; Baker, Elise; McCormack, Jane; Wren, Yvonne; Roulstone, Susan; Howland, Charlotte

    2017-01-01

    Speech sound disorders are a common communication difficulty in preschool children. Teachers indicate difficulty identifying and supporting these children. The aim of this research was to describe speech and language characteristics of children identified by their parents and/or teachers as having possible communication concerns. 275 Australian 4-…

  13. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    Science.gov (United States)

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  14. Speech Sound Disorders in Preschool Children: Correspondence between Clinical Diagnosis and Teacher and Parent Report

    Science.gov (United States)

    Harrison, Linda J.; McLeod, Sharynne; McAllister, Lindy; McCormack, Jane

    2017-01-01

    This study sought to assess the level of correspondence between parent and teacher report of concern about young children's speech and specialist assessment of speech sound disorders (SSD). A sample of 157 children aged 4-5 years was recruited in preschools and long day care centres in Victoria and New South Wales (NSW). SSD was assessed…

  15. Evidence-Based Practice for Children with Speech Sound Disorders: Part 2 Application to Clinical Practice

    Science.gov (United States)

    Baker, Elise; McLeod, Sharynne

    2011-01-01

    Purpose: This article provides both a tutorial and a clinical example of how speech-language pathologists (SLPs) can conduct evidence-based practice (EBP) when working with children with speech sound disorders (SSDs). It is a companion paper to the narrative review of 134 intervention studies for children who have an SSD (Baker & McLeod, 2011).…

  16. Parental Beliefs and Experiences Regarding Involvement in Intervention for Their Child with Speech Sound Disorder

    Science.gov (United States)

    Watts Pappas, Nicole; McAllister, Lindy; McLeod, Sharynne

    2016-01-01

    Parental beliefs and experiences regarding involvement in speech intervention for their child with mild to moderate speech sound disorder (SSD) were explored using multiple, sequential interviews conducted during a course of treatment. Twenty-one interviews were conducted with seven parents of six children with SSD: (1) after their child's initial…

  17. Do Irrelevant Sounds Impair the Maintenance of All Characteristics of Speech in Memory?

    Science.gov (United States)

    Gabriel, D.; Gaudrain, E.; Lebrun-Guillaud, G.; Sheppard, F.; Tomescu, I. M.; Schnider, A.

    2012-01-01

    Several studies have shown that the ability to maintain in memory some attributes of speech, such as the content or pitch of an interlocutor's message, is markedly reduced in the presence of background sounds made up of spectrotemporal variations. However, experimental paradigms showing this interference have only focused on one attribute of speech at a time…

  18. Balancing speech intelligibility versus sound exposure in selection of personal hearing protection equipment for Chinook aircrews

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Rots, G.

    2001-01-01

    Background: Aircrews are often exposed to high ambient sound levels, especially in military aviation. Since long-term exposure to such noise may cause hearing damage, selection of adequate hearing protective devices is crucial. Such devices also affect speech intelligibility. When speech…

  19. Profile of Australian preschool children with speech sound disorders at risk for literacy difficulties

    OpenAIRE

    McLeod, S.; Crowe, K.; Masso, S.; Baker, E.; McCormack, J.; Wren, Y.; Roulstone, S.; Howland, C.

    2017-01-01

    Background: Speech sound disorders are a common communication difficulty in preschool children. Teachers indicate difficulty identifying and supporting these children. Aim: To describe speech and language characteristics of children identified by their parents and/or teachers as having possible communication concerns. Method: 275 Australian 4- to 5-year-old children from 45 preschools whose parents and teachers were concerned about their talking participated in speech-language p…

  20. The Emergence of the Allophonic Perception of Unfamiliar Speech Sounds: The Effects of Contextual Distribution and Phonetic Naturalness

    Science.gov (United States)

    Noguchi, Masaki; Hudson Kam, Carla L.

    2018-01-01

    In human languages, different speech sounds can be contextual variants of a single phoneme, called allophones. Learning which sounds are allophones is an integral part of the acquisition of phonemes. Whether given sounds are separate phonemes or allophones in a listener's language affects speech perception. Listeners tend to be less sensitive to…

  1. A Comparison of Word Lexicality in the Treatment of Speech Sound Disorders

    Science.gov (United States)

    Cummings, Alycia E.; Barlow, Jessica A.

    2011-01-01

    The goal of this research programme was to evaluate the role of word lexicality in effecting phonological change in children's sound systems. Four children with functional speech sound disorders (SSDs) were enrolled in an across-subjects multiple baseline single-subject design; two were treated using high-frequency real words (RWs) and two were…

  2. Degraded speech sound processing in a rat model of fragile X syndrome.

    Science.gov (United States)

    Engineer, Crystal T; Centanni, Tracy M; Im, Kwok W; Rahebi, Kimiya C; Buell, Elizabeth P; Kilgard, Michael P

    2014-05-20

    Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. Copyright © 2014 Elsevier B.V. All rights reserved.
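
    The neurometric analysis mentioned above, a pattern classifier applied to speech-evoked activity, can be sketched in outline; the nearest-centroid classifier, simulated features, and cross-validation scheme below are illustrative assumptions, not the authors' pipeline:

```python
# Minimal sketch of a neurometric analysis: decode speech-sound identity
# from trial-by-trial cortical response patterns. The classifier choice
# and leave-out scheme are illustrative only, not the study's method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials x 50 features (e.g., binned spike counts
# across recording sites); labels are the presented speech sounds.
X = rng.poisson(lam=3.0, size=(200, 50)).astype(float)
y = rng.integers(0, 8, size=200)  # 8 hypothetical consonant-vowel sounds

# Higher decoding accuracy implies the activity carries more information
# about speech-sound identity (cf. knockout vs. control comparison).
acc = cross_val_score(NearestCentroid(), X, y, cv=10).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1/8:.2f})")
```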

  3. Patterns and risk factors associated with speech sounds and language disorders in Pakistan

    International Nuclear Information System (INIS)

    Arshad, H.; Ghayas, M.S.; Madiha, A.

    2013-01-01

    To observe the patterns of speech sound and language disorders and to identify their associated risk factors. Background: Communication is the very essence of modern society, and communication disorders impact quality of life. Patterns and factors associated with speech sound and language impairments were explored, and their association with different environmental factors was examined. Methodology: The study included 200 patients, aged between two and sixteen years, who presented at the speech therapy clinic OPD of Mayo Hospital. A cross-sectional survey questionnaire assessed each patient's biodata, socioeconomic background, family history of communication disorders and bilingualism. It was a descriptive study conducted through a cross-sectional survey. Data were analysed with SPSS version 16. Results: Language disorders were relatively more prevalent in males than speech sound disorders. Bilingualism was found to have an insignificant effect on these disorders, whereas socioeconomic status and family history were significant risk factors. Conclusion: Gender, socioeconomic status and family history can act as risk factors for developing speech sound and language disorders. There is a grave need to understand patterns of communication disorders in the light of Pakistani society and culture, and further studies are recommended to determine the risk factors and patterns of these impairments. (author)

  4. Functional Brain Activation Differences in School-Age Children with Speech Sound Errors: Speech and Print Processing

    Science.gov (United States)

    Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.

    2012-01-01

    Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…

  5. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    Directory of Open Access Journals (Sweden)

    Crystal T Engineer

    2014-08-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA-exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA-exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field responses compared to untrained VPA-exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes.

  6. Cross-Modal Correspondence between Brightness and Chinese Speech Sound with Aspiration

    Directory of Open Access Journals (Sweden)

    Sachiko Hirata

    2011-10-01

    Phonetic symbolism is the phenomenon of speech sounds evoking images based on sensory experiences; it is often discussed together with cross-modal correspondence. Using Garner's task, Hirata, Kita, and Ukita (2009) showed a cross-modal congruence between brightness and voiced/voiceless consonants in Japanese speech sounds, a case of phonetic symbolism. In the present study, we examined the effect of the meaning of mimetics (lexical words whose sound reflects their meaning, like "ding-dong" in Japanese) on the cross-modal correspondence. We conducted an experiment with Chinese speech sounds, with or without aspiration, using Chinese participants. Chinese vocabulary also contains mimetics, but the presence of aspiration is unrelated to the meaning of Chinese mimetics. As a result, Chinese speech sounds with aspiration, which resemble voiceless consonants, were matched with white, whereas those without aspiration were matched with black. This pattern is identical to that found in Japanese listeners and consequently suggests that cross-modal correspondence occurs without the influence of the meaning of mimetics. Whether these cross-modal correspondences are based purely on the physical properties of speech sounds or are affected by phonetic properties remains a question for further study.

  7. Effects of caffeine treatment for apnea of prematurity on cortical speech sound differentiation in preterm infants

    Science.gov (United States)

    Maitre, Nathalie L.; Chan, Jeremy; Stark, Ann R.; Lambert, Warren E.; Aschner, Judy L.; Key, Alexandra P.

    2014-01-01

    Caffeine, standard treatment for apnea of prematurity, improves brainstem auditory processing. We hypothesized that caffeine also improves cortical differentiation of complex speech sounds. We used event-related potential methodology to measure responses to speech-sound contrasts in 45 intensive care neonates, stratified by cumulative exposure as no-, low-, and high-caffeine groups. Sound differentiation in the low-caffeine group and near-term no-caffeine infants was similar with repeated measures ANOVA controlling for gestational and postnatal age. However, a generalized estimating equation approach demonstrated that, at equivalent postnatal age, differentiation was reduced in the high-caffeine (gestational age 25 weeks) compared to the low-caffeine group (gestational age 28 weeks), reflecting the importance of maturity at birth (Z = 2.77). These findings suggest that caffeine treatment for apnea of prematurity cannot fully compensate for the effects of brain immaturity on speech sound processing. PMID:24939976

  8. Sound frequency affects speech emotion perception: results from congenital amusia.

    Science.gov (United States)

    Lolli, Sydney L; Lewenstein, Ari D; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.
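
    The filtering manipulation at the heart of both experiments is easy to sketch. A minimal example, assuming a fourth-order zero-phase Butterworth design and a 500 Hz cutoff (the study's exact cutoff and filter design are not stated in this abstract):

```python
# Minimal sketch: low-pass filter emotional speech so that mainly pitch
# (F0) and other low-frequency cues survive; switching btype to
# "highpass" isolates the non-pitch cues instead (cf. Experiment 2).
# Cutoff and filter order are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

fs, speech = wavfile.read("statement.wav")  # hypothetical mono recording
speech = speech.astype(np.float64)

sos = butter(4, 500.0, btype="lowpass", fs=fs, output="sos")
lowpassed = sosfiltfilt(sos, speech)  # zero-phase filtering

wavfile.write("statement_lp.wav", fs, lowpassed.astype(np.int16))
```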

  9. Sound frequency affects speech emotion perception: Results from congenital amusia

    Directory of Open Access Journals (Sweden)

    Sydney eLolli

    2015-09-01

    Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying band-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody (MBEP) were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task and an emotion identification task under band-pass and unfiltered speech conditions. Results showed a significant correlation between pitch discrimination threshold and emotion identification accuracy for band-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold > 16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between band-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation.

  10. Development of word list in Hindi for speech sounds to work on articulation errors for children with hearing impairment

    OpenAIRE

    Rajeev Ranjan; Arun Banik

    2014-01-01

    Background: Children with hearing impairment often experience an inability to recognize speech sounds, delay in language acquisition, educational disadvantage, social isolation and difficulties communicating. The study aimed to develop word lists in Hindi for speech sounds to work on articulation errors in children with hearing impairment in the age range of 0-6 years. Methods: The different speech sounds were selected as per the phonological developmental stage of the child. The selec...

  11. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Directory of Open Access Journals (Sweden)

    Wendy Doubé

    2018-04-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound, but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  12. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  13. Ultrasound Images of the Tongue: A Tutorial for Assessment and Remediation of Speech Sound Errors

    Science.gov (United States)

    Preston, Jonathan L.; McAllister Byun, Tara; Boyce, Suzanne E.; Hamilton, Sarah; Tiede, Mark; Phillips, Emily; Rivera-Campos, Ahmed; Whalen, Douglas H.

    2017-01-01

    Diagnostic ultrasound imaging has been a common tool in medical practice for several decades. It provides a safe and effective method for imaging structures internal to the body. There has been a recent increase in the use of ultrasound technology to visualize the shape and movements of the tongue during speech, both in typical speakers and in clinical populations. Ultrasound imaging of speech has greatly expanded our understanding of how sounds articulated with the tongue (lingual sounds) are produced. Such information can be particularly valuable for speech-language pathologists. Among other advantages, ultrasound images can be used during speech therapy to provide (1) illustrative models of typical (i.e., "correct") tongue configurations for speech sounds, and (2) a source of insight into the articulatory nature of deviant productions. The images can also be used as an additional source of feedback for clinical populations learning to distinguish their better productions from their incorrect productions, en route to establishing more effective articulatory habits. Ultrasound feedback is increasingly used by scientists and clinicians as both the expertise of the users increases and the expense of the equipment declines. In this tutorial, procedures are presented for collecting ultrasound images of the tongue in a clinical context. We illustrate these procedures in an extended example featuring one common error sound, American English /r/. Images of correct and distorted /r/ are used to demonstrate (1) how to interpret ultrasound images, (2) how to assess tongue shape during production of speech sounds, (3) how to categorize tongue shape errors, and (4) how to provide visual feedback to elicit a more appropriate and functional tongue shape. We present a sample protocol for using real-time ultrasound images of the tongue for visual feedback to remediate speech sound errors. Additionally, example data are shown to illustrate outcomes with the procedure.

  14. Tongue contour for /s/ and /ʃ/ in children with speech sound disorder.

    Science.gov (United States)

    Wertzner, Haydée Fiszbein; Francisco, Danira Tavares; Pagan-Neves, Luciana de Oliveira

    2014-01-01

    To describe the tongue shape for the /s/ and /ʃ/ sounds in three groups of children with and without speech sound disorder. The six participants were divided into three groups: Group 1, two typically developing children; Group 2, two children with speech sound disorder presenting other phonological processes but not those involving the production of /ʃ/; and Group 3, two children with speech sound disorder presenting phonological processes associated with palatal fronting (these two children produced /ʃ/ as /s/). All were aged between 5 and 8 years and were speakers of Brazilian Portuguese. The data were the words /'ʃavi/ (key) and /'sapu/ (frog). The tongue contour was individually traced for the five productions of each target word. The analysis of the tongue contours provided evidence that both /s/ and /ʃ/ were produced with distinct tongue contours in G1 and G2. The production of these two groups was more stable than that of G3. The tongue contours for /s/ and /ʃ/ from the children in G3 were similar, indicating that their production was undifferentiated. Ultrasound applied to speech analysis was effective in confirming the perceptual analysis of the sounds made by the speech-language pathologist.

  15. The influence of meaning on the perception of speech sounds.

    Science.gov (United States)

    Kazanina, Nina; Phillips, Colin; Idsardi, William

    2006-07-25

    As part of knowledge of language, an adult speaker possesses information on which sounds are used in the language and on the distribution of these sounds in a multidimensional acoustic space. However, a speaker must know not only the sound categories of his language but also the functional significance of these categories, in particular, which sound contrasts are relevant for storing words in memory and which sound contrasts are not. Using magnetoencephalographic brain recordings with speakers of Russian and Korean, we demonstrate that a speaker's perceptual space, as reflected in early auditory brain responses, is shaped not only by bottom-up analysis of the distribution of sounds in his language but also by more abstract analysis of the functional significance of those sounds.

  16. Speech sound articulation abilities of preschool-age children who stutter.

    Science.gov (United States)

    Clark, Chagit E; Conture, Edward G; Walden, Tedra A; Lambert, Warren E

    2013-12-01

    The purpose of this study was to assess the association between speech sound articulation and childhood stuttering in a relatively large sample of preschool-age children who do and do not stutter, using the Goldman-Fristoe Test of Articulation-2 (GFTA-2; Goldman & Fristoe, 2000). Participants included 277 preschool-age children who do (CWS; n=128, 101 males) and do not stutter (CWNS; n=149, 76 males). Generalized estimating equations (GEE) were performed to assess between-group (CWS versus CWNS) differences on the GFTA-2. Additionally, within-group correlations were performed to explore the relation between CWS' speech sound articulation abilities and their stuttering frequency and severity, as well as their sound prolongation index (SPI; Schwartz & Conture, 1988). No significant differences were found between the articulation scores of preschool-age CWS and CWNS. However, there was a small gender effect for the 5-year-old age group, with girls generally exhibiting better articulation scores than boys. Additional findings indicated no relation between CWS' speech sound articulation abilities and their stuttering frequency, severity, or SPI. Findings suggest no apparent association between speech sound articulation-as measured by one standardized assessment (GFTA-2)-and childhood stuttering for this sample of preschool-age children (N=277). After reading this article, the reader will be able to: (1) discuss salient issues in the articulation literature relative to children who stutter; (2) compare/contrast the present study's methodologies and main findings to those of previous studies that investigated the association between childhood stuttering and speech sound articulation; (3) identify future research needs relative to the association between childhood stuttering and speech sound development; (4) replicate the present study's methodology to expand this body of knowledge. Copyright © 2013 Elsevier Inc. All rights reserved.
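
    Generalized estimating equations of the kind used here can be sketched with statsmodels; the formula, covariates, working-correlation structure, and file layout below are assumptions for illustration rather than the study's actual model:

```python
# Minimal GEE sketch: model an articulation score as a function of group
# (CWS vs. CWNS), with repeated observations clustered within a unit
# such as subject or family. Data frame and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("gfta2_scores.csv")  # hypothetical: score, group, age, sex, cluster_id

model = smf.gee(
    "score ~ group + age + sex",          # assumed covariates
    groups="cluster_id",                  # clustering unit
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```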

  17. The effects of visual material and temporal synchrony on the processing of letters and speech sounds.

    Science.gov (United States)

    Mittag, Maria; Takegata, Rika; Kujala, Teija

    2011-06-01

    Associating letters with speech sounds is essential for reading skill acquisition. In the current study, we aimed at determining the effects of different types of visual material and temporal synchrony on the integration of letters and speech sounds. To this end, we recorded the mismatch negativity (MMN), an index of automatic change detection in the brain, from literate adults. Subjects were presented with auditory consonant-vowel syllable stimuli together with visual stimuli, which were either written syllables or scrambled pictures of the written syllables. The visual stimuli were presented in half of the blocks synchronously with the auditory stimuli and in the other half 200 ms before the auditory stimuli. The auditory stimuli were consonant, vowel or vowel length changes, or changes in syllable frequency or intensity presented by using the multi-feature paradigm. Changes in the auditory stimuli elicited MMNs in all conditions. MMN amplitudes for the consonant and frequency changes were generally larger for the sounds presented with written syllables than with scrambled syllables. Time delay diminished the MMN amplitude for all deviants. The results suggest that speech sound processing is modulated when the sounds are presented with letters versus non-linguistic visual stimuli, and further, that the integration of letters and speech sounds seems to be dependent on precise temporal alignment. Moreover, the results indicate that with our paradigm, a variety of parameters relevant and irrelevant for reading can be tested within one experiment.

  18. Lexical and phonological variability in preschool children with speech sound disorder.

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A; Lewis, Kerry E

    2014-02-01

    The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.

  19. The sound sensation of apical electric stimulation in cochlear implant recipients with contralateral residual hearing.

    Directory of Open Access Journals (Sweden)

    Diane S Lazard

    BACKGROUND: Studies using vocoders as acoustic simulators of cochlear implants have generally focused on simulation of speech understanding, gender recognition, or music appreciation. The aim of the present experiment was to study the auditory sensation perceived by cochlear implant (CI) recipients with steady electrical stimulation on the most-apical electrode. METHODOLOGY/PRINCIPAL FINDINGS: Five unilateral CI users with contralateral residual hearing were asked to vary the parameters of an acoustic signal played to the non-implanted ear, in order to match its sensation to that of the electric stimulus. They also provided a rating of similarity between each acoustic sound they selected and the electric stimulus. On average across subjects, the sound rated as most similar was a complex signal with a concentration of energy around 523 Hz. This sound was inharmonic in 3 out of 5 subjects, with a moderate, progressive increase in the spacing between the frequency components. CONCLUSIONS/SIGNIFICANCE: For these subjects, the sound sensation created by steady electric stimulation on the most-apical electrode was neither a white noise nor a pure tone, but a complex signal, with a progressive increase in the spacing between the frequency components in 3 out of 5 subjects. Whether the inharmonic nature of the sound was related to the fact that the non-implanted ear was impaired remains to be explored in single-sided deafened patients with a contralateral CI. These results may be used in the future to better understand peripheral and central auditory processing in relation to cochlear implants.

  20. Application of a Motor Learning Treatment for Speech Sound Disorders in Small Groups.

    Science.gov (United States)

    Skelton, Steven L; Richard, Jennifer T

    2016-06-01

    Speech sound treatment in the public schools is often conducted in small groups, but there are minimal data on the efficacy of group treatment. This study evaluated the efficacy of a motor learning-based treatment (Concurrent Treatment) provided to elementary-school students in small groups. Concurrent Treatment incorporates the randomized sequencing of various practice tasks (e.g., words, sentences, or storytelling) and can result in rapid speech sound acquisition in individual treatment settings. Twenty-eight 6- to 9-year-old children participated in a randomized pretest-posttest control group design. The experimental group received Concurrent Treatment, while the control group received treatment (if needed) after the study. Participants in the experimental group acquired their target speech sounds within forty 30-minute sessions in groups of up to four participants (effect size, d = 1.31). © The Author(s) 2016.
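
    For readers unfamiliar with the effect size reported above, d = 1.31 is a standardized mean difference; a minimal sketch of the common pooled-standard-deviation form (the study's exact variant is not stated in this abstract):

```python
# Cohen's d with pooled standard deviation: a common way to compute an
# effect size like the d = 1.31 reported above. Arrays are hypothetical
# posttest accuracy scores for experimental and control groups.
import numpy as np

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

experimental = np.array([82, 75, 90, 68, 88, 79, 85])  # made-up scores
control = np.array([55, 61, 48, 66, 59, 52, 63])
print(f"d = {cohens_d(experimental, control):.2f}")
```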

  1. Office noise: Can headphones and masking sound attenuate distraction by background speech?

    Science.gov (United States)

    Jahncke, Helena; Björkeholm, Patrik; Marsh, John E; Odelius, Johan; Sörqvist, Patrik

    2016-11-22

    Background speech is one of the most disturbing noise sources at shared workplaces in terms of both annoyance and performance-related disruption. Therefore, it is important to identify techniques that can efficiently protect performance against distraction. It is also important that the techniques are perceived as satisfactory and are subjectively evaluated as effective in their capacity to reduce distraction. The aim of the current study was to compare three methods of attenuating distraction from background speech: masking a background voice with nature sound through headphones, masking a background voice with other voices through headphones and merely wearing headphones (without masking) as a way to attenuate the background sound. Quiet was deployed as a baseline condition. Thirty students participated in an experiment employing a repeated measures design. Performance (serial short-term memory) was impaired by background speech (1 voice), but this impairment was attenuated when the speech was masked - and in particular when it was masked by nature sound. Furthermore, perceived workload was lowest in the quiet condition and significantly higher in all other sound conditions. Notably, the headphones tested as a sound-attenuating device (i.e. without masking) did not protect against the effects of background speech on performance and subjective workload. Nature sound was the only masking condition that worked as a protector of performance, at least in the context of the serial recall task. However, despite the attenuation of distraction by nature sound, perceived workload was still high - suggesting that it is difficult to find a masker that is both effective and perceived as satisfactory.
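
    Constructing a masking condition like those described means mixing a masker with the background voice at a controlled level; a minimal sketch of level-matched mixing, with the target ratio and filenames as assumptions:

```python
# Sketch: mix a nature-sound masker with background speech at a target
# speech-to-masker ratio (in dB). Target ratio and filenames are
# illustrative assumptions; the masker is assumed at least as long as
# the speech.
import numpy as np
import soundfile as sf

speech, fs = sf.read("background_voice.wav")
masker, _ = sf.read("nature_sound.wav")
masker = masker[: len(speech)]

target_db = 0.0  # assumed speech-to-masker ratio

def rms(x):
    return np.sqrt(np.mean(x**2))

# Scale the masker so that speech RMS / masker RMS hits the target ratio.
masker = masker * rms(speech) / (rms(masker) * 10 ** (target_db / 20))

mix = speech + masker
sf.write("masked_condition.wav", mix / np.max(np.abs(mix)), fs)
```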

  2. Working memory span in Persian-speaking children with speech sound disorders and normal speech development.

    Science.gov (United States)

    Afshar, Mohamad Reza; Ghorbani, Ali; Rashedi, Vahid; Jalilevand, Nahid; Kamali, Mohamad

    2017-10-01

    The aim of this study was to compare working memory span in Persian-speaking preschool children with speech sound disorder (SSD) and their typically speaking peers. Additionally, the study aimed to examine Non-Word Repetition (NWR), Forward Digit Span (FDS) and Backward Digit Span (BDS) in four groups of children with varying severity levels of SSD. The participants comprised 35 children with SSD and 35 typically developing (TD) children, matched for age and sex, as a control group. The participants were between 48 and 72 months of age. Two components of working memory, the phonological loop and the central executive, were compared between the two groups. We used two tasks (NWR and FDS) to assess the phonological loop component, and one task (BDS) to assess the central executive component. Percentage of correct consonants (PCC) was used to calculate the severity of SSD. Significant differences were observed between the two groups in all tasks that assess working memory (p < 0.05). Comparison of working memory between the various severity groups indicated significant differences across severities for both the NWR and FDS tasks among the SSD children (p < 0.05), but not for BDS (p > 0.05). PCC scores in TD children were associated with NWR (p < 0.05). Working memory skills were weaker in SSD children in comparison to TD children. In addition, children with varying levels of severity of SSD differed in terms of NWR and FDS, but not BDS. Copyright © 2017 Elsevier B.V. All rights reserved.
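
    PCC, used above to index severity, is the number of consonants produced correctly divided by the number of consonants attempted, expressed as a percentage; a minimal sketch:

```python
# Percentage of correct consonants (PCC), the severity metric used above:
# correct consonants / consonants attempted x 100.
def pcc(correct_consonants: int, total_consonants: int) -> float:
    return 100.0 * correct_consonants / total_consonants

# Example: 171 of 214 consonants correct in a conversational sample.
print(f"PCC = {pcc(171, 214):.1f}%")  # ~79.9%
```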

  3. Reconstruction of normal sounding speech for laryngectomy patients through a modified CELP codec.

    Science.gov (United States)

    Sharifzadeh, Hamid Reza; McLoughlin, Ian V; Ahmadi, Farzaneh

    2010-10-01

    Whispered speech can be useful for quiet and private communication, and is the primary means of unaided spoken communication for many people experiencing voice-box deficiencies. Patients who have undergone partial or full laryngectomy are typically unable to speak anything more than hoarse whispers, without the aid of prostheses or specialized speaking techniques. Each of the current prostheses and rehabilitative methods for post-laryngectomized patients (primarily oesophageal speech, tracheo-esophageal puncture, and electrolarynx) has particular disadvantages, prompting new work on nonsurgical, noninvasive alternative solutions. One such solution, described in this paper, combines whisper signal analysis with direct formant insertion and speech modification located outside the vocal tract. This approach allows laryngectomy patients to regain their ability to speak with a more natural voice than alternative methods, by whispering into an external prosthesis, which then recreates and outputs natural-sounding speech. It relies on the observation that while the pitch-generation mechanism of laryngectomy patients is damaged or unusable, the remaining components of the speech production apparatus may be largely unaffected. This paper presents analysis and reconstruction methods designed for the prosthesis, and demonstrates their ability to obtain natural-sounding speech from the whisper-speech signal using an external analysis-by-synthesis processing framework.
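
    The analysis-by-synthesis idea can be caricatured with linear prediction: estimate an all-pole vocal-tract filter from each whispered frame, then re-excite it with an artificial voiced source. The sketch below is a conceptual stand-in, not the paper's modified CELP codec; frame size, LPC order, and F0 are assumed values.

```python
# Conceptual sketch of whisper-to-speech reconstruction: per frame,
# estimate an all-pole vocal-tract filter from the whisper via LPC,
# then drive that filter with an impulse train at an artificial pitch.
# The paper's modified CELP codec is far more elaborate than this.
import numpy as np
import librosa
from scipy.signal import lfilter

whisper, fs = librosa.load("whisper.wav", sr=16000)  # hypothetical input
frame, order, f0 = 512, 16, 120.0                    # assumed parameters
period = int(fs / f0)

out = np.zeros_like(whisper)
for start in range(0, len(whisper) - frame, frame):
    seg = whisper[start:start + frame] * np.hanning(frame)
    if not np.any(seg):
        continue  # skip silent frames; LPC is undefined on all zeros
    a = librosa.lpc(seg, order=order)          # vocal-tract estimate
    excitation = np.zeros(frame)
    excitation[::period] = 1.0                 # impulse train = voicing
    voiced = lfilter([1.0], a, excitation)     # all-pole resynthesis
    gain = np.sqrt(np.sum(seg**2) / (np.sum(voiced**2) + 1e-12))
    out[start:start + frame] = gain * voiced
```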

  4. Task-irrelevant visual letters interact with the processing of speech sounds in heteromodal and unimodal cortex

    NARCIS (Netherlands)

    Blau, Vera C; van Atteveldt, Nienke; Formisano, Elia; Goebel, Rainer; Blomert, Leo

    Letters and speech sounds are the basic units of correspondence between spoken and written language. Associating auditory information of speech sounds with visual information of letters is critical for learning to read; however, the neural mechanisms underlying this association remain poorly

  5. Fine motor function and oral-motor imitation skills in preschool-age children with speech-sound disorders.

    Science.gov (United States)

    Newmeyer, Amy J; Grether, Sandra; Grasha, Carol; White, Jaye; Akers, Rachel; Aylward, Christa; Ishikawa, Keiko; Degrauw, Ton

    2007-09-01

    Preschool-aged children with speech-sound disorders may be at risk for associated deficits in fine motor function. The objectives of this study were 2-fold: (1) to determine whether abnormalities in fine motor function could be detected in 2- to 5-year-old children with speech-sound disorders and (2) to determine whether there was a correlation between abnormal oral-motor imitation skills and abnormal fine motor function. Thirty-two children with speech-sound disorders (6 female, 26 male) were prospectively evaluated from July 2003 to July 2005, and the Peabody Developmental Motor Scales and the Kaufman Speech Praxis Test for Children were administered. The presence of abnormal oral-motor imitation skills as measured by the Kaufman Speech Praxis Test was associated with below-average fine motor performance. This finding has important implications for evaluation and treatment of preschool children with severe speech-sound disorders.

  6. Children with Comorbid Speech Sound Disorder and Specific Language Impairment Are at Increased Risk for Attention-Deficit/Hyperactivity Disorder

    Science.gov (United States)

    McGrath, Lauren M.; Hutaff-Lee, Christa; Scott, Ashley; Boada, Richard; Shriberg, Lawrence D.; Pennington, Bruce F.

    2008-01-01

    This study focuses on the comorbidity between attention-deficit/hyperactivity disorder (ADHD) symptoms and speech sound disorder (SSD). SSD is a developmental disorder characterized by speech production errors that impact intelligibility. Previous research addressing this comorbidity has typically used heterogeneous groups of speech-language…

  7. Introduction. The perception of speech: from sound to meaning.

    Science.gov (United States)

    Moore, Brian C J; Tyler, Lorraine K; Marslen-Wilson, William

    2008-03-12

    Spoken language communication is arguably the most important activity that distinguishes humans from non-human species. This paper provides an overview of the review papers that make up this theme issue on the processes underlying speech communication. The volume includes contributions from researchers who specialize in a wide range of topics within the general area of speech perception and language processing. It also includes contributions from key researchers in neuroanatomy and functional neuro-imaging, in an effort to cut across traditional disciplinary boundaries and foster cross-disciplinary interactions in this important and rapidly developing area of the biological and cognitive sciences.

  8. Evolution of non-speech sound memory in postlingual deafness: implications for cochlear implant rehabilitation.

    Science.gov (United States)

    Lazard, D S; Giraud, A L; Truy, E; Lee, H J

    2011-07-01

    Neurofunctional patterns assessed before or after cochlear implantation (CI) are informative markers of implantation outcome. Because phonological memory reorganization in post-lingual deafness is predictive of the outcome, we investigated, using a cross-sectional approach, whether memory of non-speech sounds (NSS) produced by animals or objects (i.e. non-human sounds) is also reorganized, and how this relates to speech perception after CI. We used an fMRI auditory imagery task in which sounds were evoked by pictures of noisy items for post-lingual deaf candidates for CI and for normal-hearing subjects. When deaf subjects imagined sounds, the left inferior frontal gyrus, the right posterior temporal gyrus and the right amygdala were less activated compared to controls. Activity levels in these regions decreased with duration of auditory deprivation, indicating declining NSS representations. Whole brain correlations with duration of auditory deprivation and with speech scores after CI showed an activity decline in dorsal, fronto-parietal, cortical regions, and an activity increase in ventral cortical regions, the right anterior temporal pole and the hippocampal gyrus. Both dorsal and ventral reorganizations predicted poor speech perception outcome after CI. These results suggest that post-CI speech perception relies, at least partially, on the integrity of a neural system used for processing NSS that is based on audio-visual and articulatory mapping processes. When this neural system is reorganized, post-lingual deaf subjects resort to inefficient semantic- and memory-based strategies. These results complement those of other studies on speech processing, suggesting that both speech and NSS representations need to be maintained during deafness to ensure the success of CI. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common.

    Science.gov (United States)

    Weninger, Felix; Eyben, Florian; Schuller, Björn W; Mortillaro, Marcello; Scherer, Klaus R

    2013-01-01

    Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning either of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow's pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of "the sound that something makes," in order to evaluate the system's auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.
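
    Cross-domain evaluation of this kind amounts to fitting a regressor on acoustic features from one domain and correlating its predictions with observer annotations from another; in this minimal sketch, the ridge regressor, feature files, and array layout are assumptions:

```python
# Sketch of cross-domain affect regression: fit arousal on one domain's
# acoustic features (e.g., sound events), predict on another (e.g.,
# enacted speech), and score with Pearson correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical precomputed data: rows = clips, columns = acoustic
# descriptors (energy, spectral, voicing statistics); y = mean observer
# arousal ratings per clip.
X_train, y_train = np.load("sound_feats.npy"), np.load("sound_arousal.npy")
X_test, y_test = np.load("speech_feats.npy"), np.load("speech_arousal.npy")

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X_train, y_train)
r, _ = pearsonr(model.predict(X_test), y_test)
print(f"cross-domain arousal correlation: r = {r:.2f}")
```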

  10. Predicting the perceived sound quality of frequency-compressed speech.

    Directory of Open Access Journals (Sweden)

    Rainer Huber

    The performance of objective speech and audio quality measures for the prediction of the perceived quality of frequency-compressed speech in hearing aids is investigated in this paper. A number of existing quality measures have been applied to speech signals processed by a hearing aid, which compresses speech spectra along frequency in order to make information contained in higher frequencies audible for listeners with severe high-frequency hearing loss. Quality measures were compared with subjective ratings obtained from normal hearing and hearing impaired children and adults in an earlier study. High correlations were achieved with quality measures computed by quality models that are based on the auditory model of Dau et al., namely the measure PSM, computed by the quality model PEMO-Q; the measure qc, computed by the quality model proposed by Hansen and Kollmeier; and the linear subcomponent of the HASQI. For the prediction of quality ratings by hearing impaired listeners, extensions of some models incorporating hearing loss were implemented and shown to achieve improved prediction accuracy. Results indicate that these objective quality measures can potentially serve as tools for assisting in the initial setting of frequency compression parameters.

  11. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  12. Data-driven subclassification of speech sound disorders in preschool children.

    Science.gov (United States)

    Vick, Jennell C; Campbell, Thomas F; Shriberg, Lawrence D; Green, Jordan R; Truemper, Klaus; Rusiewicz, Heather Leavy; Moore, Christopher A

    2014-12-01

    The purpose of the study was to determine whether distinct subgroups of preschool children with speech sound disorders (SSD) could be identified using a subgroup discovery algorithm (SUBgroup discovery via Alternate Random Processes, or SUBARP). Of specific interest was finding evidence of a subgroup of SSD exhibiting performance consistent with atypical speech motor control. Ninety-seven preschool children with SSD completed speech and nonspeech tasks. Fifty-three kinematic, acoustic, and behavioral measures from these tasks were input to SUBARP. Two distinct subgroups were identified from the larger sample. The 1st subgroup (76%; population prevalence estimate = 67.8%-84.8%) did not have characteristics that would suggest atypical speech motor control. The 2nd subgroup (10.3%; population prevalence estimate = 4.3%-16.5%) exhibited significantly higher variability in measures of articulatory kinematics and poor ability to imitate iambic lexical stress, suggesting atypical speech motor control. Both subgroups were consistent with classes of SSD in the Speech Disorders Classification System (SDCS; Shriberg et al., 2010a). Characteristics of children in the larger subgroup were consistent with the proportionally large SDCS class termed speech delay; characteristics of children in the smaller subgroup were consistent with the SDCS subtype termed motor speech disorder-not otherwise specified. The authors identified candidate measures to identify children in each of these groups.
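
    SUBARP itself is a specialized subgroup-discovery algorithm and is not reproduced here; as a loose stand-in, generic clustering of the standardized measures illustrates the overall idea of partitioning a sample into candidate subgroups:

```python
# Loose stand-in for subgroup discovery: standardize the measures and
# look for clusters. This is NOT the SUBARP algorithm, only a generic
# illustration of separating a sample into candidate subgroups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X = np.load("ssd_measures.npy")  # hypothetical: 97 children x 53 measures
Xz = StandardScaler().fit_transform(X)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xz)
for k in np.unique(labels):
    print(f"subgroup {k}: {np.mean(labels == k):.1%} of sample")
```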

  13. Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds.

    Science.gov (United States)

    Rogers, Jack C; Davis, Matthew H

    2017-05-01

    Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.
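
    Audio morphing between minimal-pair endpoints can be approximated, very crudely, by interpolating short-time spectra; the sketch below is a stand-in for the authors' (unspecified here) morphing procedure, with filenames, sampling rate, and continuum length as assumptions:

```python
# Crude stand-in for audio morphing between two natural recordings
# (e.g., "blade" and "glade"): interpolate STFT magnitudes and invert
# with Griffin-Lim. Real phonetic continua use more careful methods.
import numpy as np
import librosa
import soundfile as sf

a, fs = librosa.load("blade.wav", sr=16000)
b, _ = librosa.load("glade.wav", sr=16000)
n = min(len(a), len(b))
A, B = np.abs(librosa.stft(a[:n])), np.abs(librosa.stft(b[:n]))

for step, w in enumerate(np.linspace(0.0, 1.0, 7)):  # 7-step continuum
    morph = librosa.griffinlim((1 - w) * A + w * B)
    sf.write(f"morph_{step}.wav", morph, fs)
```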

  14. Early Intervening for Students with Speech Sound Disorders: Lessons from a School District

    Science.gov (United States)

    Mire, Stephen P.; Montgomery, Judy K.

    2009-01-01

    The concept of early intervening services was introduced into public school systems with the implementation of the Individuals With Disabilities Education Improvement Act (IDEA) of 2004. This article describes a program developed for students with speech sound disorders that incorporated concepts of early intervening services, response to…

  15. What Factors Place Children with Speech Sound Disorders at Risk for Reading Problems?

    Science.gov (United States)

    Anthony, Jason L.; Aghara, Rachel Greenblatt; Dunkelberger, Martha J.; Anthony, Teresa I.; Williams, Jeffrey M.; Zhang, Zhou

    2011-01-01

    Purpose: To identify weaknesses in print awareness and phonological processing that place children with speech sound disorders (SSDs) at increased risk for reading difficulties. Method: Language, literacy, and phonological skills of 3 groups of preschool-age children were compared: a group of 68 children with SSDs, a group of 68 peers with normal…

  16. Nonspeech Oral Motor Treatment Issues Related to Children with Developmental Speech Sound Disorders

    Science.gov (United States)

    Ruscello, Dennis M.

    2008-01-01

    Purpose: This article examines nonspeech oral motor treatments (NSOMTs) in the population of clients with developmental speech sound disorders. NSOMTs are a collection of nonspeech methods and procedures that claim to influence tongue, lip, and jaw resting postures; increase strength; improve muscle tone; facilitate range of motion; and develop…

  17. Children with Speech Sound Disorders at School: Challenges for Children, Parents and Teachers

    Science.gov (United States)

    Daniel, Graham R.; McLeod, Sharynne

    2017-01-01

    Teachers play a major role in supporting children's educational, social, and emotional development although may be unprepared for supporting children with speech sound disorders. Interviews with 34 participants including six focus children, their parents, siblings, friends, teachers and other significant adults in their lives highlighted…

  18. Narrative Ability of Children with Speech Sound Disorders and the Prediction of Later Literacy Skills

    Science.gov (United States)

    Wellman, Rachel L.; Lewis, Barbara A.; Freebairn, Lisa A.; Avrich, Allison A.; Hansen, Amy J.; Stein, Catherine M.

    2011-01-01

    Purpose: The main purpose of this study was to examine how children with isolated speech sound disorders (SSDs; n = 20), children with combined SSDs and language impairment (LI; n = 20), and typically developing children (n = 20), ages 3;3 (years;months) to 6;6, differ in narrative ability. The second purpose was to determine if early narrative…

  19. Transitioning from Analog to Digital Audio Recording in Childhood Speech Sound Disorders

    Science.gov (United States)

    Shriberg, Lawrence D.; Mcsweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2005-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing…

  20. The Use of Nonspeech Oral Motor Treatments for Developmental Speech Sound Production Disorders: Interventions and Interactions

    Science.gov (United States)

    Powell, Thomas W.

    2008-01-01

    Purpose: The use of nonspeech oral motor treatments (NSOMTs) in the management of pediatric speech sound production disorders is controversial. This article serves as a prologue to a clinical forum that examines this topic in depth. Method: Theoretical, historical, and ethical issues are reviewed to create a series of clinical questions that…

  1. Differentiating Speech Sound Disorders from Phonological Dialect Differences: Implications for Assessment and Intervention

    Science.gov (United States)

    Velleman, Shelley L.; Pearson, Barbara Zurer

    2010-01-01

    B. Z. Pearson, S. L. Velleman, T. J. Bryant, and T. Charko (2009) demonstrated phonological differences in typically developing children learning African American English as their first dialect vs. General American English only. Extending this research to children with speech sound disorders (SSD) has key implications for intervention. A total of…

  2. Speech-Sound Disorders and Attention-Deficit/Hyperactivity Disorder Symptoms

    Science.gov (United States)

    Lewis, Barbara A.; Short, Elizabeth J.; Iyengar, Sudha K.; Taylor, H. Gerry; Freebairn, Lisa; Tag, Jessica; Avrich, Allison A.; Stein, Catherine M.

    2012-01-01

    Purpose: The purpose of this study was to examine the association of speech-sound disorders (SSD) with symptoms of attention-deficit/hyperactivity disorder (ADHD) by the severity of the SSD and the mode of transmission of SSD within the pedigrees of children with SSD. Participants and Methods: The participants were 412 children who were enrolled…

  3. Teachers' Perceptions of Students with Speech Sound Disorders: A Quantitative and Qualitative Analysis

    Science.gov (United States)

    Overby, Megan; Carrell, Thomas; Bernthal, John

    2007-01-01

    Purpose: This study examined 2nd-grade teachers' perceptions of the academic, social, and behavioral competence of students with speech sound disorders (SSDs). Method: Forty-eight 2nd-grade teachers listened to 2 groups of sentences differing by intelligibility and pitch but spoken by a single 2nd grader. For each sentence group, teachers rated…

  4. A Longitudinal Investigation of Morpho-Syntax in Children with Speech Sound Disorders

    Science.gov (United States)

    Mortimer, Jennifer; Rvachew, Susan

    2010-01-01

    Purpose: The intent of this study was to examine the longitudinal morpho-syntactic progression of children with Speech Sound Disorders (SSD) grouped according to Mean Length of Utterance (MLU) scores. Methods: Thirty-seven children separated into four clusters were assessed in their pre-kindergarten and Grade 1 years. Cluster 1 were children with…

  5. A Survey of University Professors Teaching Speech Sound Disorders: Nonspeech Oral Motor Exercises and Other Topics

    Science.gov (United States)

    Watson, Maggie M.; Lof, Gregory L.

    2009-01-01

    Purpose: The purpose of this article was to obtain and organize information from instructors who teach course work on the subject of children's speech sound disorders (SSD) regarding their use of teaching resources, involvement in students' clinical practica, and intervention approaches presented to students. Instructors also reported if they…

  6. Literacy Outcomes of Children with Early Childhood Speech Sound Disorders: Impact of Endophenotypes

    Science.gov (United States)

    Lewis, Barbara A.; Avrich, Allison A.; Freebairn, Lisa A.; Hansen, Amy J.; Sucheston, Lara E.; Kuo, Iris; Taylor, H. Gerry; Iyengar, Sudha K.; Stein, Catherine M.

    2011-01-01

    Purpose: To demonstrate that early childhood speech sound disorders (SSD) and later school-age reading, written expression, and spelling skills are influenced by shared endophenotypes that may be in part genetic. Method: Children with SSD and their siblings were assessed at early childhood (ages 4-6 years) and followed at school age (7-12 years).…

  7. Reading Skills of Students with Speech Sound Disorders at Three Stages of Literacy Development

    Science.gov (United States)

    Skebo, Crysten M.; Lewis, Barbara A.; Freebairn, Lisa A.; Tag, Jessica; Ciesla, Allison Avrich; Stein, Catherine M.

    2013-01-01

    Purpose: The relationship between phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with…

  8. Commentary on "Treatment Decisions for Children with Speech-Sound Disorders": Revisiting the Past in EBP

    Science.gov (United States)

    Tyler, Ann A.

    2006-01-01

    Purpose: This commentary, written in response to Alan Kamhi's paper, "Treatment Decisions for Children with Speech-Sound Disorders," further considers the "what" or goal selection process of decision making with the aim of efficiency--getting the most change in the shortest time. Method: My comments reflect a focus on the client values piece of…

  9. Letter-speech sound learning in children with dyslexia : From behavioral research to clinical practice

    NARCIS (Netherlands)

    Aravena, S.

    2017-01-01

    In alphabetic languages, learning to associate speech-sounds with unfamiliar characters is a critical step in becoming a proficient reader. This dissertation aimed at expanding our knowledge of this learning process and its relation to dyslexia, with an emphasis on bridging the gap between…

  10. Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech

    Science.gov (United States)

    Švec, Jan G.; Granqvist, Svante

    2018-01-01

    Purpose: Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to…

  11. Improved speech reception and sound quality with the DUET2 audio processor for electric acoustic stimulation.

    Science.gov (United States)

    Kleine Punte, Andrea; Mertens, Griet; Cochet, Ellen; De Bodt, Marc; Van de Heyning, Paul

    2015-01-01

    The DUET is a combination of hearing aid and CI in one device for electric acoustic stimulation. Since its introduction, a second generation, the DUET2, has been developed. This study aimed to investigate the benefits of the DUET2 compared with the DUET. Speech reception was determined in quiet and in noise, and the sound quality of speech and music was rated using a visual analogue scale. Test intervals were at upgrade and at 3 and 6 months after upgrade. For sentence reception in quiet, the SRT with the DUET2 did not change significantly between test intervals. Sentence reception in noise with the DUET2 improved significantly between 3 and 6 months and between upgrade and 6 months. After 6 months, speech reception in quiet and in noise with the DUET2 was significantly better than with the DUET, and subjects rated the sound quality of speech and of music significantly better with the DUET2 than with the DUET. Overall, the results show that the DUET2 offers users speech perception that is equivalent to or better than the DUET, as well as subjective benefits beyond those the DUET provides.

  12. Pronunciation analysis for children with speech sound disorders.

    Science.gov (United States)

    Dudy, Shiran; Asgari, Meysam; Kain, Alexander

    2015-08-01

    Phonological disorders affect 10% of preschool and school-age children, adversely affecting their communication, academic performance, and level of interaction. Effective pronunciation training requires prolonged supervised practice and interaction. Unfortunately, many children have limited or no access to a speech-language pathologist. Computer-assisted pronunciation training has the potential to be a highly effective teaching aid; however, to date such systems remain incapable of identifying pronunciation errors with sufficient accuracy. In this paper, we propose to improve accuracy by (1) learning acoustic models from a large children's speech database, (2) using an explicit model of typical pronunciation errors of children in the target age range, and (3) explicitly modeling the acoustics of distorted phonemes.
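
    A minimal sketch of one common scoring scheme in this area, the goodness-of-pronunciation (GOP) score, follows; it is illustrative only (not the authors' system), assumes per-phone acoustic log-likelihoods are already available from a recognizer, and every name and number in it is invented.

        def gop_score(loglik_target: float, logliks_all: dict) -> float:
            """GOP = log P(o | target phone) - max over all phones q of log P(o | q).
            Scores near 0 suggest a good match; large negative scores suggest an error."""
            return loglik_target - max(logliks_all.values())

        # Hypothetical log-likelihoods for one segment where the target was /r/.
        logliks = {"r": -42.0, "w": -37.5, "l": -45.2}   # here /w/ fits the audio best
        score = gop_score(logliks["r"], logliks)
        print(f"GOP for /r/: {score:.1f}")                # -4.5 -> likely /r/ -> [w] error
        if score < -2.0:                                  # threshold would be tuned on real data
            print("flag segment for review")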

  13. Aging affects hemispheric asymmetry in the neural representation of speech sounds.

    Science.gov (United States)

    Bellis, T J; Nicol, T; Kraus, N

    2000-01-15

    Hemispheric asymmetries in the processing of elemental speech sounds appear to be critical for normal speech perception. This study investigated the effects of age on hemispheric asymmetry observed in the neurophysiological responses to speech stimuli in three groups of normal hearing, right-handed subjects: children (ages 8-11 years), young adults (ages 20-25 years), and older adults (ages > 55 years). Peak-to-peak response amplitudes of the auditory cortical P1-N1 complex obtained over right and left temporal lobes were examined to determine the degree of left/right asymmetry in the neurophysiological responses elicited by synthetic speech syllables in each of the three subject groups. In addition, mismatch negativity (MMN) responses, which are elicited by acoustic change, were obtained. Whereas children and young adults demonstrated larger P1-N1-evoked response amplitudes over the left temporal lobe than over the right, responses from elderly subjects were symmetrical. In contrast, MMN responses, which reflect an echoic memory process, were symmetrical in all subject groups. The differences observed in the neurophysiological responses were accompanied by a finding of significantly poorer ability to discriminate speech syllables involving rapid spectrotemporal changes in the older adult group. This study demonstrates a biological, age-related change in the neural representation of basic speech sounds and suggests one possible underlying mechanism for the speech perception difficulties exhibited by aging adults. Furthermore, results of this study support previous findings suggesting a dissociation between neural mechanisms underlying those processes that reflect the basic representation of sound structure and those that represent auditory echoic memory and stimulus change.
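
    For readers unfamiliar with the peak-to-peak measure used above, a minimal sketch follows; the amplitudes are invented and the laterality index shown is a common convention, not necessarily the one used in this study.

        def p1_n1_peak_to_peak(p1_uv: float, n1_uv: float) -> float:
            """Peak-to-peak amplitude: P1 is a positive peak, N1 a negative one."""
            return p1_uv - n1_uv

        left = p1_n1_peak_to_peak(3.0, -4.5)    # 7.5 uV over the left temporal lobe
        right = p1_n1_peak_to_peak(2.5, -3.0)   # 5.5 uV over the right temporal lobe
        asymmetry = (left - right) / (left + right)
        print(f"laterality index = {asymmetry:.2f}")  # > 0 means a left-larger response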

  14. Personality, category, and cross-linguistic speech sound processing: a connectivistic view.

    Science.gov (United States)

    Lan, Yizhou; Li, Will X Y

    2014-01-01

    Category formation is a vital part of human perceptual and cognitive ability, yet neuroscience and linguistics seldom address it jointly. The present study reviews the neurological view of language acquisition as normalization of the incoming speech signal and suggests how speech sound category formation may connect personality with second-language speech perception. Using a questionnaire, ego boundary (thick or thin), a correlate of category formation, was shown to be a positive indicator of personality type. Following this qualitative study, thick-boundary and thin-boundary learners of English whose native language was Cantonese were given a speech-signal perception test using an ABX discrimination task protocol. Results showed that thick-boundary learners achieved significantly lower accuracy than thin-boundary learners, implying that differences in personality do have an impact on language learning.
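
    For reference, an ABX trial presents stimuli A and B followed by X, and the listener judges whether X matched A or B; accuracy is simply the proportion of correct judgements, with chance at 50%. A toy scoring sketch (responses invented):

        trials = [                                   # invented trial records
            {"x_matches": "A", "response": "A"},
            {"x_matches": "B", "response": "A"},
            {"x_matches": "B", "response": "B"},
            {"x_matches": "A", "response": "A"},
        ]
        correct = sum(t["response"] == t["x_matches"] for t in trials)
        print(f"ABX accuracy: {correct / len(trials):.0%}")   # chance performance is 50%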

  15. Differential Diagnosis of Speech Sound Disorders in Danish-speaking Children

    DEFF Research Database (Denmark)

    Clausen, Marit Carolin; Fox-Boyer, Anette

    Children with speech sound disorders (SSD) are a heterogeneous group in terms of severity, underlying causes, speech characteristics and response to intervention. The correct identification and remediation of SSD is of particular importance since children with persisting SSD are placed at risk socially, academically and vocationally (McCormack et al., 2009). Thus, speech analysis should accurately detect whether a child has SSD or not, and should furthermore provide information about the type of disorder present. The classification into distinct subgroups of SSD should support clinicians… Against this background, the aim of the study was to investigate the speech of children with SSD by means of accuracy of phoneme production as well as types of the phonological processes in a Danish-speaking population. Further, the applicability of the two different classification approaches was investigated. A total of 211…

  16. When Does Speech Sound Disorder Matter for Literacy? The Role of Disordered Speech Errors, Co-Occurring Language Impairment and Family Risk of Dyslexia

    Science.gov (United States)

    Hayiou-Thomas, Marianna E.; Carroll, Julia M.; Leavett, Ruth; Hulme, Charles; Snowling, Margaret J.

    2017-01-01

    Background: This study considers the role of early speech difficulties in literacy development, in the context of additional risk factors. Method: Children were identified with speech sound disorder (SSD) at the age of 3½ years, on the basis of performance on the Diagnostic Evaluation of Articulation and Phonology. Their literacy skills were…

  17. Speech abilities in preschool children with speech sound disorder with and without co-occurring language impairment.

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A

    2014-10-01

    The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different tests of articulation/phonology, percent consonants correct, and the number of omission, substitution, distortion, typical, and atypical error patterns used in the production of different wordlists that had similar levels of phonetic and structural complexity. In comparison with children with SSD only, children with SSD and LI used similar numbers but different types of errors, including more omission patterns (p < .001, d = 1.55) and fewer distortion patterns (p = .022, d = 1.03). There were no significant differences in substitution, typical, and atypical error pattern use. Frequent omission error pattern use may reflect a more compromised linguistic system characterized by absent phonological representations for target sounds (see Shriberg et al., 2005). Research is required to examine the diagnostic potential of early frequent omission error pattern use in predicting later diagnoses of co-occurring SSD and LI and/or reading problems.
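
    The comparison reported above pairs an independent-samples t test with Cohen's d for effect size. A minimal sketch with fabricated counts (not the study's data):

        import numpy as np
        from scipy import stats

        ssd_li = np.array([9, 11, 8, 12, 10, 9])    # omission patterns, SSD + LI group (invented)
        ssd_only = np.array([3, 4, 2, 5, 3, 4])     # omission patterns, SSD-only group (invented)

        t, p = stats.ttest_ind(ssd_li, ssd_only)    # independent-samples t test
        pooled_sd = np.sqrt((ssd_li.var(ddof=1) + ssd_only.var(ddof=1)) / 2)  # equal group sizes
        d = (ssd_li.mean() - ssd_only.mean()) / pooled_sd   # Cohen's d
        print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")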

  18. Comparing Traditional Service Delivery and Telepractice for Speech Sound Production Using a Functional Outcome Measure.

    Science.gov (United States)

    Coufal, Kathy; Parham, Douglas; Jakubowitz, Melissa; Howell, Cassandra; Reyes, Jared

    2018-02-06

    Using American Speech-Language-Hearing Association's (ASHA's) National Outcomes Measurement System (NOMS) Functional Communication Measure (FCM) as a common metric, this investigation compared traditional service delivery and telepractice service delivery for children receiving therapy for the NOMS diagnostic category of "speech sound production." De-identified cases were secured from ASHA's NOMS database and a proprietary database from a private e-learning provider. Cases were included if they met 3 criteria: (a) children received treatment exclusively for speech sound production, (b) they were between 6.0 and 9.5 years old, and (c) they received therapy lasting between 4 and 9 months. A total of 1,331 ASHA NOMS cases and 428 telepractice cases were included. The 2 groups were matched by initial FCM scores. Mann-Whitney U tests were completed to compare differences in the median change scores (the difference between the initial and final FCM scores) between the 2 groups. There were no significant differences in the median change scores between the traditional group and the telepractice group. These results suggest comparable treatment outcomes between traditional service delivery and telepractice for treatment of children exhibiting speech sound disorders. The findings provide support for the use of telepractice for school-age children.
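
    A minimal sketch of the Mann-Whitney U comparison of median change scores described above, with invented ordinal change scores (real FCM ratings are 7-point scales, so change scores are small integers):

        from scipy import stats

        change_traditional = [1, 2, 1, 3, 2, 1, 2, 2]    # invented FCM change scores
        change_telepractice = [2, 1, 2, 2, 1, 3, 1, 2]   # invented FCM change scores

        u, p = stats.mannwhitneyu(change_traditional, change_telepractice,
                                  alternative="two-sided")
        print(f"U = {u}, p = {p:.3f}")   # a large p is consistent with 'no difference'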

  19. Adolescent outcomes of children with early speech sound disorders with and without language impairment.

    Science.gov (United States)

    Lewis, Barbara A; Freebairn, Lisa; Tag, Jessica; Ciesla, Allison A; Iyengar, Sudha K; Stein, Catherine M; Taylor, H Gerry

    2015-05-01

    In this study, the authors determined adolescent speech, language, and literacy outcomes of individuals with histories of early childhood speech sound disorders (SSD) with and without comorbid language impairment (LI) and examined factors associated with these outcomes. This study used a prospective longitudinal design. Participants with SSD (n = 170), enrolled at early childhood (4-6 years) were followed at adolescence (11-18 years) and were compared to individuals with no histories of speech or language impairment (no SSD; n = 146) on measures of speech, language, and literacy. Comparisons were made between adolescents with early childhood histories of no SSD, SSD only, and SSD plus LI as well as between adolescents with no SSD, resolved SSD, and persistent SSD. Individuals with early childhood SSD with comorbid LI had poorer outcomes than those with histories of SSD only or no SSD. Poorer language and literacy outcomes in adolescence were associated with multiple factors, including persistent speech sound problems, lower nonverbal intelligence, and lower socioeconomic status. Adolescents with persistent SSD had higher rates of comorbid LI and reading disability than the no SSD and resolved SSD groups. Risk factors for language and literacy problems in adolescence include an early history of LI, persistent SSD, lower nonverbal cognitive ability, and social disadvantage.

  20. Speech-language pathologists' practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders.

    Science.gov (United States)

    McLeod, Sharynne; Baker, Elise

    2014-01-01

    A survey of 231 Australian speech-language pathologists (SLPs) was undertaken to describe practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders (SSD). The participants typically worked in private practice, education, or community health settings, and 67.6% had a waiting list for services. For each child, most of the SLPs spent 10-40 min in pre-assessment activities, 30-60 min undertaking face-to-face assessments, and 30-60 min completing paperwork after assessments. During an assessment, SLPs typically conducted a parent interview, undertook single-word speech sampling, collected a connected speech sample, and used informal tests. They also determined children's stimulability and estimated intelligibility. With multilingual children, informal assessment procedures and English-only tests were commonly used, and SLPs relied on family members or interpreters to assist. Common analysis techniques included determination of phonological processes, substitutions-omissions-distortions-additions (SODA), and phonetic inventory. Participants placed high priority on selecting target sounds that were stimulable, early developing, and in error across all word positions, and 60.3% felt very confident or confident selecting an appropriate intervention approach. Eight intervention approaches were frequently used: auditory discrimination, minimal pairs, cued articulation, phonological awareness, traditional articulation therapy, auditory bombardment, Nuffield Centre Dyspraxia Programme, and core vocabulary. Children typically received individual therapy with an SLP in a clinic setting. Parents often observed and participated in sessions, and SLPs typically included siblings and grandparents in intervention sessions. Parent training and home programs were more frequently used than group therapy. Two-thirds kept up to date by reading journal articles monthly or every 6 months. There were many similarities with…

  1. Investigating the relationship between phonological awareness and phonological processes in children with speech sound disorders

    Directory of Open Access Journals (Sweden)

    Navideh Shakeri

    2014-12-01

    Background and Aim: Some children with speech sound disorder (SSD) have difficulty with phonological awareness skills; therefore, the purpose of this study was to survey the correlation between phonological processes and phonological awareness. Methods: Twenty-one children with speech sound disorder, aged between 5 and 6, participated in this cross-sectional study. They were recruited from speech therapy clinics at the Tehran University of Medical Sciences and selected using the convenience sampling method. Language, speech sound, and phonological awareness skills were investigated with the Test of Language Development-Third Edition (TOLD-3), the Persian Diagnostic Evaluation of Articulation and Phonology test, and the phonological awareness test. Both Pearson's and Spearman's correlations were used to analyze the data. Results: There was a significant correlation between the atypical phonological processes and alliteration awareness (p=0.005), rhyme awareness (p=0.009), blending phonemes (p=0.006), identification of words with the same initial phoneme (p=0.007), and identification of words with the same final phoneme (p=0.007). Analyzing the correlations separately on the basis of phoneme and syllable structure showed a significant correlation between the atypical phoneme structure and alliteration awareness (p=0.001), rhyme awareness (p=0.008), blending phonemes (p=0.029), identification of words with the same initial phoneme (p=0.007), and identification of words with the same final phoneme (p=0.003). Conclusion: The results revealed a relationship between phonological processes and phonological awareness in children with speech sound disorder. Poor phonological awareness was associated with atypical phonological processes, especially at the phoneme level.
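
    A minimal sketch of the Pearson/Spearman analysis described above, pairing an atypical-process count with an awareness score per child (all data invented):

        from scipy import stats

        atypical_processes = [0, 1, 3, 2, 5, 4, 1, 0]   # per child, invented
        rhyme_awareness =    [9, 8, 5, 6, 2, 3, 7, 10]  # per child, invented

        r, p_r = stats.pearsonr(atypical_processes, rhyme_awareness)
        rho, p_rho = stats.spearmanr(atypical_processes, rhyme_awareness)
        print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); "
              f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")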

  2. Speech sound disorders or differences: Insights from bilingual children speaking two Chinese languages.

    Science.gov (United States)

    Lam, Kitty K Y; To, Carol K S

    2017-11-01

    The study investigated how Putonghua-Cantonese bilingual children differ from monolinguals in their acquisition of speech sounds and phonological patterns. Fifty-four typically developing Putonghua-Cantonese bilingual children aged 3;6-6;0 were recruited from nurseries in the North District of Hong Kong. The Hong Kong Cantonese Articulation Test (Cheung et al., 2006) and a Putonghua picture-naming task (Zhu & Dodd, 2000) were used to elicit single-word samples in both languages. The speech sounds acquired and the phonological patterns exhibited by ≥20% of the children in an age group were compared with normative data on monolingual native speakers of Cantonese or Putonghua. The bilingual children demonstrated smaller sound inventories in both languages and more delayed and atypical phonological processes. The atypical patterns could be explained by phonological interference between Putonghua and Cantonese. The findings serve as a preliminary reference for clinicians in differentiating language difference from true speech sound disorders in Putonghua-Cantonese bilingual children in Hong Kong. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Speech Coding in the Brain: Representation of Vowel Formants by Midbrain Neurons Tuned to Sound Fluctuations

    Science.gov (United States)

    Carney, Laurel H; Li, Tianhao; McDonough, Joyce M

    2015-01-01

    Current models for neural coding of vowels are typically based on linear descriptions of the auditory periphery, and fail at high sound levels and in background noise. These models rely on either auditory nerve discharge rates or phase locking to temporal fine structure. However, both discharge rates and phase locking saturate at moderate to high sound levels, and phase locking is degraded in the CNS at middle to high frequencies. The fact that speech intelligibility is robust over a wide range of sound levels is problematic for codes that deteriorate as the sound level increases. Additionally, a successful neural code must function for speech in background noise at levels that are tolerated by listeners. The model presented here resolves these problems, and incorporates several key response properties of the nonlinear auditory periphery, including saturation, synchrony capture, and phase locking to both fine structure and envelope temporal features. The model also includes the properties of the auditory midbrain, where discharge rates are tuned to amplitude fluctuation rates. The nonlinear peripheral response features create contrasts in the amplitudes of low-frequency neural rate fluctuations across the population. These patterns of fluctuations result in a response profile in the midbrain that encodes vowel formants over a wide range of levels and in background noise. The hypothesized code is supported by electrophysiological recordings from the inferior colliculus of awake rabbits. This model provides information for understanding the structure of cross-linguistic vowel spaces, and suggests strategies for automatic formant detection and speech enhancement for listeners with hearing loss.

  4. Early-latency categorical speech sound representations in the left inferior frontal gyrus.

    Science.gov (United States)

    Alho, Jussi; Green, Brannon M; May, Patrick J C; Sams, Mikko; Tiitinen, Hannu; Rauschecker, Josef P; Jääskeläinen, Iiro P

    2016-04-01

    Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.

    Science.gov (United States)

    Dillier, Norbert; Lai, Wai Kong

    2015-06-11

    The Nucleus® 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies with the CP810 had compared performance and usability of the new processor in comparison with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested to be used in noisy environments, Zoom and Beam, for various sound field conditions using a standardized speech-in-noise matrix test (Oldenburg sentences test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal-to-noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions both Zoom and Beam could improve the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found. The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and…
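
    The adaptive procedure mentioned above adjusts the signal-to-noise ratio trial by trial to converge on 50% intelligibility. The sketch below shows a generic 1-up/1-down staircase of that kind; the step size, stopping rule, and simulated listener are illustrative, not the Oldenburg test's actual algorithm.

        import random

        def run_trial(snr_db: float) -> bool:
            """Stand-in for a real trial: True if >= 50% of words were correct.
            Fakes a listener whose true SRT is -6 dB SNR (logistic psychometric fn)."""
            return random.random() < 1 / (1 + 10 ** (-(snr_db + 6.0)))

        snr, step, reversals, history = 0.0, 2.0, 0, []
        last_correct = None
        while reversals < 8:                      # stop after 8 direction changes
            correct = run_trial(snr)
            if last_correct is not None and correct != last_correct:
                reversals += 1
                history.append(snr)               # record SNR at each reversal
            snr += -step if correct else step     # harder after success, easier after failure
            last_correct = correct

        print(f"estimated SRT ~ {sum(history) / len(history):.1f} dB SNR")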

  6. Acoustic and auditory phonetics: the adaptive design of speech sound systems.

    Science.gov (United States)

    Diehl, Randy L

    2008-03-12

    Speech perception is remarkably robust. This paper examines how acoustic and auditory properties of vowels and consonants help to ensure intelligibility. First, the source-filter theory of speech production is briefly described, and the relationship between vocal-tract properties and formant patterns is demonstrated for some commonly occurring vowels. Next, two accounts of the structure of preferred sound inventories, quantal theory and dispersion theory, are described and some of their limitations are noted. Finally, it is suggested that certain aspects of quantal and dispersion theories can be unified in a principled way so as to achieve reasonable predictive accuracy.
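
    The source-filter relationship mentioned above has a convenient textbook special case: a uniform tube closed at the glottis and open at the lips resonates at odd quarter-wavelength frequencies, which approximates the formants of a neutral (schwa-like) vowel. A worked sketch with a typical adult tract length:

        SPEED_OF_SOUND = 35000.0   # cm/s, warm moist air in the vocal tract
        tract_length = 17.5        # cm, a typical adult male vocal tract

        for n in (1, 2, 3):
            formant = (2 * n - 1) * SPEED_OF_SOUND / (4 * tract_length)
            print(f"F{n} ~ {formant:.0f} Hz")   # ~500, 1500, 2500 Hz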

  7. The masking effect in foreign speech sounds perception revealed by neuromagnetic responses.

    Science.gov (United States)

    Koyama, S; Gunji, A; Yabe, H; Yamada, R A; Oiwa, S; Kubo, R; Kakigi, R

    2000-11-27

    The backward masking effect on non-native consonants by a following vowel was examined using neuromagnetic responses to synthesized speech sounds. Native speakers of Japanese were presented with sequences of frequent (85%) and infrequent (15%) speech sounds (/ra/ and /la/, respectively; Japanese has no /l/-/r/ contrast). The duration of the stimuli was 110 ms in a short session and 150 ms in a long session. In the short session, the stimuli were terminated in the course of the transition from the consonant to the vowel to diminish the masking effect from the vowel part. A distinct magnetic counterpart of mismatch negativity (MMNm) was observed for the short session, whereas a smaller MMNm was observed for the long session.

  8. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity

    NARCIS (Netherlands)

    Jansson-Verkasalo, E.; Eggers, K.; Järvenpää, A.; Suominen, K.; Van Den Bergh, B.R.H.; de Nil, L.; Kujala, T.

    2014-01-01

    Purpose: Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are…

  9. Acoustic cues in the perception of second language speech sounds

    Science.gov (United States)

    Bogacka, Anna A.

    2004-05-01

    The experiment examined which acoustic cues Polish learners of English attend to when distinguishing between English high vowels. Predictions concerned the influence of the Polish vowel system (no duration differences and only one vowel in the high back vowel region), the salience of duration cues, and L1 orthography. Thirty-seven Polish subjects and a control group of English native speakers identified stimuli from heed-hid and who'd-hood continua varying in spectral and duration steps. Identification scores by spectral and duration steps and F1/F2 plots of identifications are reported, along with comments on fundamental frequency variation. English subjects relied strongly on spectral cues (typical categorical perception) and reacted hardly at all to temporal cues. Polish subjects relied strongly on temporal cues for both continua but showed a reversed pattern of identification for the who'd-hood contrast. Their reliance on spectral cues was weak and showed a reversed pattern for the heed-hid contrast. The results were interpreted with reference to the speech learning model [Flege (1995)], the perceptual assimilation model [Best (1995)], and the ontogeny phylogeny model [Major (2001)].

  10. Speech-language pathologists' assessment practices for children with suspected speech sound disorders: results of a national survey.

    Science.gov (United States)

    Skahan, Sarah M; Watson, Maggie; Lof, Gregory L

    2007-08-01

    This study examined assessment procedures used by speech-language pathologists (SLPs) when assessing children suspected of having speech sound disorders (SSD). This national survey also determined the information participants obtained from clients' speech samples, evaluation of non-native English speakers, and time spent on assessment. One thousand surveys were mailed to a randomly selected group of SLPs, self-identified as having worked with children with SSD. A total of 333 (33%) surveys were returned. The assessment tasks most frequently used included administering a commercial test, estimating intelligibility, assessing stimulability, and conducting a hearing screening. The amount of time dedicated to assessment activities (e.g., administering formal tests, contacting parents) varied across participants and was significantly related to years of experience but not caseload size. Most participants reported using informal assessment procedures, or English-only standardized tests, when evaluating non-native English speakers. Most participants provided assessments that met federal guidelines to qualify children for special education services; however, additional assessment may be needed to create comprehensive treatment plans for their clients. These results provide a unique perspective on the assessment of children suspected of having SSD and should be helpful to SLPs as they examine their own assessment practices.

  11. Listening to an Audio Drama Activates Two Processing Networks, One for All Sounds, Another Exclusively for Speech

    Science.gov (United States)

    Boldt, Robert; Malinen, Sanna; Seppä, Mika; Tikka, Pia; Savolainen, Petri; Hari, Riitta; Carlson, Synnöve

    2013-01-01

    Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two (covering non-overlapping areas of the auditory cortex) were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.

  12. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two (covering non-overlapping areas of the auditory cortex) were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
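
    A leave-one-out intersubject correlation (ISC) of the kind computed above correlates each subject's voxel time course with the average time course of all other subjects. A toy sketch on synthetic data (dimensions and signal are invented; only the subject count matches the study):

        import numpy as np

        rng = np.random.default_rng(0)
        n_subjects, n_voxels, n_timepoints = 13, 100, 500
        data = rng.standard_normal((n_subjects, n_voxels, n_timepoints))
        data += rng.standard_normal((n_voxels, n_timepoints))  # shared stimulus-driven signal

        isc = np.zeros((n_subjects, n_voxels))
        for s in range(n_subjects):
            others = data[np.arange(n_subjects) != s].mean(axis=0)  # leave-one-out average
            for v in range(n_voxels):
                isc[s, v] = np.corrcoef(data[s, v], others[v])[0, 1]

        print("mean ISC per voxel, first 5:", np.round(isc.mean(axis=0)[:5], 2))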

  13. Hemispheric lateralization in an analysis of speech sounds. Left hemisphere dominance replicated in Japanese subjects.

    Science.gov (United States)

    Koyama, S; Gunji, A; Yabe, H; Oiwa, S; Akahane-Yamada, R; Kakigi, R; Näätänen, R

    2000-09-01

    Evoked magnetic responses to speech sounds [Näätänen et al., Language-specific phoneme representations revealed by electric and magnetic brain responses, Nature, 385 (1997) 432-434] were recorded from 13 right-handed Japanese subjects. Infrequently presented vowels ([o]) among repetitive vowels ([e]) elicited the magnetic counterpart of mismatch negativity, MMNm (bilateral in nine subjects; left hemisphere alone in three subjects; right hemisphere alone in one subject). The estimated source of the MMNm was stronger in the left than in the right auditory cortex, and the sources were located more posteriorly in the left than in the right auditory cortex. These findings are consistent with the results obtained in Finnish subjects [Näätänen et al., Nature, 385 (1997) 432-434; Rinne et al., Analysis of speech sounds is left-hemisphere predominant at 100-150 ms after sound onset, Neuroreport, 10 (1999) 1113-1117] and English subjects [Alho et al., Hemispheric lateralization in preattentive processing of speech sounds, Neurosci. Lett., 258 (1998) 9-12]. Instead of the P1m observed in Finnish [Tervaniemi et al., Functional specialization of the human auditory cortex in processing phonetic and musical sounds: a magnetoencephalographic (MEG) study, Neuroimage, 9 (1999) 330-336] and English subjects…

  14. Oral and Hand Movement Speeds Are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    Science.gov (United States)

    Peter, Beate

    2012-01-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…

  15. How Should Children with Speech Sound Disorders be Classified? A Review and Critical Evaluation of Current Classification Systems

    Science.gov (United States)

    Waring, R.; Knight, R.

    2013-01-01

    Background: Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of…

  16. Predicting Individual Differences in Reading and Spelling Skill With Artificial Script-Based Letter-Speech Sound Training.

    Science.gov (United States)

    Aravena, Sebastián; Tijms, Jurgen; Snellings, Patrick; van der Molen, Maurits W

    2017-06-01

    In this study, we examined the learning of letter-speech sound correspondences within an artificial script and performed an experimental analysis of letter-speech sound learning among dyslexic and normal readers vis-à-vis phonological awareness, rapid automatized naming, reading, and spelling. Participants were provided with 20 min of training aimed at learning eight new basic letter-speech sound correspondences, followed by a short assessment of mastery of the correspondences and word-reading ability in this unfamiliar script. Our results demonstrated that brief training is moderately successful in differentiating dyslexic readers from normal readers in their ability to learn letter-speech sound correspondences. The normal readers outperformed the dyslexic readers for accuracy and speed on a letter-speech sound matching task, as well as on a word-reading task containing familiar words written in the artificial orthography. Importantly, the new artificial script-related measures were related to phonological awareness and rapid automatized naming and made a unique contribution in predicting individual differences in reading and spelling ability. Our results are consistent with the view that a fundamental letter-speech sound learning deficit is a key factor in dyslexia.

  17. Perceptions of The Seriousness of Mispronunciations of English Speech Sounds

    Directory of Open Access Journals (Sweden)

    Moedjito Moedjito

    2006-01-01

    The present study attempts to investigate Indonesian EFL teachers' and native English speakers' perceptions of mispronunciations of English sounds by Indonesian EFL learners. For this purpose, a paper-form questionnaire consisting of 32 target mispronunciations was distributed to Indonesian secondary school teachers of English and to native English speakers. An analysis of the respondents' perceptions revealed that 14 of the 32 target mispronunciations are pedagogically significant for pronunciation instruction. A further analysis of the reasons for these major mispronunciations reconfirmed the prevalence of interference from learners' native language as a major cause of mispronunciation in English. It also revealed a tendency among Indonesian EFL teachers to overestimate the seriousness of their learners' mispronunciations. Based on these findings, the study makes suggestions for better English pronunciation teaching in Indonesia and other EFL countries.

  18. Perceptually Salient Sound Distortions and Apraxia of Speech: A Performance Continuum.

    Science.gov (United States)

    Haley, Katarina L; Jacks, Adam; Richardson, Jessica D; Wambaugh, Julie L

    2017-06-22

    We sought to characterize articulatory distortions in apraxia of speech and aphasia with phonemic paraphasia and to evaluate the diagnostic validity of error frequency of distortion and distorted substitution in differentiating between these disorders. Study participants were 66 people with speech sound production difficulties after left-hemisphere stroke or trauma. They were divided into 2 groups on the basis of word syllable duration, which served as an external criterion for speaking rate in multisyllabic words and an index of likely speech diagnosis. Narrow phonetic transcriptions were completed for audio-recorded clinical motor speech evaluations, using 29 diacritic marks. Partial voicing and altered vowel tongue placement were common in both groups, and changes in consonant manner and place were also observed. The group with longer word syllable duration produced significantly more distortion and distorted-substitution errors than did the group with shorter word syllable duration, but variations were distributed on a performance continuum that overlapped substantially between groups. Segment distortions in focal left-hemisphere lesions can be captured with a customized set of diacritic marks. Frequencies of distortions and distorted substitutions are valid diagnostic criteria for apraxia of speech, but further development of quantitative criteria and dynamic performance profiles is necessary for clinical utility.

  19. The phonological memory profile of preschool children who make atypical speech sound errors.

    Science.gov (United States)

    Waring, Rebecca; Eadie, Patricia; Rickard Liow, Susan; Dodd, Barbara

    2018-01-01

    Previous research indicates that children with speech sound disorders (SSD) have underlying phonological memory deficits. The SSD population, however, is diverse. While children who make consistent atypical speech errors (phonological disorder/PhDis) are known to have executive function deficits in rule abstraction and cognitive flexibility, little is known about their memory profile. Sixteen monolingual preschool children with atypical speech errors (PhDis) were individually matched to age- and gender-matched peers with typically developing speech (TDS). The two groups were compared on forward recall of familiar words (pointing response), reverse recall of familiar words (pointing response), reverse recall of digits (spoken response), and a receptive vocabulary task. There were no differences between children with TDS and children with PhDis on the forward recall or vocabulary tasks. However, children with TDS significantly outperformed children with PhDis on the two reverse recall tasks. Findings suggest that atypical speech errors are associated with impaired phonological working memory, implicating executive function impairment in specific subtypes of SSD.

  20. IEP goals for school-age children with speech sound disorders.

    Science.gov (United States)

    Farquharson, Kelly; Tambyraja, Sherine R; Justice, Laura M; Redle, Erin E

    2014-01-01

    The purpose of the current study was to describe the current state of practice for writing Individualized Education Program (IEP) goals for children with speech sound disorders (SSDs). IEP goals for 146 children receiving services for SSDs within public school systems across two states were coded for their dominant theoretical framework and overall quality. A dichotomous scheme was used for theoretical framework coding: cognitive-linguistic or sensory-motor. Goal quality was determined by examining 7 specific indicators outlined by an empirically tested rating tool. In total, 147 long-term and 490 short-term goals were coded. The results revealed no dominant theoretical framework for long-term goals, whereas short-term goals largely reflected a sensory-motor framework. In terms of quality, the majority of speech production goals were functional and generalizable in nature, but were not able to be easily targeted during common daily tasks or by other members of the IEP team. Short-term goals were consistently rated higher in quality domains when compared to long-term goals. The current state of practice for writing IEP goals for children with SSDs indicates that theoretical framework may be eclectic in nature and likely written to support the individual needs of children with speech sound disorders. Further investigation is warranted to determine the relations between goal quality and child outcomes. Learning outcomes: (1) Identify two predominant theoretical frameworks and discuss how they apply to IEP goal writing. (2) Discuss quality indicators as they relate to IEP goals for children with speech sound disorders. (3) Discuss the relationship between long-term goals' level of quality and related theoretical frameworks. (4) Identify the areas in which business-as-usual IEP goals exhibit strong quality.

  1. Residual neural processing of musical sound features in adult cochlear implant users

    Directory of Open Access Journals (Sweden)

    Lydia eTimm

    2014-04-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioural study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 minutes. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users to pitch deviants were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioural scores from a corresponding discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users do not perform at the same level as NH controls in the neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioural and MMN findings highlight the residual neural skills for music processing even in CI users who were implanted in adolescence or adulthood.
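
    For reference, an MMN is typically derived as a deviant-minus-standard difference wave, with amplitude and latency read from the most negative peak in a post-stimulus window. A sketch on synthetic waveforms (a real pipeline would average many recorded epochs):

        import numpy as np

        fs = 500                                     # Hz
        t = np.arange(0, 0.4, 1 / fs)                # 0-400 ms epoch
        standard = 2.0 * np.sin(2 * np.pi * 4 * t)   # fake standard-stimulus ERP (uV)
        deviant = standard - 1.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))  # add an MMN dip

        diff = deviant - standard                    # difference wave
        window = (t >= 0.1) & (t <= 0.25)            # classic MMN latency window
        peak_idx = np.argmin(diff[window])
        print(f"MMN amplitude {diff[window][peak_idx]:.2f} uV "
              f"at {t[window][peak_idx] * 1000:.0f} ms")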

  2. Are the Literacy Difficulties That Characterize Developmental Dyslexia Associated with a Failure to Integrate Letters and Speech Sounds?

    Science.gov (United States)

    Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.

    2017-01-01

    The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…

  3. Idiosyncratic sound systems of the South African Bantu languages: Research and clinical implications for speech-language pathologists and audiologists

    Directory of Open Access Journals (Sweden)

    Anita van der Merwe

    2014-12-01

    The objective of this article is to create awareness amongst speech-language pathologists and audiologists in South Africa regarding the difference between the sound systems of Germanic languages and the sound systems of South African Bantu languages. A brief overview of the sound systems of two Bantu languages, namely isiZulu and Setswana, is provided. These two languages are representative of the Nguni language group and the Sotho group respectively. Consideration is given to the notion of language-specific symptoms of speech, language and hearing disorders in addition to universal symptoms. The possible impact of speech production, language and hearing disorders on the ability to produce and perceive speech in these languages, and the challenges that this holds for research and clinical practice, are pointed out.

  4. Idiosyncratic sound systems of the South African Bantu languages: Research and clinical implications for speech-language pathologists and audiologists.

    Science.gov (United States)

    Van der Merwe, Anita; le Roux, Mia

    2014-12-03

    The objective of this article is to create awareness amongst speech-language pathologists and audiologists in South Africa regarding the difference between the sound systems of Germanic languages and the sound systems of South African Bantu languages. A brief overview of the sound systems of two Bantu languages, namely isiZulu and Setswana, is provided. These two languages are representative of the Nguni language group and the Sotho group respectively. Consideration is given to the notion of language-specific symptoms of speech, language and hearing disorders in addition to universal symptoms. The possible impact of speech production, language and hearing disorders on the ability to produce and perceive speech in these languages, and the challenges that this holds for research and clinical practice, are pointed out.

  5. Processing of self-initiated speech-sounds is different in musicians.

    Science.gov (United States)

    Ott, Cyrill G M; Jäncke, Lutz

    2013-01-01

    Musicians and musically untrained individuals have been shown to differ in a variety of functional brain processes such as auditory analysis and sensorimotor interaction. At the same time, internally operating forward models are assumed to enable the organism to discriminate the sensory outcomes of self-initiated actions from other sensory events by deriving predictions from efference copies of motor commands about forthcoming sensory consequences. As a consequence, sensory responses to stimuli that are triggered by a self-initiated motor act are suppressed relative to the same but externally initiated stimuli, a phenomenon referred to as motor-induced suppression (MIS) of sensory cortical feedback. Moreover, MIS in the auditory domain has been shown to be modulated by the predictability of certain properties such as frequency or stimulus onset. The present study compares auditory processing of predictable and unpredictable self-initiated 0-delay speech sounds and piano tones between musicians and musical laymen by means of an event-related potential (ERP) and topographic pattern analysis (TPA) [microstate analysis or evoked potential (EP) mapping] approach. As in previous research on the topic of MIS, the amplitudes of the auditory event-related potential (AEP) N1 component were significantly attenuated for predictable and unpredictable speech sounds in both experimental groups to a comparable extent. On the other hand, AEP N1 amplitudes were enhanced for unpredictable self-initiated piano tones in both experimental groups similarly and MIS did not develop for predictable self-initiated piano tones at all. The more refined EP mapping revealed that the microstate exhibiting a typical auditory N1-like topography was significantly shorter in musicians when speech sounds and piano tones were self-initiated and predictable. In contrast, non-musicians only exhibited shorter auditory N1-like microstate durations in response to self-initiated and predictable piano tones…

  6. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can transfer to, and facilitate, the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.
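
    The vocal compensation measured above is conventionally expressed in cents relative to the pre-perturbation baseline F0. A minimal sketch with invented values:

        import math

        def cents(f_hz: float, ref_hz: float) -> float:
            """Pitch distance in cents; 100 cents = one semitone."""
            return 1200 * math.log2(f_hz / ref_hz)

        baseline_f0 = 220.0   # Hz, sustained vowel before the perturbation (invented)
        peak_f0 = 216.0       # Hz, peak response to a +100-cent upward feedback shift (invented)
        compensation = cents(peak_f0, baseline_f0)
        print(f"compensation = {compensation:.1f} cents (negative opposes an upward shift)")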

  7. Differential Diagnosis of Speech Sound Disorder (Phonological Disorder): Audiological Assessment beyond the Pure-tone Audiogram.

    Science.gov (United States)

    Iliadou, Vasiliki Vivian; Chermak, Gail D; Bamiou, Doris-Eva

    2015-04-01

    According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. The purpose of this study was to examine peripheral and central auditory function to determine whether a peripheral or central auditory disorder was an underlying factor in, or contributed to, the child's SSD. The study sample comprised pediatric case reports from a central auditory processing disorder clinic. Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech-language pathologists as a result of slower-than-expected progress in therapy. Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient's speech sound (phonological) disorder. Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry, given its limitations in revealing the full range of peripheral and central auditory deficits, which can compromise treatment of SSD. American Academy of Audiology.

  8. Using ultrasound visual biofeedback to treat persistent primary speech sound disorders.

    Science.gov (United States)

    Cleland, Joanne; Scobbie, James M; Wrench, Alan A

    2015-01-01

Growing evidence suggests that speech intervention using visual biofeedback may benefit people for whom visual skills are stronger than auditory skills (for example, the hearing-impaired population), especially when the target articulation is hard to describe or see. Diagnostic ultrasound can be used to image the tongue and has recently become more compact and affordable, leading to renewed interest in it as a practical, non-invasive visual biofeedback tool. In this study, we evaluate its effectiveness in treating children with persistent speech sound disorders that have been unresponsive to traditional therapy approaches. A case series of seven children (aged 6-11) with persistent speech sound disorders was evaluated. For each child, high-speed ultrasound (121 fps), audio and lip-video recordings were made while probing each child's specific errors at five different time points (before, during and after intervention). After intervention, all the children made significant progress on targeted segments, evidenced by both perceptual measures and changes in tongue shape.

  9. Effects of caffeine treatment for apnea of prematurity on cortical speech-sound differentiation in preterm infants.

    Science.gov (United States)

    Maitre, Nathalie L; Chan, Jeremy; Stark, Ann R; Lambert, Warren E; Aschner, Judy L; Key, Alexandra P

    2015-03-01

Caffeine, the standard treatment for apnea of prematurity, improves brainstem auditory processing. We hypothesized that caffeine also improves cortical differentiation of complex speech sounds. We used event-related potential methodology to measure responses to speech-sound contrasts in 45 intensive care neonates, stratified by cumulative exposure as no-, low-, and high-caffeine groups. Sound differentiation in the low-caffeine group and near-term no-caffeine infants was similar in a repeated-measures analysis of variance controlling for gestational and postnatal age. However, a generalized estimating equation approach demonstrated that, at equivalent postnatal age, differentiation was reduced in the high-caffeine group (gestational age 25 weeks) compared to the low-caffeine group (gestational age 28 weeks), reflecting the importance of maturity at birth (Z = 2.77). These findings suggest that caffeine treatment for apnea of prematurity cannot fully compensate for the effects of brain immaturity on speech sound processing. © The Author(s) 2014.

  10. What characterizes changing-state speech in affecting short-term memory? An EEG study on the irrelevant sound effect.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weisz, Nathan; Bertrand, Olivier

    2011-12-01

    The irrelevant sound effect (ISE) describes reduced verbal short-term memory during irrelevant changing-state sounds which consist of different and distinct auditory tokens. Steady-state sounds lack such changing-state features and do not impair performance. An EEG experiment (N=16) explored the distinguishing neurophysiological aspects of detrimental changing-state speech (3-token sequence) compared to ineffective steady-state speech (1-token sequence) on serial recall performance. We analyzed evoked and induced activity related to the memory items as well as spectral activity during the retention phase. The main finding is that the behavioral sound effect was exclusively reflected by attenuated token-induced gamma activation most pronounced between 50-60 Hz and 50-100 ms post-stimulus onset. Changing-state speech seems to disrupt a behaviorally relevant ongoing process during target presentation (e.g., the serial binding of the items). Copyright © 2011 Society for Psychophysiological Research.
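
    The distinction between evoked and induced activity used in this study is commonly operationalized as total power (single-trial time-frequency power averaged over trials) minus evoked power (power of the trial-averaged ERP); what remains is the non-phase-locked, induced part. A hedged numpy sketch of that decomposition with a hand-rolled complex Morlet wavelet; the sampling rate, trial counts and the 55 Hz probe frequency are illustrative, taken loosely from the 50-60 Hz band reported above.

        import numpy as np

        def morlet_power(x, freq, fs, n_cycles=7):
            # Time-resolved power of a 1-D signal at `freq` via a complex
            # Morlet wavelet spanning `n_cycles` cycles.
            sigma = n_cycles / (2 * np.pi * freq)        # Gaussian SD in seconds
            t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
            kernel = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
            kernel /= np.abs(kernel).sum()               # crude amplitude normalization
            return np.abs(np.convolve(x, kernel, mode="same")) ** 2

        def induced_power(trials, freq, fs):
            # trials: (n_trials, n_times). Induced = mean single-trial power
            # minus power of the trial-averaged ERP (removes phase-locked part).
            total = np.mean([morlet_power(tr, freq, fs) for tr in trials], axis=0)
            evoked = morlet_power(trials.mean(axis=0), freq, fs)
            return total - evoked

        # Illustration: 30 trials, 500 ms at 1 kHz, probed at 55 Hz
        fs = 1000.0
        rng = np.random.default_rng(1)
        trials = rng.standard_normal((30, 500))
        ind = induced_power(trials, 55.0, fs)
        print(ind[50:100].mean())    # mean induced power, 50-100 ms post-onset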

  11. Implementation fidelity of a computer-assisted intervention for children with speech sound disorders.

    Science.gov (United States)

    McCormack, Jane; Baker, Elise; Masso, Sarah; Crowe, Kathryn; McLeod, Sharynne; Wren, Yvonne; Roulstone, Sue

    2017-06-01

Implementation fidelity refers to the degree to which an intervention or programme adheres to its original design. This paper examines implementation fidelity in the Sound Start Study, a clustered randomised controlled trial of computer-assisted support for children with speech sound disorders (SSD). Sixty-three children with SSD in 19 early childhood centres received computer-assisted support (Phoneme Factory Sound Sorter [PFSS] - Australian version). Educators facilitated the delivery of PFSS targeting phonological error patterns identified by a speech-language pathologist. Implementation data were gathered via (1) the computer software, which recorded when and how much intervention was completed over 9 weeks; (2) educators' records of practice sessions; and (3) scoring of fidelity (intervention procedure, competence and quality of delivery) from videos of intervention sessions. Less than one-third of children received the prescribed number of days of intervention, while approximately one-half participated in the prescribed number of intervention plays. Computer data differed from educators' data for the total number of days and plays in which children participated; the degree of match was lower as data became more specific. Fidelity to intervention procedures, competency and quality of delivery was high. Implementation fidelity may affect intervention outcomes and so needs to be measured in intervention research; however, the way in which it is measured may affect the data obtained.

  12. Sleep duration predicts behavioral and neural differences in adult speech sound learning.

    Science.gov (United States)

    Earle, F Sayako; Landi, Nicole; Myers, Emily B

    2017-01-01

Sleep is important for memory consolidation and contributes to the formation of new perceptual categories. This study examined sleep as a source of variability in typical learners' ability to form new speech sound categories. We trained monolingual English speakers to identify a set of non-native speech sounds at 8 PM, and assessed their ability to identify and discriminate between these sounds immediately after training and at 8 AM on the following day. We tracked sleep duration overnight, and found that light sleep duration predicted gains in identification performance, while total sleep duration predicted gains in discrimination ability. Participants obtained an average of less than 6 h of sleep, pointing to the degree of sleep deprivation as a potential factor. Behavioral measures were associated with ERP indexes of neural sensitivity to the learned contrast. These results demonstrate that the relative success in forming new perceptual categories depends on the duration of post-training sleep. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Reading Without Speech Sounds: VWFA and its Connectivity in the Congenitally Deaf.

    Science.gov (United States)

    Wang, Xiaosha; Caramazza, Alfonso; Peelen, Marius V; Han, Zaizhu; Bi, Yanchao

    2015-09-01

    The placement and development of the visual word form area (VWFA) have commonly been assumed to depend, in part, on its connections with language regions. In this study, we specifically examined the effects of auditory speech experience deprivation in shaping the VWFA by investigating its location distribution, activation strength, and functional connectivity pattern in congenitally deaf participants. We found that the location and activation strength of the VWFA in congenitally deaf participants were highly comparable with those of hearing controls. Furthermore, while the congenitally deaf group showed reduced resting-state functional connectivity between the VWFA and the auditory speech area in the left anterior superior temporal gyrus, its intrinsic functional connectivity pattern between the VWFA and a fronto-parietal network was similar to that of hearing controls. Taken together, these results suggest that auditory speech experience has consequences for aspects of the word form-speech sound correspondence network, but that such experience does not significantly modulate the VWFA's placement or response strength. This is consistent with the view that the role of the VWFA might be to provide a representation that is suitable for mapping visual word forms onto language-specific gestures without the need to construct an aural representation. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Fricative Contrast and Coarticulation in Children With and Without Speech Sound Disorders.

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa

    2017-06-22

    The purpose of this study was, first, to expand our understanding of typical speech development regarding segmental contrast and anticipatory coarticulation, and second, to explore the potential diagnostic utility of acoustic measures of fricative contrast and anticipatory coarticulation in children with speech sound disorders (SSD). In a cross-sectional design, 10 adults, 17 typically developing children, and 11 children with SSD repeated carrier phrases with novel words with fricatives (/s/, /ʃ/). Dependent measures were 2 ratios derived from spectral mean, obtained from perceptually accurate tokens. Group analyses compared adults and typically developing children; individual children with SSD were compared to their respective typically developing peers. Typically developing children demonstrated smaller fricative acoustic contrast than adults but similar coarticulatory patterns. Three children with SSD showed smaller fricative acoustic contrast than their typically developing peers, and 2 children showed abnormal coarticulation. The 2 children with abnormal coarticulation both had a clinical diagnosis of childhood apraxia of speech; no clear pattern was evident regarding SSD subtype for smaller fricative contrast. Children have not reached adult-like speech motor control for fricative production by age 10 even when fricatives are perceptually accurate. Present findings also suggest that abnormal coarticulation but not reduced fricative contrast is SSD-subtype-specific. S1: https://doi.org/10.23641/asha.5103070. S2 and S3: https://doi.org/10.23641/asha.5106508.
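
    The "spectral mean" from which the two dependent ratios were derived is the first spectral moment of the fricative noise: the amplitude-weighted average frequency of its spectrum. A minimal sketch under the assumption of a power-spectrum weighting; the window length, sampling rate and the simple /s/-to-/ʃ/ contrast ratio below are illustrative, not the study's exact analysis parameters.

        import numpy as np

        def spectral_mean(signal, fs):
            # First spectral moment (centroid, Hz) of a windowed fricative slice.
            windowed = signal * np.hamming(len(signal))
            spectrum = np.abs(np.fft.rfft(windowed)) ** 2    # power spectrum
            freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
            return (freqs * spectrum).sum() / spectrum.sum()

        # Illustrative contrast ratio between /ʃ/ and /s/ tokens: values near 1
        # indicate little acoustic separation between the two fricatives.
        fs = 22050
        rng = np.random.default_rng(2)
        s_token, sh_token = rng.standard_normal(1024), rng.standard_normal(1024)
        contrast = spectral_mean(sh_token, fs) / spectral_mean(s_token, fs)
        print(round(contrast, 3))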

  15. Differences between the production of [s] and [ʃ] in the speech of adults, typically developing children, and children with speech sound disorders: An ultrasound study.

    Science.gov (United States)

    Francisco, Danira Tavares; Wertzner, Haydée Fiszbein

    2017-01-01

This study describes the criteria used in ultrasound imaging to measure the differences between the tongue contours that produce the [s] and [ʃ] sounds in the speech of adults, typically developing children (TDC), and children with speech sound disorder (SSD) with the phonological process of palatal fronting. Overlapping images of the tongue contours from 35 subjects producing the [s] and [ʃ] sounds were analysed to select 11 spokes on the radial grid that were spread over the tongue contour. For each spoke, the difference between the mean contour of the [s] and [ʃ] sounds was calculated. A cluster analysis produced groups with some consistency in the pattern of articulation across subjects; it differentiated adults from TDC to some extent and identified children with SSD with a high level of success. Children with SSD were less likely to show differentiation of the tongue contours between the articulations of [s] and [ʃ].
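
    The contour comparison described above reduces to sampling each speaker's mean [s] and mean [ʃ] tongue contour at the same fixed angles on a radial grid and differencing the two curves spoke by spoke. The 11-spoke grid comes from the abstract; the coordinate convention and the numbers below are assumed purely for illustration.

        import numpy as np

        def contour_difference(r_s, r_sh):
            # Per-spoke difference between mean [s] and [ʃ] tongue contours.
            # r_s, r_sh: radial distances (mm) from the probe origin to each
            # contour, sampled at the same spoke angles.
            return r_sh - r_s

        # Illustration: 11 spokes spread over the tongue contour (angles assumed)
        angles = np.linspace(30, 150, 11)    # degrees, for reference only
        r_s  = np.array([52, 54, 57, 60, 62, 63, 62, 60, 57, 54, 51], float)
        r_sh = np.array([52, 55, 59, 63, 66, 67, 65, 62, 58, 54, 51], float)
        diff = contour_difference(r_s, r_sh)
        print(diff.round(1), "mean |diff| =", np.abs(diff).mean().round(2))
        # A child who fronts palatals would show near-zero differences on all spokes.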

  16. Treatment intensity in everyday clinical management of speech sound disorders in Hong Kong.

    Science.gov (United States)

    To, Carol K S; Law, Thomas; Cheung, Pamela S P

    2012-10-01

Much evidence supports the efficacy of different treatment approaches for speech sound disorders (SSD) in children, but minimal research in the field has examined treatment intensity as a research variable. This study examined the current practice of speech-language pathologists (SLPs) in Hong Kong regarding the treatment intensity prescribed to children with SSD and the potential factors associated with that intensity. Participants were 102 SLPs working in different settings in Hong Kong who completed an online questionnaire. SLPs who had a heavier caseload offered significantly less frequent and shorter treatment to clients with SSD. Public and private settings differed significantly in treatment duration. Treatment approaches and clinicians' consideration of a client's condition did not affect treatment intensity. SLPs in Hong Kong do not plan treatment duration and frequency in an evidence-based manner because of their heavy workloads and the dearth of research evidence on treatment intensity to guide their clinical practice.

  17. Longitudinal changes in polysyllable maturity of preschool children with phonologically-based speech sound disorders.

    Science.gov (United States)

    Masso, Sarah; McLeod, Sharynne; Wang, Cen; Baker, Elise; McCormack, Jane

    2017-01-01

    Children's polysyllables were investigated for changes in (1) consonant and vowel accuracy, (2) error frequency and (3) polysyllable maturity over time. Participants were 80 children (4;0-5;4) with phonologically-based speech sound disorders who participated in the Sound Start Study and completed the Polysyllable Preschool Test (Baker, 2013) three times. Polysyllable errors were categorised using the Word-level Analysis of Polysyllables (WAP, Masso, 2016a) and the Framework of Polysyllable Maturity (Framework, Masso, 2016b), which represents five maturity levels (Levels A-E). Participants demonstrated increased polysyllable accuracy over time as measured by consonant and vowel accuracy, and error frequency. Children in Level A, the lowest level of maturity, had frequent deletion errors, alterations of phonotactics and alterations of timing. Participants in Level B were 8.62 times more likely to improve than children in Level A at Time 1. Children who present with frequent deletion errors may be less likely to improve their polysyllable accuracy.

  18. A systematic review and classification of interventions for speech-sound disorder in preschool children.

    Science.gov (United States)

    Wren, Yvonne; Harding, Sam; Goldbart, Juliet; Roulstone, Sue

    2018-01-16

    Multiple interventions have been developed to address speech sound disorder (SSD) in children. Many of these have been evaluated but the evidence for these has not been considered within a model which categorizes types of intervention. The opportunity to carry out a systematic review of interventions for SSD arose as part of a larger scale study of interventions for primary speech and language impairment in preschool children. To review systematically the evidence for interventions for SSD in preschool children and to categorize them within a classification of interventions for SSD. Relevant search terms were used to identify intervention studies published up to 2012, with the following inclusion criteria: participants were aged between 2 years and 5 years, 11 months; they exhibited speech, language and communication needs; and a primary outcome measure of speech was used. Studies that met inclusion criteria were quality appraised using the single case experimental design (SCED) or PEDro-P, depending on their methodology. Those judged to be high quality were classified according to the primary focus of intervention. The final review included 26 studies. Case series was the most common research design. Categorization to the classification system for interventions showed that cognitive-linguistic and production approaches to intervention were the most frequently reported. The highest graded evidence was for three studies within the auditory-perceptual and integrated categories. The evidence for intervention for preschool children with SSD is focused on seven out of 11 subcategories of interventions. Although all the studies included in the review were good quality as defined by quality appraisal checklists, they mostly represented lower-graded evidence. Higher-graded studies are needed to understand clearly the strength of evidence for different interventions. © 2018 Royal College of Speech and Language Therapists.

  19. Clinicians' management of young children with co-occurring stuttering and speech sound disorder.

    Science.gov (United States)

    Unicomb, Rachael; Hewat, Sally; Spencer, Elizabeth; Harrison, Elisabeth

    2013-08-01

    Speech sound disorders reportedly co-occur in young children who stutter at a substantial rate. Despite this, there is a paucity of scientific research available to support a treatment approach when these disorders co-exist. Similarly, little is known about how clinicians are currently working with this caseload given that best practice for the treatment of both disorders in isolation has evolved in recent years. This study used a qualitative approach to explore current clinical management and rationales when working with children who have co-occurring stuttering and speech sound disorder. Thirteen participant SLPs engaged in semi-structured telephone interviews. Interview data were analysed based on principles derived from grounded theory. Several themes were identified including multi-faceted assessment, workplace challenges, weighing-up the evidence, and direct intervention. The core theme, clinical reasoning, highlighted the participants' main concern, that not enough is known about this caseload on which to base decisions about intervention. There was consensus that little is available in the research literature to guide decisions relating to service delivery. These findings highlight the need for further research to provide evidence-based guidelines for clinical practice with this caseload.

  20. Processing of self-initiated speech-sounds is different in musicians

    Directory of Open Access Journals (Sweden)

    Cyrill Guy Martin Ott

    2013-02-01

Full Text Available Musicians and musically untrained individuals have been shown to differ in a variety of functional brain processes such as auditory analysis and sensorimotor interaction. At the same time, internally operating forward models are assumed to enable the organism to discriminate the sensory outcomes of self-initiated actions from other sensory events by deriving predictions from efference copies of motor commands about forthcoming sensory consequences. As a consequence, sensory responses to stimuli that are triggered by a self-initiated motor act are suppressed relative to the same but externally initiated stimuli, a phenomenon referred to as motor-induced suppression (MIS) of sensory cortical feedback. Moreover, MIS in the auditory domain has been shown to be modulated by the predictability of certain properties such as frequency or stimulus onset. The present study compares auditory processing of predictable and unpredictable self-initiated zero-delay speech sounds and piano tones between musicians and musical laymen by means of an event-related potential (ERP) and topographic pattern analysis (microstate analysis or EP mapping) approach. Taken together, our findings suggest that besides the known effect of MIS, internally operating forward models also facilitate early acoustic analysis of complex tones by means of faster processing time, as indicated by shorter auditory N1-like microstate durations in the first ~200 ms after stimulus onset. In addition, musicians seem to profit from this facilitation also during the analysis of speech sounds, as indicated by comparable auditory N1-like microstate duration patterns between speech and piano conditions. In contrast, non-musicians did not show such an effect.

  1. Evidence for the treatment of co-occurring stuttering and speech sound disorder: A clinical case series.

    Science.gov (United States)

    Unicomb, Rachael; Hewat, Sally; Spencer, Elizabeth; Harrison, Elisabeth

    2017-06-01

There is a paucity of evidence to guide treatment for children with co-occurring stuttering and speech sound disorder. Some guidelines suggest treating the two disorders simultaneously using indirect treatment approaches; however, the research supporting these recommendations is over 20 years old. In this clinical case series, we investigate whether these co-occurring disorders could be treated concurrently using direct treatment approaches supported by up-to-date, high-level evidence, and whether this could be done in an efficacious, safe and efficient manner. Five pre-school-aged participants received individual, concurrent, direct intervention for both stuttering and speech sound disorder. All participants used the Lidcombe Program, as manualised. Direct treatment for speech sound disorder was individualised based on analysis of each child's sound system. At 12 months post commencement of treatment, all except one participant had completed the Lidcombe Program and exhibited less than 1.0% syllables stuttered on samples gathered within and beyond the clinic. These four participants completed Stage 1 of the Lidcombe Program in between 14 and 22 clinic visits, consistent with current benchmark data for this programme. At the same assessment point, all five participants exhibited significant increases in percentage of consonants correct and were in alignment with age-expected estimates of this measure. Further, they were treated in an average number of clinic visits that compares favourably with other research on treatment for speech sound disorder. These preliminary results indicate that young children with co-occurring stuttering and speech sound disorder may be treated concurrently using direct treatment approaches. This method of service delivery may have implications for cost and time efficiency and may also address the crucial need for early intervention in both disorders. These positive findings highlight the need for further research in the area and contribute to

  2. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop

  3. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity.

    Science.gov (United States)

    Warlaumont, Anne S; Finnegan, Megan K

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant's nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model's frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one's own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop in
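
    The learning rule at the heart of the model described above can be summarized in a few lines: each pre-before-post spike pairing leaves an eligibility trace on the synapse, and the trace is converted into a weight change only when reward (dopamine) arrives. A schematic sketch of such a reward-modulated STDP update follows; all constants, sizes and names are invented for illustration and are not taken from the published model.

        import numpy as np

        # Reward-modulated STDP, schematic: eligibility traces gate learning on dopamine.
        n_pre, n_post = 100, 2        # e.g., reservoir -> agonist/antagonist motor groups
        w = np.random.rand(n_post, n_pre) * 0.1
        elig = np.zeros_like(w)       # per-synapse eligibility trace
        tau_e, lr = 50.0, 0.01        # trace decay (ms) and learning rate (assumed)

        def step(pre_spikes, post_spikes, dopamine, dt=1.0):
            # One simulation step. pre/post_spikes: binary vectors; dopamine:
            # scalar reward signal (raised when the vocalization is salient).
            global w, elig
            elig *= np.exp(-dt / tau_e)                 # traces decay over time
            elig += np.outer(post_spikes, pre_spikes)   # pre-before-post pairings
            w += lr * dopamine * elig                   # reward converts traces to weight change
            np.clip(w, 0.0, 1.0, out=w)                 # keep weights bounded

        # Illustration: a salient (rewarded) sound following a pre/post pairing
        rng = np.random.default_rng(3)
        step(rng.integers(0, 2, n_pre), np.array([1, 0]), dopamine=1.0)
        print(w.mean())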

  4. Preliteracy Speech Sound Production Skill and Linguistic Characteristics of Grade 3 Spellings: A Study Using the Templin Archive.

    Science.gov (United States)

    Overby, Megan S; Masterson, Julie J; Preston, Jonathan L

    2015-12-01

This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the time data were collected. Participants (N = 250), selected from the Templin Archive (Templin, 2004), varied on prekindergarten SSPS. Participants' real-word spellings in Grade 3 were evaluated using a metric of linguistic knowledge, the Computerized Spelling Sensitivity System (Masterson & Apel, 2013). Relationships between kindergarten speech error types and later spellings also were explored. Prekindergarten children in the lowest SSPS subgroup (7th percentile) scored poorest among articulatory subgroups on both individual spelling elements (phonetic elements, junctures, and affixes) and acceptable spelling (using relatively more omissions and illegal spelling patterns). Within the 7th percentile subgroup, there were no statistical spelling differences between those with mostly atypical speech sound errors and those with mostly typical speech sound errors. Findings were consistent with predictions from dual-route models of spelling: SSPS is one of many variables associated with spelling skill, and children with impaired SSPS are at risk for spelling difficulty.

  5. The influence of (central) auditory processing disorder on the severity of speech-sound disorders in children.

    Science.gov (United States)

    Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede

    2016-02-01

To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 to 10 years and 11 months who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.

6. The influence of (central) auditory processing disorder on the severity of speech-sound disorders in children

    Directory of Open Access Journals (Sweden)

    Nadia Vilela

    2016-02-01

Full Text Available OBJECTIVE: To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. METHODS: Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 to 10 years and 11 months who were divided into two different groups according to their (central) auditory processing evaluation results. RESULTS: When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. CONCLUSIONS: The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
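
    PCC-R (Percentage of Consonants Correct-Revised) itself is straightforward to compute: consonants produced correctly divided by consonants attempted, with common clinical distortions not counted as errors (the "Revised" part). A simplified sketch of the index and of the cutoff-based screen described in the two records above; the transcription comparison is reduced to segment-level matching, and the cutoff value is hypothetical, not the study's.

        def pcc_r(targets, productions):
            # PCC-R over aligned consonant lists. targets/productions: per-word
            # lists of intended vs produced consonants. Distortions are scored
            # as correct in PCC-R, so they are assumed to be mapped to the
            # target already; substitutions and omissions (None) count as errors.
            attempted = correct = 0
            for tgt, prod in zip(targets, productions):
                for t, p in zip(tgt, prod):
                    attempted += 1
                    correct += (p == t)
            return 100.0 * correct / attempted

        def flag_for_apd_referral(pcc, cutoff):
            # Screen: children below the empirically derived PCC-R cutoff are
            # more likely to also present a (central) auditory processing disorder.
            return pcc < cutoff

        # Illustration (cutoff value is hypothetical, not the study's):
        targets     = [["k", "t"], ["s", "p", "t"]]
        productions = [["t", "t"], ["s", "p", None]]
        score = pcc_r(targets, productions)
        print(score, flag_for_apd_referral(score, cutoff=85.0))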

  7. [The possibilities and limitations of the methods for the personality identification from the voice and sounding speech characteristics].

    Science.gov (United States)

    Kir'yanov, P A

    2015-01-01

This paper reports the results of a comparative analysis of the methods currently employed by state and private forensic expert agencies for personality identification based on the characteristics of the voice and sounding speech recorded on phonograms.

  8. The Sound-to-Speech Translations Utilizing Graphics Mediation Interface for Students with Severe Handicaps. Final Report.

    Science.gov (United States)

    Brown, Carrie; And Others

    This final report describes activities and outcomes of a research project on a sound-to-speech translation system utilizing a graphic mediation interface for students with severe disabilities. The STS/Graphics system is a voice recognition, computer-based system designed to allow individuals with mental retardation and/or severe physical…

  9. Intelligibility as a Clinical Outcome Measure Following Intervention with Children with Phonologically Based Speech-Sound Disorders

    Science.gov (United States)

    Lousada, M.; Jesus, Luis M. T.; Hall, A.; Joffe, V.

    2014-01-01

    Background: The effectiveness of two treatment approaches (phonological therapy and articulation therapy) for treatment of 14 children, aged 4;0-6;7 years, with phonologically based speech-sound disorder (SSD) has been previously analysed with severity outcome measures (percentage of consonants correct score, percentage occurrence of phonological…

  10. Evidence for a Familial Speech Sound Disorder Subtype in a Multigenerational Study of Oral and Hand Motor Sequencing Ability

    Science.gov (United States)

    Peter, Beate; Raskind, Wendy H.

    2011-01-01

    Purpose: To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method: Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results: Measures…

  11. Acquired Apraxia of Speech: The Effects of Repeated Practice and Rate/Rhythm Control Treatments on Sound Production Accuracy

    Science.gov (United States)

    Wambaugh, Julie L.; Nessler, Christina; Cameron, Rosalea; Mauszycki, Shannon C.

    2012-01-01

    Purpose: This investigation was designed to elucidate the effects of repeated practice treatment on sound production accuracy in individuals with apraxia of speech (AOS) and aphasia. A secondary purpose was to determine if the addition of rate/rhythm control to treatment provided further benefits beyond those achieved with repeated practice.…

  12. Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing.

    Science.gov (United States)

    Lyu, Bingjiang; Ge, Jianqiao; Niu, Zhendong; Tan, Li Hai; Gao, Jia-Hong

    2016-10-19

    Spoken language comprehension relies not only on the identification of individual words, but also on the expectations arising from contextual information. A distributed frontotemporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially in terms of individual words, remains unclear. Using fMRI, we addressed this question in the framework of the dual-stream model by scanning native speakers of Mandarin Chinese, a language highly dependent on context. We found that, within the ventral pathway, the violated expectations elicited stronger activations in the left anterior superior temporal gyrus and the ventral inferior frontal gyrus (IFG) for the phonological-semantic prediction of spoken words. Functional connectivity analysis showed that expectations were mediated by both top-down modulation from the left ventral IFG to the anterior temporal regions and enhanced cross-stream integration through strengthened connections between different subregions of the left IFG. By further investigating the dynamic causality within the dual-stream model, we elucidated how the human brain accomplishes sound-to-meaning mapping for words in a predictive manner. In daily communication via spoken language, one of the core processes is understanding the words being used. Effortless and efficient information exchange via speech relies not only on the identification of individual spoken words, but also on the contextual information giving rise to expected meanings. Despite the accumulating evidence for the bottom-up perception of auditory input, it is still not fully understood how the top-down modulation is achieved in the extensive frontotemporal cortical network. Here, we provide a comprehensive description of the neural substrates underlying sound-to-meaning mapping and demonstrate how the dual-stream model functions in the modulation of

  13. Working memory in school-age children with and without a persistent speech sound disorder.

    Science.gov (United States)

    Farquharson, Kelly; Hogan, Tiffany P; Bernthal, John E

    2017-03-17

The aim of this study was to explore the role of working memory processes as a possible cognitive underpinning of persistent speech sound disorders (SSD). Forty school-aged children were enrolled: 20 children with persistent SSD (P-SSD) and 20 typically developing children. Children participated in three working memory tasks, one targeting each of the components in Baddeley's working memory model: phonological loop, visuospatial sketchpad and central executive. Children with P-SSD performed poorly only on the phonological loop tasks compared to their typically developing age-matched peers. However, mediation analyses revealed that the relation between working memory and P-SSD was reliant upon nonverbal intelligence. These results suggest that co-morbid low-average nonverbal intelligence is linked to poor working memory in children with P-SSD. Theoretical and clinical implications are discussed.
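
    The mediation analysis reported here asks whether the working-memory/P-SSD relation survives once nonverbal IQ is accounted for. Regression-based mediation estimates the indirect effect as the product of the predictor-to-mediator coefficient and the mediator-to-outcome coefficient (controlling for the predictor). A minimal ordinary-least-squares sketch with simulated data standing in for the study's variables; all effect sizes below are invented.

        import numpy as np

        def ols(y, X):
            # OLS coefficients with an intercept column prepended.
            X = np.column_stack([np.ones(len(y)), X])
            return np.linalg.lstsq(X, y, rcond=None)[0]

        rng = np.random.default_rng(4)
        n = 40                                  # 20 P-SSD + 20 typical, as in the study
        group = np.repeat([1.0, 0.0], n // 2)   # 1 = persistent SSD
        nviq = -0.5 * group + rng.standard_normal(n)             # simulated nonverbal IQ
        pl = 0.6 * nviq - 0.2 * group + rng.standard_normal(n)   # phonological-loop score

        a = ols(nviq, group)[1]                                  # path a: group -> mediator
        coeffs = ols(pl, np.column_stack([nviq, group]))
        b, c_prime = coeffs[1], coeffs[2]                        # path b and direct effect
        print("indirect (a*b):", round(a * b, 3), "direct (c'):", round(c_prime, 3))
        # In practice the indirect effect is tested with bootstrap confidence intervals.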

  14. Treating children ages 3-6 who have speech sound disorder: a survey.

    Science.gov (United States)

    Brumbaugh, Klaire Mann; Smit, Ann Bosma

    2013-07-01

In a national survey, speech-language pathologists (SLPs) were asked about service delivery and interventions they use with children ages 3-6 who have speech sound disorder (SSD). The survey was e-mailed to 2,084 SLPs who worked in pre-elementary settings across the United States. Of these, 24% completed part or all of the survey, with 18% completing the entire survey. SLPs reported that they provided children ages 3-6 who had SSD with 30 or 60 min of treatment time weekly, regardless of group or individual setting. More SLPs indicated that they used traditional intervention than other types of intervention. However, many SLPs also reported using aspects of phonological interventions and providing phonological awareness training. Fewer SLPs indicated that they used nonspeech oral motor exercises than in a previous survey (Lof & Watson, 2008). Recently graduated SLPs were no more familiar with recent advances in phonological intervention than were their more experienced colleagues. Discussion: This study confirms previous findings (Mullen & Schooling, 2010) about the amount of service provided to children ages 3-6 who have SSD. Issues related to the use of traditional and phonological intervention with children who have phonological disorder are discussed, along with concerns related to evidence-based practice and research needs.

  15. An Australian survey of parent involvement in intervention for childhood speech sound disorders.

    Science.gov (United States)

    Sugden, Eleanor; Baker, Elise; Munro, Natalie; Williams, A Lynn; Trivette, Carol M

    2017-08-17

    To investigate how speech-language pathologists (SLPs) report involving parents in intervention for phonology-based speech sound disorders (SSDs), and to describe the home practice that they recommend. Further aims were to describe the training SLPs report providing to parents, to explore SLPs' beliefs and motivations for involving parents in intervention, and to determine whether SLPs' characteristics are associated with their self-reported practice. An online survey of 288 SLPs working with SSD in Australia was conducted. The majority of SLPs (96.4%) reported involving parents in intervention, most commonly in providing home practice. On average, these tasks were recommended to be completed five times per week for 10 min. SLPs reported training parents using a range of training methods, most commonly providing opportunities for parents to observe the SLP conduct the intervention. SLPs' place of work and years of experience were associated with how they involved and trained parents in intervention. Most (95.8%) SLPs agreed or strongly agreed that family involvement is essential for intervention to be effective. Parent involvement and home practice appear to be intricately linked within intervention for phonology-based SSDs in Australia. More high-quality research is needed to understand how to best involve parents within clinical practice.

  16. Tongue-palate contact during selected vowels in children with speech sound disorders.

    Science.gov (United States)

    Lee, Alice; Gibbon, Fiona E; Kearney, Elaine; Murphy, Doris

    2014-12-01

    There is evidence that complete tongue-palate contact across the palate during production of vowels can be observed in some children with speech disorders associated with cleft palate in the English-speaking and Japanese-speaking populations. Although it has been shown that this is not a feature of typical vowel articulation in English-speaking adults, tongue-palate contact during vowel production in typical children and English-speaking children with speech sound disorders (SSD) have not been reported in detail. Therefore, this study sought to determine whether complete tongue-palate contact occurs during production of five selected vowels in 10 children with SSD and eight typically-developing children. The results showed that none of the typical children had complete contact across the palate during any of the vowels. However, of the 119 vowels produced by the children with SSD, 24% showed complete contact across the palate during at least a portion of the vowel segment. The results from the typically-developing children suggest that complete tongue-palate contact is an atypical articulatory feature. However, the evidence suggests that this pattern occurs relatively frequently in children with SSD. Further research is needed to determine the prevalence, cause, and perceptual consequence of complete tongue-palate contact.

  17. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    Science.gov (United States)

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
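
    The spatiotemporal index (STI) used above to validate the new multivariate measures is conventionally computed by linearly time-normalizing each movement trajectory to a fixed number of points, amplitude-normalizing (z-scoring) each record, and summing the across-repetition standard deviations at each normalized time point. A numpy sketch under those standard assumptions; the 50-point grid and the toy trajectories are illustrative.

        import numpy as np

        def spatiotemporal_index(trajectories, n_points=50):
            # STI over repeated productions of the same utterance.
            # trajectories: list of 1-D kinematic records (e.g., lower-lip
            # displacement), possibly of different lengths.
            norm = []
            for traj in trajectories:
                t_old = np.linspace(0.0, 1.0, len(traj))
                t_new = np.linspace(0.0, 1.0, n_points)
                resampled = np.interp(t_new, t_old, traj)                # time-normalize
                norm.append((resampled - resampled.mean()) / resampled.std())  # z-score
            norm = np.vstack(norm)
            return norm.std(axis=0).sum()    # sum of SDs across normalized time

        # Illustration: higher STI = more variable articulation across repetitions
        rng = np.random.default_rng(5)
        base = np.sin(np.linspace(0, 2 * np.pi, 200))
        reps = []
        for _ in range(10):
            k = int(rng.integers(150, 201))                 # repetitions vary in duration
            reps.append(base[:k] + 0.05 * rng.standard_normal(k))
        print(round(spatiotemporal_index(reps), 2))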

  18. Involvement of parents in intervention for childhood speech sound disorders: a review of the evidence.

    Science.gov (United States)

    Sugden, Eleanor; Baker, Elise; Munro, Natalie; Williams, A Lynn

    2016-11-01

    Internationally, speech and language therapists (SLTs) are involving parents and providing home tasks in intervention for phonology-based speech sound disorder (SSD). To ensure that SLTs' involvement of parents is guided by empirical research, a review of peer-reviewed published evidence is needed. To provide SLTs and researchers with a comprehensive appraisal and analysis of peer-reviewed published intervention research reporting parent involvement and the provision of home tasks in intervention studies for children with phonology-based SSD. A systematic search and review was conducted. Academic databases were searched for peer-reviewed research papers published between 1979 and 2013 reporting on phonological intervention for SSD. Of the 176 papers that met the criteria, 61 were identified that reported on the involvement of parents and/or home tasks within the intervention. These papers were analysed using a quality appraisal tool. Details regarding the involvement of parents and home tasks were extracted and analysed to provide a summary of these practices within the evidence base. Parents have been involved in intervention research for phonology-based SSD. However, most of the peer-reviewed published papers reporting this research have provided limited details regarding what this involved. This paucity of information presents challenges for SLTs wishing to integrate external evidence into their clinical services and clinical decision-making. It also raises issues regarding treatment fidelity for researchers wishing to replicate published intervention research. The range of tasks in which parents were involved, and the limited details reported in the literature, present challenges for SLTs wanting to involve parents in intervention. Further high-quality research reporting more detail regarding the involvement of parents and home tasks in intervention for SSD is needed. © 2016 Royal College of Speech and Language Therapists.

  19. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children.

    Directory of Open Access Journals (Sweden)

    Nora Maria Raschle

Full Text Available Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound specific processing of spoken object words including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to adults by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain which may further our understanding of the neuronal mechanism of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.

  20. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children.

    Science.gov (United States)

    Raschle, Nora Maria; Smith, Sara Ashley; Zuk, Jennifer; Dauvermann, Maria Regina; Figuccio, Michael Joseph; Gaab, Nadine

    2014-01-01

Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound specific processing of spoken object words including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to adults by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain which may further our understanding of the neuronal mechanism of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.

  1. Evaluation of Speech Recognition of Cochlear Implant Recipients Using Adaptive, Digital Remote Microphone Technology and a Speech Enhancement Sound Processing Algorithm.

    Science.gov (United States)

    Wolfe, Jace; Morais, Mila; Schafer, Erin; Agrawal, Smita; Koch, Dawn

    2015-05-01

Cochlear implant recipients often experience difficulty with understanding speech in the presence of noise. Cochlear implant manufacturers have developed sound processing algorithms designed to improve speech recognition in noise, and research has shown these technologies to be effective. Remote microphone technology utilizing adaptive, digital wireless radio transmission has also been shown to provide significant improvement in speech recognition in noise. There are no studies examining the potential improvement in speech recognition in noise when these two technologies are used simultaneously. The goal of this study was to evaluate the potential benefits and limitations associated with the simultaneous use of a sound processing algorithm designed to improve performance in noise (Advanced Bionics ClearVoice) and a remote microphone system that incorporates adaptive, digital wireless radio transmission (Phonak Roger). A two-by-two repeated-measures design was used to examine performance differences obtained without these technologies compared to the use of each technology separately as well as the simultaneous use of both technologies. Eleven Advanced Bionics (AB) cochlear implant recipients, ages 11 to 68 yr. AzBio sentence recognition was measured in quiet and in the presence of classroom noise ranging in level from 50 to 80 dBA in 5-dB steps. Performance was evaluated in four conditions: (1) no ClearVoice and no Roger, (2) ClearVoice enabled without the use of Roger, (3) ClearVoice disabled with Roger enabled, and (4) simultaneous use of ClearVoice and Roger. Speech recognition in quiet was better than speech recognition in noise for all conditions. Use of ClearVoice and Roger each provided significant improvement in speech recognition in noise. The best performance in noise was obtained with the simultaneous use of ClearVoice and Roger. ClearVoice and Roger technology each improves speech recognition in noise, particularly when used at the same time.

  2. The effect of F0 contour on the intelligibility of speech in the presence of interfering sounds for Mandarin Chinese.

    Science.gov (United States)

    Chen, Jing; Yang, Hongying; Wu, Xihong; Moore, Brian C J

    2018-02-01

    In Mandarin Chinese, the fundamental frequency (F0) contour defines lexical "Tones" that differ in meaning despite being phonetically identical. Flattening the F0 contour impairs the intelligibility of Mandarin Chinese in background sounds. This might occur because the flattening introduces misleading lexical information. To avoid this effect, two types of speech were used: single-Tone speech contained Tones 1 and 0 only, which have a flat F0 contour; multi-Tone speech contained all Tones and had a varying F0 contour. The intelligibility of speech in steady noise was slightly better for single-Tone speech than for multi-Tone speech. The intelligibility of speech in a two-talker masker, with the difference in mean F0 between the target and masker matched across conditions, was worse for the multi-Tone target in the multi-Tone masker than for any other combination of target and masker, probably because informational masking was maximal for this combination. The introduction of a perceived spatial separation between the target and masker, via the precedence effect, led to better performance for all target-masker combinations, especially the multi-Tone target in the multi-Tone masker. In summary, a flat F0 contour does not reduce the intelligibility of Mandarin Chinese when the introduction of misleading lexical cues is avoided.
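
    The flat-F0 manipulation used in studies of this kind amounts to replacing the F0 of every voiced frame with a constant (typically the utterance mean) before resynthesis. A toy numpy sketch of the contour manipulation only, resynthesis (e.g., via PSOLA) being outside the scope of the example; the contour values and the voiced/unvoiced convention are invented for illustration.

        import numpy as np

        def flatten_f0(f0, voiced):
            # Replace the F0 of voiced frames with their mean, leaving
            # unvoiced frames (F0 undefined) untouched.
            # f0: per-frame F0 in Hz; voiced: boolean mask of voiced frames.
            flat = f0.copy()
            flat[voiced] = f0[voiced].mean()
            return flat

        # Illustration: a rising-falling contour with an unvoiced stretch
        f0 = np.array([180, 195, 210, 230, 0, 0, 250, 220, 190, 160], float)
        voiced = f0 > 0
        print(flatten_f0(f0, voiced))    # all voiced frames -> ~204 Hz mean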

  3. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.

  4. Individual differences in the discrimination of novel speech sounds: effects of sex, temporal processing, musical and cognitive abilities.

    Science.gov (United States)

    Kempe, Vera; Thoresen, John C; Kirk, Neil W; Schaeffler, Felix; Brooks, Patricia J

    2012-01-01

    This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.

  5. Musicians' Enhanced Neural Differentiation of Speech Sounds Arises Early in Life: Developmental Evidence from Ages 3 to 30

    Science.gov (United States)

    Strait, Dana L.; O'Connell, Samantha; Parbery-Clark, Alexandra; Kraus, Nina

    2014-01-01

    The perception and neural representation of acoustically similar speech sounds underlie language development. Music training hones the perception of minute acoustic differences that distinguish sounds; this training may generalize to speech processing given that adult musicians have enhanced neural differentiation of similar speech syllables compared with nonmusicians. Here, we asked whether this neural advantage in musicians is present early in life by assessing musically trained and untrained children as young as age 3. We assessed auditory brainstem responses to the speech syllables /ba/ and /ga/ as well as auditory and visual cognitive abilities in musicians and nonmusicians across 3 developmental time-points: preschoolers, school-aged children, and adults. Cross-phase analyses objectively measured the degree to which subcortical responses differed to these speech syllables in musicians and nonmusicians for each age group. Results reveal that musicians exhibit enhanced neural differentiation of stop consonants early in life and with as little as a few years of training. Furthermore, the extent of subcortical stop consonant distinction correlates with auditory-specific cognitive abilities (i.e., auditory working memory and attention). Results are interpreted according to a corticofugal framework for auditory learning in which subcortical processing enhancements are engendered by strengthened cognitive control over auditory function in musicians. PMID:23599166

  6. Musicians' enhanced neural differentiation of speech sounds arises early in life: developmental evidence from ages 3 to 30.

    Science.gov (United States)

    Strait, Dana L; O'Connell, Samantha; Parbery-Clark, Alexandra; Kraus, Nina

    2014-09-01

    The perception and neural representation of acoustically similar speech sounds underlie language development. Music training hones the perception of minute acoustic differences that distinguish sounds; this training may generalize to speech processing given that adult musicians have enhanced neural differentiation of similar speech syllables compared with nonmusicians. Here, we asked whether this neural advantage in musicians is present early in life by assessing musically trained and untrained children as young as age 3. We assessed auditory brainstem responses to the speech syllables /ba/ and /ga/ as well as auditory and visual cognitive abilities in musicians and nonmusicians across 3 developmental time-points: preschoolers, school-aged children, and adults. Cross-phase analyses objectively measured the degree to which subcortical responses differed to these speech syllables in musicians and nonmusicians for each age group. Results reveal that musicians exhibit enhanced neural differentiation of stop consonants early in life and with as little as a few years of training. Furthermore, the extent of subcortical stop consonant distinction correlates with auditory-specific cognitive abilities (i.e., auditory working memory and attention). Results are interpreted according to a corticofugal framework for auditory learning in which subcortical processing enhancements are engendered by strengthened cognitive control over auditory function in musicians. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Ingressive Speech Errors: A Service Evaluation of Speech-Sound Therapy in a Child Aged 4;6

    Science.gov (United States)

    Hrastelj, Laura; Knight, Rachael-Anne

    2017-01-01

    Background: A pattern of ingressive substitutions for word-final sibilants can be identified in a small number of cases in child speech disorder, with growing evidence suggesting it is a phonological difficulty, despite the unusual surface form. Phonological difficulty implies a problem with the cognitive process of organizing speech into sound…

  8. Problems in speech sound production in young children. An inventory study of the opinions of speech therapists

    NARCIS (Netherlands)

    Priester, G.H.; Post, W.J.; Goorhuis-Brouwer, S.M.

    Objective: Analysis of the examination procedures and diagnosis of articulation problems by speech therapists. Study design: Survey study. Materials and methods: Eighty-five Dutch speech therapists (23% response), working in private practices or involved in language screening procedures in Youth Health

  9. Sensitivity and specificity of the Percentage of Consonants Correct-Revised in the identification of speech sound disorder.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Pinheiro da Silva, Joyce; Wertzner, Haydée Fiszbein

    2017-05-22

    The purpose of the study was to determine the sensitivity and specificity of, and to establish cutoff points for, the severity index Percentage of Consonants Correct - Revised (PCC-R) in Brazilian Portuguese-speaking children with and without speech sound disorders. Participants were 72 children between 5;0 and 7;11 years old: 36 children without speech and language complaints and 36 children with speech sound disorders. The PCC-R was applied to the picture naming and word imitation tasks that are part of the ABFW Child Language Test. Results were statistically analyzed: ROC curves were computed and the sensitivity and specificity values of the index were verified. The group of children without speech sound disorders presented higher PCC-R values in both tasks, regardless of the gender of the participants. The cutoff value observed for the picture naming task was 93.4%, with a sensitivity of 0.89 and a specificity of 0.94 (age independent). For the word imitation task, results were age-dependent: for the age group ≤6;5 years old, the cutoff value was 91.0% (sensitivity of 0.77 and specificity of 0.94), and for the age group >6;5 years old, the cutoff value was 93.9% (sensitivity of 0.93 and specificity of 0.94). Given its high sensitivity and specificity, we conclude that the PCC-R index was effective in discriminating and identifying children with and without speech sound disorders.
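    The PCC-R itself is a simple ratio, so the cutoff-based screening reported in this record is straightforward to reproduce. The following minimal Python sketch is illustrative rather than the authors' software: the helper names are hypothetical, and only the 93.4% picture-naming cutoff and the convention that distortions count as correct come from the record and the standard PCC-R definition.

      # pccr_screening.py -- illustrative sketch, not the study's code.
      # PCC-R = (correct consonants / consonants attempted) * 100, where
      # distortions are scored as correct (the "Revised" convention).

      def pccr(correct_consonants: int, total_consonants: int) -> float:
          """Percentage of Consonants Correct - Revised."""
          if total_consonants == 0:
              raise ValueError("no consonants attempted")
          return 100.0 * correct_consonants / total_consonants

      def flag_for_ssd(score: float, cutoff: float = 93.4) -> bool:
          """Flag possible speech sound disorder when PCC-R falls below
          the picture-naming cutoff reported above (sensitivity 0.89,
          specificity 0.94); imitation-task cutoffs are age-dependent."""
          return score < cutoff

      score = pccr(correct_consonants=84, total_consonants=92)  # example counts
      print(f"PCC-R = {score:.1f}%, flagged: {flag_for_ssd(score)}")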

  10. Sensitivity of cortical auditory evoked potential detection for hearing-impaired infants in response to short speech sounds

    Directory of Open Access Journals (Sweden)

    Bram Van Dun

    2012-01-01

    Background: Cortical auditory evoked potentials (CAEPs) are an emerging tool for hearing aid fitting evaluation in young children who cannot provide reliable behavioral feedback. It is therefore useful to determine the relationship between the sensation level of speech sounds and the detection sensitivity of CAEPs.

    Design and methods: Twenty-five sensorineurally hearing impaired infants with an age range of 8 to 30 months were tested once, 18 aided and 7 unaided. First, behavioral thresholds of speech stimuli /m/, /g/, and /t/ were determined using visual reinforcement orientation audiometry (VROA. Afterwards, the same speech stimuli were presented at 55, 65, and 75 dB SPL, and CAEP recordings were made. An automatic statistical detection paradigm was used for CAEP detection.

    Results: For sensation levels above 0, 10, and 20 dB respectively, detection sensitivities were equal to 72 ± 10, 75 ± 10, and 78 ± 12%. In 79% of the cases, automatic detection p-values became smaller when the sensation level was increased by 10 dB.

    Conclusions: The results of this study suggest that the presence or absence of CAEPs can provide some indication of the audibility of a speech sound for infants with sensorineural hearing loss. The detection of a CAEP provides confidence, to a degree commensurate with the detection probability, that the infant is detecting that sound at the level presented. When testing infants where the audibility of speech sounds has not been established behaviorally, the lack of a cortical response indicates the possibility, but by no means a certainty, that the sensation level is 10 dB or less.

  11. Attention-related modulation of auditory-cortex responses to speech sounds during dichotic listening.

    Science.gov (United States)

    Alho, Kimmo; Salonen, Johanna; Rinne, Teemu; Medvedev, Svyatoslav V; Hugdahl, Kenneth; Hämäläinen, Heikki

    2012-03-09

    Event-related magnetic fields (ERFs) were measured with magnetoencephalography (MEG) in fifteen healthy right-handed participants listening to sequences of consonant-vowel syllable pairs delivered dichotically (one syllable presented to the left ear and another syllable simultaneously to the right ear). The participants were instructed to press a response button to occurrences of a particular target syllable. In a condition with no other instruction (the non-forced condition, NF), they showed the well-known right-ear advantage (REA), that is, the participants responded more often to target syllables delivered to the right ear than to targets delivered to the left ear. The same was true in the forced-right (FR) condition, where the participants were instructed to attend selectively to the right-ear syllables and respond only to targets among them. In the forced-left (FL) condition, where they were instructed to respond only to left-ear targets, they responded more often to targets in this ear than to targets in the right ear. At 300-500 ms from syllable pair onset, a sustained field (SF) in ERFs to the syllable pairs was stronger in the left auditory cortex than in the right auditory cortex in the NF and FR conditions, while the opposite was true in the FL condition. Thus selective attention during dichotic listening leads to stronger processing of speech sounds in the auditory cortex contralateral to the attended direction. Our results also suggest that the REA observed for dichotic speech may involve a bias of attention to the right side even when there is no instruction to do so. This supports Kinsbourne's (1970) model of attention bias as a general principle of laterality. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Cross-linguistic generalization in the treatment of two sequential Spanish-English bilingual children with speech sound disorders.

    Science.gov (United States)

    Gildersleeve-Neumann, Christina; Goldstein, Brian A

    2015-02-01

    The effect of bilingual service delivery on treatment of speech sound disorders (SSDs) in bilingual children is largely unknown. Bilingual children with SSDs are typically provided intervention in only one language, although research suggests dual-language instruction for language disorders is best practice for bilinguals. This study examined cross-linguistic generalization of bilingual intervention in treatment of two 5-year-old sequential bilingual boys with SSDs (one with Childhood Apraxia of Speech), hypothesizing that selecting and treating targets in both languages would result in significant overall change in their English and Spanish speech systems. A multiple baseline across behaviours design was used to measure treatment effectiveness for two targets per child. Children received treatment 2-3 times per week for 8 weeks and in Spanish for at least 2 of every 3 days. Ongoing treatment performance was measured in probes in both languages; overall speech skills were compared pre- and post-treatment. Both children's speech improved in both languages with similar magnitude; there was improvement in some non-treated errors. Treating both languages had an overall positive effect on these bilingual children's speech. Future bilingual intervention research should explore alternating treatments designs, efficiency of monolingual vs bilingual treatment, different language and bilingual backgrounds, and between-group comparisons.

  13. Intelligibility in speech maskers with a binaural cochlear implant sound coding strategy inspired by the contralateral medial olivocochlear reflex.

    Science.gov (United States)

    Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Gorospe, José M; Ruiz, Santiago Santa Cruz; Benito, Fernando; Wilson, Blake S

    2017-05-01

    We have recently proposed a binaural cochlear implant (CI) sound processing strategy inspired by the contralateral medial olivocochlear reflex (the MOC strategy) and shown that it improves intelligibility in steady-state noise (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). The aim here was to evaluate possible speech-reception benefits of the MOC strategy for speech maskers, a more natural type of interferer. Speech reception thresholds (SRTs) were measured in six bilateral and two single-sided deaf CI users with the MOC strategy and with a standard (STD) strategy. SRTs were measured in unilateral and bilateral listening conditions, and for target and masker stimuli located at azimuthal angles of (0°, 0°), (-15°, +15°), and (-90°, +90°). Mean SRTs were 2-5 dB better with the MOC than with the STD strategy for spatially separated target and masker sources. For bilateral CI users, the MOC strategy (1) facilitated the intelligibility of speech in competition with spatially separated speech maskers in both unilateral and bilateral listening conditions; and (2) led to an overall improvement in spatial release from masking in the two listening conditions. Insofar as speech is a more natural type of interferer than steady-state noise, the present results suggest that the MOC strategy holds potential for promising outcomes for CI users. Copyright © 2017. Published by Elsevier B.V.

  14. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception.

    Science.gov (United States)

    Coffey, Emily B J; Chepesiuk, Alexander M P; Herholz, Sibylle C; Baillet, Sylvain; Zatorre, Robert J

    2017-01-01

    Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response, FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.

  15. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception

    Directory of Open Access Journals (Sweden)

    Emily B. J. Coffey

    2017-08-01

    Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response, FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.
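    As a rough illustration of the "strength of encoding at the fundamental frequency" that both versions of this record describe, one can measure the spectral amplitude of an averaged neural response near the stimulus F0. The NumPy sketch below is a toy under stated assumptions (synthetic response, arbitrary sampling rate and F0), not the authors' MEG pipeline:

      import numpy as np

      def f0_encoding_strength(response, fs, f0, bw=5.0):
          """Mean spectral amplitude of an averaged response within
          +/- bw Hz of the stimulus fundamental frequency f0 (Hz)."""
          windowed = response * np.hanning(response.size)
          spectrum = np.abs(np.fft.rfft(windowed))
          freqs = np.fft.rfftfreq(response.size, d=1.0 / fs)
          band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
          return float(spectrum[band].mean())

      # Toy example: 0.3 s response sampled at 1 kHz with a 100 Hz F0.
      fs, f0 = 1000.0, 100.0
      t = np.arange(0, 0.3, 1.0 / fs)
      response = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(t.size)
      print(f0_encoding_strength(response, fs, f0))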

  16. Auditory and visual sustained attention in children with speech sound disorder.

    Directory of Open Access Journals (Sweden)

    Cristina F B Murphy

    Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children (18 diagnosed with SSD (8.11 ± 1.231) and 37 typically developing children (8.76 ± 1.461)) were invited to participate in this study. Auditory and visual sustained-attention tasks were applied. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications.

  17. The effect of different speaker accents on sentence comprehension in children with speech sound disorder.

    Science.gov (United States)

    Harte, Jennifer; Frizelle, Pauline; Gibbon, Fiona

    2017-12-26

    There is substantial evidence that a speaker's accent, specifically an unfamiliar accent, can affect the listener's comprehension. In general, this effect holds true for both adults and children as well as those with typical and impaired language. Previous studies have investigated the effect of different accents on individuals with language disorders, but children with speech sound disorders (SSDs) have received little attention. The current study aims to learn more about the ability of children with SSD to process different speaker accents. Fifteen children with SSD aged between 4;01 and 5;11 years, and 16 typically developing children matched on language ability, age, socioeconomic status, gender and cognitive ability participated in the current study. A sentence comprehension task was carried out with each child, requiring them to follow instructions of increasing length spoken in three different accents - (i) a local Irish (Cork) accent, (ii) a regional North American accent and (iii) a non-native Indian English accent. Results showed no significant group difference and speaker accent did not significantly impact children's performance on the task. The results are discussed in relation to factors that influence accent comprehension, and their implications for children's underlying phonological representations.

  18. A longitudinal investigation of morpho-syntax in children with Speech Sound Disorders.

    Science.gov (United States)

    Mortimer, Jennifer; Rvachew, Susan

    2010-01-01

    The intent of this study was to examine the longitudinal morpho-syntactic progression of children with Speech Sound Disorders (SSD) grouped according to Mean Length of Utterance (MLU) scores. Thirty-seven children separated into four clusters were assessed in their pre-kindergarten and Grade 1 years. Cluster 1 were children with typical development; the other clusters were children with SSD. Cluster 2 had good pre-kindergarten MLU; Clusters 3 and 4 had low MLU scores in pre-kindergarten, and (respectively) good and poor MLU outcomes. Children with SSD in pre-kindergarten had lower Developmental Sentence Scores (DSS) and made fewer attempts at finite embedded clauses than children with typical development. All children with SSD, especially Cluster 4, had difficulty with finite verb morphology. Children with SSD and typical MLU may be weak in some areas of syntax. Children with SSD who have low MLU scores and poor finite verb morphology skills in pre-kindergarten may be at risk for poor expressive language outcomes. However, these results need to be replicated with larger groups. The reader should (1) have a general understanding of findings from studies on morpho-syntax and SSD conducted over the last half century, (2) be aware of some potential areas of morpho-syntactic weakness in young children with SSD who nonetheless have typical MLU, and (3) be aware of some potential longitudinal predictors of continued language difficulty in young children with SSD and poor MLU. © 2009 Elsevier Inc. All rights reserved.

  19. Narrative Ability of Children With Speech Sound Disorders and the Prediction of Later Literacy Skills

    Science.gov (United States)

    Wellman, Rachel L.; Lewis, Barbara A.; Freebairn, Lisa A.; Avrich, Allison A.; Hansen, Amy J.; Stein, Catherine M.

    2012-01-01

    Purpose The main purpose of this study was to examine how children with isolated speech sound disorders (SSDs; n = 20), children with combined SSDs and language impairment (LI; n = 20), and typically developing children (n = 20), ages 3;3 (years;months) to 6;6, differ in narrative ability. The second purpose was to determine if early narrative ability predicts school-age (8–12 years) literacy skills. Method This study employed a longitudinal cohort design. The children completed a narrative retelling task before their formal literacy instruction began. The narratives were analyzed and compared for group differences. Performance on these early narratives was then used to predict the children’s reading decoding, reading comprehension, and written language ability at school age. Results Significant group differences were found in children’s (a) ability to answer questions about the story, (b) use of story grammars, and (c) number of correct and irrelevant utterances. Regression analysis demonstrated that measures of story structure and accuracy were the best predictors of the decoding of real words, reading comprehension, and written language. Measures of syntax and lexical diversity were the best predictors of the decoding of nonsense words. Conclusion Combined SSDs and LI, and not isolated SSDs, impact a child’s narrative abilities. Narrative retelling is a useful task for predicting which children may be at risk for later literacy problems. PMID:21969531

  20. A Review of Physical and Perceptual Feature Extraction Techniques for Speech, Music and Environmental Sounds

    Directory of Open Access Journals (Sweden)

    Francesc Alías

    2016-05-01

    Endowing machines with sensing capabilities similar to those of humans is a prevalent quest in engineering and computer science. In the pursuit of making computers sense their surroundings, a huge effort has been conducted to allow machines and computers to acquire, process, analyze and understand their environment in a human-like way. Focusing on the sense of hearing, the ability of computers to sense their acoustic environment as humans do goes by the name of machine hearing. To achieve this ambitious aim, the representation of the audio signal is of paramount importance. In this paper, we present an up-to-date review of the most relevant audio feature extraction techniques developed to analyze the most usual audio signals: speech, music and environmental sounds. Besides revisiting classic approaches for completeness, we include the latest advances in the field based on new domains of analysis together with novel bio-inspired proposals. These approaches are described following a taxonomy that organizes them according to their physical or perceptual basis, being subsequently divided depending on the domain of computation (time, frequency, wavelet, image-based, cepstral, or other domains). The description of the approaches is accompanied by recent examples of their application to machine hearing related problems.
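    To make the review's physical/perceptual split concrete, the sketch below computes one physical descriptor (spectral centroid, a frequency-domain feature) and one perceptually motivated descriptor (MFCCs, a cepstral-domain feature) for an arbitrary recording. It assumes the third-party librosa library and a placeholder filename, and illustrates the feature families the review covers rather than reproducing any code from it:

      import numpy as np
      import librosa

      # Load any speech, music, or environmental recording (placeholder path).
      y, sr = librosa.load("example.wav", sr=None)

      # Physical feature, frequency domain: spectral centroid per frame.
      centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

      # Perceptual-basis feature, cepstral domain: 13 MFCCs per frame
      # (the mel scale approximates human frequency resolution).
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

      print("mean spectral centroid (Hz):", float(np.mean(centroid)))
      print("MFCC matrix shape (coeffs x frames):", mfcc.shape)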

  1. Reading skills of students with speech sound disorders at three stages of literacy development.

    Science.gov (United States)

    Skebo, Crysten M; Lewis, Barbara A; Freebairn, Lisa A; Tag, Jessica; Avrich Ciesla, Allison; Stein, Catherine M

    2013-10-01

    The relationship between phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). In a cross-sectional design, students ages 7;0 (years;months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children's literacy stages.

  2. The effect of age of acquisition, socioeducational status, and proficiency on the neural processing of second language speech sounds.

    Science.gov (United States)

    Archila-Suerte, Pilar; Zevin, Jason; Hernandez, Arturo E

    2015-02-01

    This study investigates the role of age of acquisition (AoA), socioeducational status (SES), and second language (L2) proficiency on the neural processing of L2 speech sounds. In a task of pre-attentive listening and passive viewing, Spanish-English bilinguals and a control group of English monolinguals listened to English syllables while watching a film of natural scenery. Eight regions of interest were selected from brain areas involved in speech perception and executive processes. The regions of interest were examined in two separate two-way ANOVAs (AoA × SES; AoA × L2 proficiency). The results showed that AoA was the main variable affecting the neural response in L2 speech processing. Direct comparisons between AoA groups of equivalent SES and proficiency level enhanced the intensity and magnitude of the results. These results suggest that AoA, more than SES and proficiency level, determines which brain regions are recruited for the processing of second language speech sounds. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems.

    Science.gov (United States)

    Wible, Brad; Nicol, Trent; Kraus, Nina

    2004-11-01

    This study investigated how the human auditory brainstem represents constituent elements of speech sounds differently in children with language-based learning problems (LP, n = 9) compared to normal children (NL, n = 11), especially under stress of rapid stimulation. Children were chosen for this study based on performance on measures of reading and spelling and measures of syllable discrimination. In response to the onset of the speech sound /da/, wave V-V(n) of the auditory brainstem response (ABR) had a significantly shallower slope in LP children, suggesting longer duration and/or smaller amplitude. The amplitude of the frequency following response (FFR) was diminished in LP subjects over the 229-686 Hz range, which corresponds to the first formant of the /da/ stimulus, while activity at 114 Hz, representing the fundamental frequency of /da/, was no different between groups. Normal indicators of auditory peripheral integrity suggest a central, neural origin of these differences. These data suggest that poor representation of crucial components of speech sounds could contribute to difficulties with higher-level language processes.

  4. Contributions of Morphological Awareness Skills to Word-Level Reading and Spelling in First-Grade Children with and without Speech Sound Disorder

    Science.gov (United States)

    Apel, Kenn; Lawrence, Jessika

    2011-01-01

    Purpose: In this study, the authors compared the morphological awareness abilities of children with speech sound disorder (SSD) and children with typical speech skills and examined how morphological awareness ability predicted word-level reading and spelling performance above other known contributors to literacy development. Method: Eighty-eight…

  5. Words in Puddles of Sound: Modelling Psycholinguistic Effects in Speech Segmentation

    Science.gov (United States)

    Monaghan, Padraic; Christiansen, Morten H.

    2010-01-01

    There are numerous models of how speech segmentation may proceed in infants acquiring their first language. We present a framework for considering the relative merits and limitations of these various approaches. We then present a model of speech segmentation that aims to reveal important sources of information for speech segmentation, and to…

  6. The role of the motor system in discriminating normal and degraded speech sounds.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bufalari, Ilaria; Salmas, Paola; Fadiga, Luciano

    2012-07-01

    Listening to speech recruits a network of fronto-temporo-parietal cortical areas. Classical models consider anterior (motor) sites to be involved in speech production, whereas posterior sites are involved in comprehension. This functional segregation is increasingly challenged by action-perception theories suggesting that brain circuits for speech articulation and speech perception are functionally interdependent. Recent studies report that speech listening elicits motor activities analogous to production. However, the motor system could be crucially recruited only under certain conditions that make speech discrimination hard. Here, by using event-related double-pulse transcranial magnetic stimulation (TMS) on lips and tongue motor areas, we show data suggesting that the motor system may play a role in noisy, but crucially not in noise-free, environments for the discrimination of speech signals. Copyright © 2011 Elsevier Srl. All rights reserved.

  7. Not all sounds sound the same: Parkinson's disease affects differently emotion processing in music and in speech prosody.

    Science.gov (United States)

    Lima, César F; Garrett, Carolina; Castro, São Luís

    2013-01-01

    Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected differently musical and prosodic emotions. This dissociation indicates that the mechanisms underlying the two domains are partly independent.

  8. Cluster-Randomized Controlled Trial Evaluating the Effectiveness of Computer-Assisted Intervention Delivered by Educators for Children with Speech Sound Disorders

    Science.gov (United States)

    McLeod, Sharynne; Baker, Elise; McCormack, Jane; Wren, Yvonne; Roulstone, Sue; Crowe, Kathryn; Masso, Sarah; White, Paul; Howland, Charlotte

    2017-01-01

    Purpose: The aim was to evaluate the effectiveness of computer-assisted input-based intervention for children with speech sound disorders (SSD). Method: The Sound Start Study was a cluster-randomized controlled trial. Seventy-nine early childhood centers were invited to participate, 45 were recruited, and 1,205 parents and educators of 4- and…

  9. Modulation of residual currents in Rhode Island Sound by stratification and the spring-neap cycle

    Science.gov (United States)

    Wertman, C.; Ullman, D. S.; Kincaid, C.; Codiga, D. L.; Pfeiffer-Herbert, A.

    2016-02-01

    Circulation near estuarine-shelf interfaces controls important physical, chemical and biological exchange processes. A component of residual flow, or tidal rectification, can occur over sloping bathymetry in these coastal areas through the transfer of momentum from tidal frequencies to subtidal frequencies. Factors controlling rectification include available tidal kinetic energy and summer stratification. Many inner coastal areas have regions where stratification balances tidal mixing, creating a mixing front and strong residual flow. In addition to solar insolation and freshwater input, tidal mixing can modulate the position of this front. Rhode Island Sound (RIS), located south of Narragansett Bay and open to continental shelf waters, is a convenient area to study the different forcings of subtidal residual flow. We analyze data from moored Acoustic Doppler Current Profilers, chains of moored thermistors, and conductivity-temperature-depth (CTD) instruments to study hydrography in this area from late 2009 to late 2011. Seasonal differences in the residual flow are observed, with an intensification of a surface cyclonic flow around the periphery of RIS in the spring and summer, concurrent with an increase in stratification. Tidal kinetic energy is positively correlated with residual velocities at stations located in RIS. Along the periphery of RIS, residual velocities increase from neap tides to spring tides, with the most significant velocity increase occurring in the top 25% of the water column. High-amplitude (spring) tides generate more vertical mixing at the near-shore stations than during neap cycles, corresponding to a significantly stronger RIS periphery current. We hypothesize that an increase in tidal kinetic energy over the spring-neap cycle changes both local hydrography and residual velocities through modification of tidal rectification and tidal mixing. Such changes in the periphery current will influence how, for example, nutrients and larvae from central RIS enter

  10. Reduced neural integration of letters and speech sounds in dyslexic children scales with individual differences in reading fluency.

    Directory of Open Access Journals (Sweden)

    Gojko Žarić

    Full Text Available The acquisition of letter-speech sound associations is one of the basic requirements for fluent reading acquisition and its failure may contribute to reading difficulties in developmental dyslexia. Here we investigated event-related potential (ERP measures of letter-speech sound integration in 9-year-old typical and dyslexic readers and specifically test their relation to individual differences in reading fluency. We employed an audiovisual oddball paradigm in typical readers (n = 20, dysfluent (n = 18 and severely dysfluent (n = 18 dyslexic children. In one auditory and two audiovisual conditions the Dutch spoken vowels/a/and/o/were presented as standard and deviant stimuli. In audiovisual blocks, the letter 'a' was presented either simultaneously (AV0, or 200 ms before (AV200 vowel sound onset. Across the three children groups, vowel deviancy in auditory blocks elicited comparable mismatch negativity (MMN and late negativity (LN responses. In typical readers, both audiovisual conditions (AV0 and AV200 led to enhanced MMN and LN amplitudes. In both dyslexic groups, the audiovisual LN effects were mildly reduced. Most interestingly, individual differences in reading fluency were correlated with MMN latency in the AV0 condition. A further analysis revealed that this effect was driven by a short-lived MMN effect encompassing only the N1 window in severely dysfluent dyslexics versus a longer MMN effect encompassing both the N1 and P2 windows in the other two groups. Our results confirm and extend previous findings in dyslexic children by demonstrating a deficient pattern of letter-speech sound integration depending on the level of reading dysfluency. These findings underscore the importance of considering individual differences across the entire spectrum of reading skills in addition to group differences between typical and dyslexic readers.

  11. Encoding of speech sounds at auditory brainstem level in good and poor hearing aid performers.

    Science.gov (United States)

    Shetty, Hemanth Narayan; Puttabasappa, Manjula

    Hearing aids are prescribed to alleviate loss of audibility. It has been reported that about 31% of hearing aid users reject their own hearing aid because of annoyance towards background noise. The reason for dissatisfaction can lie anywhere from the hearing aid microphone to the integrity of neurons along the auditory pathway. The aim was to measure spectra from the output of the hearing aid at the ear canal and the frequency-following response recorded at the auditory brainstem in individuals with hearing impairment. A total of sixty participants with moderate sensorineural hearing impairment, aged 15 to 65 years, were involved. Each participant was classified as either a good or a poor hearing aid performer based on the acceptable noise level measure. The stimuli /da/ and /si/ were presented through a loudspeaker at 65 dB SPL. At the ear canal, the spectra were measured in the unaided and aided conditions. At the auditory brainstem, frequency-following responses were recorded to the same stimuli. The spectrum measured in each condition at the ear canal was the same in good and poor hearing aid performers. At the brainstem level, F0 encoding was better, and F0 and F1 energies were significantly higher, in good hearing aid performers than in poor hearing aid performers. Though the hearing aid spectra were almost the same between good and poor hearing aid performers, subtle physiological variations exist at the auditory brainstem. The results of the present study suggest that neural encoding of speech sounds at the brainstem level might be mediated distinctly in good hearing aid performers compared with poor hearing aid performers. Thus, it can be inferred that subtle physiological differences at the auditory brainstem distinguish people who are willing to accept background noise from those who are not. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier

  12. Polysyllable productions in preschool children with speech sound disorders: Error categories and the Framework of Polysyllable Maturity.

    Science.gov (United States)

    Masso, Sarah; McLeod, Sharynne; Baker, Elise; McCormack, Jane

    2016-06-01

    Children with speech sound disorders (SSD) find polysyllables difficult; however, routine sampling and measurement of speech accuracy are insufficient to describe polysyllable accuracy and maturity. This study had two aims: (1) compare two speech production tasks and (2) describe polysyllable errors within the Framework of Polysyllable Maturity. Ninety-three preschool children with SSD from the Sound Start Study (4;0-5;5 years) completed the Polysyllable Preschool Test (POP) and the Diagnostic Evaluation of Articulation and Phonology (DEAP-Phonology). Vowel accuracy was significantly different between the POP and the DEAP-Phonology. Polysyllables were analysed using the seven Word-level Analysis of Polysyllables (WAP) error categories: (1) substitution of consonants or vowels (97.8% of children demonstrated common use), (2) deletion of syllables, consonants or vowels (65.6%), (3) distortion of consonants or vowels (0.0%), (4) addition of consonants or vowels (0.0%), (5) alteration of phonotactics (77.4%), (6) alteration of timing (63.4%) and (7) assimilation or alteration of sequence (0.0%). The Framework of Polysyllable Maturity described five levels of maturity based on children's errors. Polysyllable productions of preschool children with SSD can be analysed and categorised using the WAP and interpreted using the Framework of Polysyllable Maturity.

  13. On the Perception of Speech Sounds as Biologically Significant Signals

    Science.gov (United States)

    Pisoni, David B.

    2012-01-01

    This paper reviews some of the major evidence and arguments currently available to support the view that human speech perception may require the use of specialized neural mechanisms for perceptual analysis. Experiments using synthetically produced speech signals with adults are briefly summarized and extensions of these results to infants and other organisms are reviewed with an emphasis towards detailing those aspects of speech perception that may require some need for specialized species-specific processors. Finally, some comments on the role of early experience in perceptual development are provided as an attempt to identify promising areas of new research in speech perception. PMID:399200

  14. Differential effects of visual-acoustic biofeedback intervention for residual speech errors

    Directory of Open Access Journals (Sweden)

    Tara McAllister Byun

    2016-11-01

    Recent evidence suggests that the incorporation of visual biofeedback technologies may enhance response to treatment in individuals with residual speech errors. However, there is a need for controlled research systematically comparing biofeedback versus non-biofeedback intervention approaches. This study implemented a single-subject experimental design with a crossover component to investigate the relative efficacy of visual-acoustic biofeedback and traditional articulatory treatment for residual rhotic errors. Eleven child/adolescent participants received ten sessions of visual-acoustic biofeedback and ten sessions of traditional treatment, with the order of biofeedback and traditional phases counterbalanced across participants. Probe measures eliciting untreated rhotic words were administered in at least 3 sessions prior to the start of treatment (baseline), between the two treatment phases (midpoint), and after treatment ended (maintenance), as well as before and after each treatment session. Perceptual accuracy of rhotic production was assessed by outside listeners in a blinded, randomized fashion. Results were analyzed using a combination of visual inspection of treatment trajectories, individual effect sizes, and logistic mixed-effects regression. Effect sizes and visual inspection revealed that participants could be divided into categories of strong responders (n = 4), mixed/moderate responders (n = 3), and non-responders (n = 4). Individual results did not reveal a reliable pattern of stronger performance in biofeedback versus traditional blocks, or vice versa. Moreover, biofeedback versus traditional treatment was not a significant predictor of accuracy in the logistic mixed-effects model examining all within-treatment word probes. However, the interaction between treatment condition and treatment order was significant: biofeedback was more effective than traditional treatment in the first phase of treatment, and traditional treatment was more

  15. Reconceptualizing Practice with Multilingual Children with Speech Sound Disorders: People, Practicalities and Policy

    Science.gov (United States)

    Verdon, Sarah; McLeod, Sharynne; Wong, Sandie

    2015-01-01

    Background: The speech and language therapy profession is required to provide services to increasingly multilingual caseloads. Much international research has focused on the challenges of speech and language therapists' (SLTs) practice with multilingual children. Aims: To draw on the experience and knowledge of experts in the field to: (1)…

  16. The Comorbidity between Attention-Deficit/Hyperactivity Disorder (ADHD) in Children and Arabic Speech Sound Disorder

    Science.gov (United States)

    Hariri, Ruaa Osama

    2016-01-01

    Children with Attention-Deficit/Hyperactivity Disorder (ADHD) often have co-existing learning disabilities and developmental weaknesses or delays in some areas, including speech (Rief, 2005). Seeing that phonological disorders include articulation errors and other forms of speech disorders, studies pertaining to children with ADHD symptoms who…

  17. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging periodicity-tagged segregation of competing speech in rooms

    Directory of Open Access Journals (Sweden)

    Mark Sayles

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once) in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into auditory objects. Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels) on the basis of temporal discharge patterns is impaired by reverberation, specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights into the perceptual organization of complex acoustic scenes under realistically challenging

  18. A Change of a Consonant Status: the Bedouinisation of the [j] Sound in the Speech of Kuwaitis: A Case Study

    Directory of Open Access Journals (Sweden)

    Abdulmohsen A. Dashti

    2015-09-01

    In light of sociolinguistic phonological change, the following study investigates the [j] sound in the speech of Kuwaitis as the predominant form characterizing the sedentary population, which is made up of both indigenous and non-indigenous groups, while [ʤ] is the realisation of the Bedouins, who are also a part of the indigenous population. Although [ʤ] is the classical variant, it has, for some time, been regarded by Kuwaitis as the stigmatized form and [j] as the one that carries prestige. This study examines the change of status of [j] and [ʤ] in the speech of Kuwaitis. The main hypothesis is that [j] no longer carries prestige. To test this hypothesis, 40 Kuwaitis of different genders, ages, educational backgrounds, and social networks were spontaneously chosen to be interviewed. Their speech was phonetically transcribed and then quantitatively and qualitatively analyzed. Results indicate that the [j] variant is undergoing a change of status and that social parameters, together with the significant political and social changes that Kuwait has undergone recently, have triggered this linguistic shift.

  19. The DYX2 locus and neurochemical signaling genes contribute to speech sound disorder and related neurocognitive domains.

    Science.gov (United States)

    Eicher, J D; Stein, C M; Deng, F; Ciesla, A A; Powers, N R; Boada, R; Smith, S D; Pennington, B F; Iyengar, S K; Lewis, B A; Gruen, J R

    2015-04-01

    A major milestone of child development is the acquisition and use of speech and language. Communication disorders, including speech sound disorder (SSD), can impair a child's academic, social and behavioral development. Speech sound disorder is a complex, polygenic trait with a substantial genetic component. However, specific genes that contribute to SSD remain largely unknown. To identify associated genes, we assessed the association of the DYX2 dyslexia risk locus and markers in neurochemical signaling genes (e.g., nicotinic and dopaminergic) with SSD and related endophenotypes. We first performed separate primary associations in two independent samples - Cleveland SSD (210 affected and 257 unaffected individuals in 127 families) and Denver SSD (113 affected individuals and 106 unaffected individuals in 85 families) - and then combined results by meta-analysis. DYX2 markers, specifically those in the 3' untranslated region of DCDC2 (P = 1.43 × 10⁻⁴), showed the strongest associations with phonological awareness. We also observed suggestive associations of the dopaminergic-related genes ANKK1 (P = 1.02 × 10⁻²) and DRD2 (P = 9.22 × 10⁻³) and the nicotinic-related genes CHRNA3 (P = 2.51 × 10⁻³) and BDNF (P = 8.14 × 10⁻³) with case-control status and articulation. Our results further implicate variation in putative regulatory regions in the DYX2 locus, particularly in DCDC2, influencing language and cognitive traits. The results also support previous studies implicating variation in dopaminergic and nicotinic neural signaling influencing human communication and cognitive development. Our findings expand the literature showing genetic factors (e.g., DYX2) contributing to multiple related, yet distinct neurocognitive domains (e.g., dyslexia, language impairment, and SSD). How these factors interactively yield different neurocognitive and language-related outcomes remains to be elucidated. © 2015 The Authors. Genes, Brain and Behavior published by

  20. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm.

    Science.gov (United States)

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and to overcome some of the disadvantages associated with the frequently used visual-auditory oddball paradigm, we developed a novel electrophysiological letter-speech sound interference paradigm. This letter-speech sound interference paradigm was applied in a group of 9-year-old children with dyslexia (n = 36) and a group of typically developing (TD) children of similar age (n = 37). Participants had to indicate whether two letters look visually the same. In the incongruent condition (e.g., the letter pair A-a) there was a conflict between the visual information and the automatically activated phonological information; although the visual appearance of the two letters is different, they are both associated with the same speech sound. This conflict resulted in slower response times (RTs) in the incongruent than in the congruent (e.g., the letter pair A-e) condition. Furthermore, in the TD control group, the conflict resulted in fast and strong event-related potential (ERP) effects reflected in less negative N1 amplitudes and more positive conflict slow potentials (cSP) in the incongruent than in the congruent condition. However, the dyslexic group did not show any conflict-related ERP effects, implying that letter-speech sound associations are less automatized in this group. Furthermore, we examined general visual conflict processing in a control visual interference task, using false fonts. The conflict in this experiment was based purely on the visual similarity of the

  1. Quality of Mobile Phone and Tablet Mobile Apps for Speech Sound Disorders: Protocol for an Evidence-Based Appraisal.

    Science.gov (United States)

    Furlong, Lisa M; Morris, Meg E; Erickson, Shane; Serry, Tanya A

    2016-11-29

    Although mobile apps are readily available for speech sound disorders (SSD), their validity has not been systematically evaluated. This evidence-based appraisal will critically review and synthesize current evidence on available therapy apps for use by children with SSD. The main aims are to (1) identify the types of apps currently available for Android and iOS mobile phones and tablets, and (2) to critique their design features and content using a structured quality appraisal tool. This protocol paper presents and justifies the methods used for a systematic review of mobile apps that provide intervention for use by children with SSD. The primary outcomes of interest are (1) engagement, (2) functionality, (3) aesthetics, (4) information quality, (5) subjective quality, and (6) perceived impact. Quality will be assessed by 2 certified practicing speech-language pathologists using a structured quality appraisal tool. Two app stores will be searched from the 2 largest operating platforms, Android and iOS. Systematic methods of knowledge synthesis shall include searching the app stores using a defined procedure, data extraction, and quality analysis. This search strategy shall enable us to determine how many SSD apps are available for Android and for iOS compatible mobile phones and tablets. It shall also identify the regions of the world responsible for the apps' development, the content and the quality of offerings. Recommendations will be made for speech-language pathologists seeking to use mobile apps in their clinical practice. This protocol provides a structured process for locating apps and appraising the quality, as the basis for evaluating their use in speech pathology for children in English-speaking nations. ©Lisa M Furlong, Meg E Morris, Shane Erickson, Tanya A Serry. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 29.11.2016.
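    The six outcomes listed in this protocol map naturally onto a per-app scoring sheet completed by each of the two raters. The Python sketch below is a hypothetical illustration of such a sheet; the 1-5 scale, the helper names, and the averaging rule are assumptions, as the protocol does not publish scoring code:

      # Hypothetical scoring sheet for a structured app-quality appraisal.
      # The dimensions follow the six outcomes named in the record.
      from statistics import mean

      DIMENSIONS = ["engagement", "functionality", "aesthetics",
                    "information quality", "subjective quality", "perceived impact"]

      def appraise(rater_a: dict, rater_b: dict) -> dict:
          """Average two raters' 1-5 scores per dimension, plus an overall mean."""
          per_dim = {d: mean([rater_a[d], rater_b[d]]) for d in DIMENSIONS}
          per_dim["overall"] = mean(per_dim.values())
          return per_dim

      scores = appraise(
          {"engagement": 4, "functionality": 5, "aesthetics": 3,
           "information quality": 4, "subjective quality": 3, "perceived impact": 4},
          {"engagement": 3, "functionality": 4, "aesthetics": 3,
           "information quality": 5, "subjective quality": 3, "perceived impact": 4},
      )
      print(scores)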

  2. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults - science teachers, parents wanting to help with homework, home-schoolers - seeking the scientific background necessary to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  3. Sound

    CERN Document Server

    2013-01-01

    Sound has the power to soothe, excite, warn, protect, and inform. Indeed, the transmission and reception of audio signals pervade our daily lives. Readers will examine the mechanics and properties of sound and get an overview of the "interdisciplinary science called acoustics." Also covered are the functions and diseases of the human ear.

  4. Sound, noise and speech at the 9000-seat Holy Trinity Church in Fatima, Portugal

    OpenAIRE

    António Pedro Oliveira de Carvalho; Pedro Miguel Aguiar da Silva

    2010-01-01

    This paper presents the interior acoustical characterization of the 9,000-seat church of the Holy Trinity in the Sanctuary of Fátima, Portugal, inaugurated in 2007. In situ measurements were made of interior sound pressure levels (with and without the HVAC equipment working), NC curves, RASTI (with and without the installed sound system) and reverberation time. The results are presented and discussed in relation to the design values. A comparison is made with other churches in the world ...

  5. Sounds for Study: Speech and Language Therapy Students' Use and Perception of Exercise Podcasts for Phonetics

    Science.gov (United States)

    Knight, Rachael-Anne

    2010-01-01

    Currently little is known about how students use podcasts of exercise material (as opposed to lecture material), and whether they perceive such podcasts to be beneficial. This study aimed to assess how exercise podcasts for phonetics are used and perceived by second year speech and language therapy students. Eleven podcasts of graded phonetics…

  6. Reduced neural integration of letters and speech sounds links phonological and reading deficits in adult dyslexia

    NARCIS (Netherlands)

    Blau, Vera C; van Atteveldt, Nienke; Ekkebus, Michel; Goebel, Rainer; Blomert, Leo

    2009-01-01

    Developmental dyslexia is a specific reading and spelling deficit affecting 4% to 10% of the population. Advances in understanding its origin support a core deficit in phonological processing characterized by difficulties in segmenting spoken words into their minimally discernable speech segments

  8. Validation of the U-STARR with the AB-York crescent of sound, a new instrument to evaluate speech intelligibility in noise and spatial hearing skills

    NARCIS (Netherlands)

    Smulders, Yvette E.; Rinia, Albert B.; Pourier, Vanessa E C; Van Zon, Alice; Van Zanten, Gijsbert A.; Stegeman, Inge; Scherf, Fanny W A C; Smit, Adriana L.; Topsakal, Vedat; Tange, Rinze A.; Grolman, Wilko

    2015-01-01

    The Advanced Bionics ® (AB)-York crescent of sound is a new test setup that comprises speech intelligibility in noise and localization tests that represent everyday listening situations. One of its tests is the Sentence Test with Adaptive Randomized Roving levels (STARR) with sentences and noise

  9. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    Science.gov (United States)

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  10. Deviant processing of letters and speech sounds as proximate cause of reading failure : a functional magnetic resonance imaging study of dyslexic children

    NARCIS (Netherlands)

    Blau, Vera C; Reithler, Joel; van Atteveldt, Nienke; Seitz, Jochen; Gerretsen, Patty; Goebel, Rainer; Blomert, Leo

    Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations.

  11. "When He's around His Brothers ... He's Not so Quiet": The Private and Public Worlds of School-Aged Children with Speech Sound Disorder

    Science.gov (United States)

    McLeod, Sharynne; Daniel, Graham; Barr, Jacqueline

    2013-01-01

    Children interact with people in context: including home, school, and in the community. Understanding children's relationships within context is important for supporting children's development. Using child-friendly methodologies, the purpose of this research was to understand the lives of children with speech sound disorder (SSD) in context.…

  12. Diet and Gender Influences on Processing and Discrimination of Speech Sounds in 3- and 6-Month-Old Infants: A Developmental ERP Study

    Science.gov (United States)

    Pivik, R. T.; Andres, Aline; Badger, Thomas M.

    2011-01-01

    Early post-natal nutrition influences later development, but there are no studies comparing brain function in healthy infants as a function of dietary intake even though the major infant diets differ significantly in nutrient composition. We studied brain responses (event-related potentials; ERPs) to speech sounds for infants who were fed either…

  13. Relationship between speech-sound disorders and early literacy skills in preschool-age children: impact of comorbid language impairment.

    Science.gov (United States)

    Sices, Laura; Taylor, H Gerry; Freebairn, Lisa; Hansen, Amy; Lewis, Barbara

    2007-12-01

    Disorders of articulation or speech-sound disorders (SSD) are common in early childhood. Children with these disorders may be at risk for reading difficulties because they may have poor auditory, phonologic, and verbal memory skills. We sought to characterize the reading and writing readiness of preschool children with SSD and identify factors associated with preliteracy skills. Subjects were 125 children aged 3 to 6 years with moderate to severe SSD; 53% had comorbid language impairment (LI). Reading readiness was measured with the Test of Early Reading Ability-2 (TERA) and writing skills with the Test of Early Written Language-2 (TEWL), which assessed print concept knowledge. Linear regression was used to examine the association between SSD severity and TERA and TEWL scores and analysis of variance to examine the effect of comorbid LI. Performance on a battery of speech and language tests was reduced by way of factor analysis to composites for articulation, narrative, grammar, and word knowledge skills. Early reading and writing scores were significantly lower for children with comorbid LI but were not related to SSD severity once language status was taken into account. Composites for grammar and word knowledge were related to performance on the TERA and TEWL, even after adjusting for Performance IQ. Below average language skills in preschool place a child at risk for deficits in preliteracy skills, which may have implications for the later development of reading disability. Preschool children with SSD and LI may benefit from instruction in preliteracy skills in addition to language therapy.

  14. A multigenerational family study of oral and hand motor sequencing ability provides evidence for a familial speech sound disorder subtype

    Science.gov (United States)

    Peter, Beate; Raskind, Wendy H.

    2011-01-01

    Purpose To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results Measures of repetitive and alternating motor speed were correlated within and between the two motor systems. Repetitive and alternating motor speeds increased in children and decreased in adults as a function of age. In two families with children who had severe speech deficits consistent with disrupted praxis, slowed alternating, but not repetitive, oral movements characterized most of the affected children and adults with a history of SSD, and slowed alternating hand movements were seen in some of the biologically related participants as well. Conclusion Results are consistent with a familial motor-based SSD subtype with incomplete penetrance, motivating new clinical questions about motor-based intervention not only in the oral but also the limb system. PMID:21909176

  15. The influence of linguistic experience on the cognitive processing of pitch in speech and nonspeech sounds.

    Science.gov (United States)

    Bent, Tessa; Bradlow, Ann R; Wright, Beverly A

    2006-02-01

    In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions. (© 2006 APA, all rights reserved.)

  16. Sound

    CERN Document Server

    Rivera, Andrea

    2017-01-01

    Sound is all around us. Learn how it is used in art, technology, and engineering. Five easy-to-read chapters explain the science behind sound, as well as its real-world applications. Vibrant, full-color photos, bolded glossary words, and a key stats section let readers zoom in even deeper. Aligned to Common Core Standards and correlated to state standards. Abdo Zoom is a division of ABDO.

  17. AGGLOMERATIVE CLUSTERING OF SOUND RECORD SPEECH SEGMENTS BASED ON BAYESIAN INFORMATION CRITERION

    Directory of Open Access Journals (Sweden)

    O. Yu. Kydashev

    2013-01-01

    This paper presents a detailed description of the implementation of an agglomerative clustering system for speech segments based on the Bayesian information criterion. Results of numerical experiments with different acoustic features, using both full and diagonal covariance matrices, are given. A diarization error rate (DER) of 6.4% for audio records of radio «Svoboda» was achieved by means of the designed system.
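
    As a minimal sketch of the merge test underlying such systems, the Python code below computes the standard delta-BIC between two speech segments modelled as full-covariance Gaussians and merges greedily while the criterion favours merging. It illustrates the general technique only; the penalty weight lam, the function names, and the greedy all-pairs search are our assumptions, not details of the system described above.

    ```python
    import numpy as np

    def delta_bic(x, y, lam=1.0):
        """Delta-BIC between two feature segments (frames x dims).

        Negative values mean a single full-covariance Gaussian explains
        the pooled data better than two separate Gaussians, i.e. the
        segments plausibly come from the same speaker. Each segment
        should contain more frames than feature dimensions so that the
        covariance estimates are non-singular.
        """
        n1, d = x.shape
        n2, _ = y.shape
        n = n1 + n2

        def logdet(m):
            # Log-determinant of the sample covariance of segment m.
            return np.linalg.slogdet(np.cov(m, rowvar=False))[1]

        pooled = np.vstack([x, y])
        # Penalty: extra free parameters of the two-model hypothesis.
        penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
        return (0.5 * (n * logdet(pooled) - n1 * logdet(x) - n2 * logdet(y))
                - lam * penalty)

    def agglomerate(segments, lam=1.0):
        """Greedily merge the segment pair with the lowest delta-BIC
        until no pair of clusters looks like the same speaker."""
        clusters = [np.asarray(s, dtype=float) for s in segments]
        while len(clusters) > 1:
            scored = [(delta_bic(clusters[i], clusters[j], lam), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters))]
            best, i, j = min(scored)
            if best >= 0:  # every remaining pair is "different speakers"
                break
            merged = np.vstack([clusters[i], clusters[j]])
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
            clusters.append(merged)
        return clusters
    ```

    In full diarization systems the pairwise search is usually constrained (for example, to temporally adjacent segments) and the penalty weight is tuned on held-out data.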

  18. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    A hear-through headset is formed by mounting miniature microphones on small insert earphones. This type of ear-wear technology enables the user to hear the sound sources and acoustics of the surroundings as close to real life as possible, with the additional feature that computer-generated audio signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life the electronically amplified sounds can be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset...

  19. Risk of weathered residual Exxon Valdez oil to pink salmon embryos in Prince William Sound.

    Science.gov (United States)

    Brannon, Ernest L; Collins, Keya M; Cronin, Mathew A; Moulton, Lawrence L; Parker, Keith R; Wilson, William

    2007-04-01

    It has been hypothesized that pink salmon eggs incubating in intertidal streams transecting Prince William Sound (PWS) beaches oiled by the Exxon Valdez oil spill were exposed to lethal doses of dissolved hydrocarbons. Since polycyclic aromatic hydrocarbon (PAH) levels in the incubation gravel were too low to cause mortality, the allegation is that dissolved high-molecular-weight hydrocarbons (HPAH) leaching from oil deposits on the beach adjacent to the streams were the source of toxicity. To evaluate this hypothesis, we placed pink salmon eggs in PWS beach sediments containing residual oil from the Exxon Valdez oil spill and in control areas without oil. We quantified the hydrocarbon concentrations in the eggs after three weeks of incubation. Tissue PAH concentrations of eggs in oiled sediments were generally < 100 ppb and similar to background levels on nonoiled beaches. Even eggs in direct contact with oil in the sediment resulted in tissue PAH loads well below the lethal threshold concentrations established in laboratory bioassays, and very low concentrations of HPAH compounds were present. These results indicate that petroleum hydrocarbons dissolved from oil deposits on intertidal beaches are not at concentrations that pose toxic risk to incubating pink salmon eggs. The evidence does not support the hypothesis that interstitial pore water in previously oiled beaches is highly toxic.

  20. An exploratory study of the influence of load and practice on segmental and articulatory variability in children with speech sound disorders.

    Science.gov (United States)

    Vuolo, Janet; Goffman, Lisa

    2017-01-01

    This exploratory treatment study used phonetic transcription and speech kinematics to examine changes in segmental and articulatory variability. Nine children, ages 4 to 8 years old, served as participants, including two with childhood apraxia of speech (CAS), five with speech sound disorder (SSD) and two who were typically developing. Children practised producing agent + action phrases in an imitation task (low linguistic load) and a retrieval task (high linguistic load) over five sessions. In the imitation task in session one, both participants with CAS showed high degrees of segmental and articulatory variability. After five sessions, imitation practice resulted in increased articulatory variability for five participants. Retrieval practice resulted in decreased articulatory variability in three participants with SSD. These results suggest that short-term speech production practice in rote imitation disrupts articulatory control in children with and without CAS. In contrast, tasks that require linguistic processing may scaffold learning for children with SSD but not CAS.

  1. Sound production treatment for acquired apraxia of speech: Effects of blocked and random practice on multisyllabic word production.

    Science.gov (United States)

    Wambaugh, Julie; Nessler, Christina; Wright, Sandra; Mauszycki, Shannon; DeLong, Catharine

    2016-10-01

    This study was designed to examine the effects of practice schedule, blocked vs random, on outcomes of a behavioural treatment for acquired apraxia of speech (AOS), Sound Production Treatment (SPT). SPT was administered to four speakers with chronic AOS and aphasia in the context of multiple baseline designs across behaviours and participants. Treatment was applied to multiple sound errors within three-to-five syllable words. All participants received both practice schedules: SPT-Random (SPT-R) and SPT-Blocked (SPT-B). Improvements in accuracy of word production for trained items were found for both treatment conditions for all participants. One participant demonstrated better maintenance effects associated with SPT-R. Response generalisation to untreated words varied across participants, but was generally modest and unstable. Stimulus generalisation to production of words in sentence completion was positive for three of the participants. Stimulus generalisation to production of phrases was positive for two of the participants. Findings provide additional efficacy data regarding SPT's effects on articulation of treated items and extend knowledge of the treatment's effects when applied to multiple targets within multisyllabic words.

  2. Deficits in auditory brainstem pathway encoding of speech sounds in children with learning problems.

    Science.gov (United States)

    King, Cynthia; Warrier, Catherine M; Hayes, Erin; Kraus, Nina

    2002-02-15

    Auditory brainstem responses were recorded in normal children (NL) and children clinically diagnosed with a learning problem (LP). These responses were recorded to both a click stimulus and the formant transition portion of a speech syllable /da/. While no latency differences between the NL and LP populations were seen in responses to the click stimuli, the syllable /da/ did elicit latency differences between these two groups. Deficits in cortical processing of signals in noise were seen for those LP subjects with delayed brainstem responses to the /da/, but not for LPs with normal brainstem measures. Preliminary findings indicate that training may be beneficial to LP subjects with brainstem processing delays.

  3. Child implant users' imitation of happy- and sad-sounding speech.

    Science.gov (United States)

    Wang, David J; Trehub, Sandra E; Volkova, Anna; van Lieshout, Pascal

    2013-01-01

    Cochlear implants have enabled many congenitally or prelingually deaf children to acquire their native language and communicate successfully on the basis of electrical rather than acoustic input. Nevertheless, degraded spectral input provided by the device reduces the ability to perceive emotion in speech. We compared the vocal imitations of 5- to 7-year-old deaf children who were highly successful bilateral implant users with those of a control sample of children who had normal hearing. First, the children imitated several happy and sad sentences produced by a child model. When adults in Experiment 1 rated the similarity of imitated to model utterances, ratings were significantly higher for the hearing children. Both hearing and deaf children produced poorer imitations of happy than sad utterances because of difficulty matching the greater pitch modulation of the happy versions. When adults in Experiment 2 rated electronically filtered versions of the utterances, which obscured the verbal content, ratings of happy and sad utterances were significantly differentiated for deaf as well as hearing children. The ratings of deaf children, however, were significantly less differentiated. Although deaf children's utterances exhibited culturally typical pitch modulation, their pitch modulation was reduced relative to that of hearing children. One practical implication is that therapeutic interventions for deaf children could expand their focus on suprasegmental aspects of speech perception and production, especially intonation patterns.

  5. Sound of Silence: Comparison of ICT and speech deprivation among students

    Directory of Open Access Journals (Sweden)

    Tihana Brkljačić

    2017-12-01

    The aim of the study was twofold: to describe self-reported habits of ICT use in every-day life and to analyze feelings and behavior triggered by ICT and speech deprivation. The study was conducted on three randomly selected groups of students with different tasks: the Without Speaking (W/S) group (n = 10) spent a day without talking to anyone; the Without Technology (W/T) group (n = 13) spent a day without using any kind of ICT; and the third, control group (n = 10) had no restrictions. The participants' task in all groups was to write a diary detailing their feelings, thoughts and behaviors related to their group's conditions. Before the experiment, students reported their ICT-related habits. Right after groups were assigned, they reported their task-related impressions. During the experiment, participants wrote diary records at three time-points. All participants used ICT on a daily basis, and most were online all the time. Dominant ICT activities were communication with friends and family and studying, followed by listening to music and watching films. Speech deprivation was a more difficult task than ICT deprivation, resulting in more drop-outs and more negative emotions. However, participants in W/S expected the task to be difficult, and some of them actually reported positive experiences; for others it was a very difficult, lonesome and terrifying experience. About half of the students in W/T claimed that the task was more difficult than they had expected, and some of them realized that they are dysfunctional without technology, and probably addicted to it.

  6. HiResolution and conventional sound processing in the HiResolution bionic ear: using appropriate outcome measures to assess speech recognition ability.

    Science.gov (United States)

    Koch, Dawn Burton; Osberger, Mary Joe; Segel, Phil; Kessler, Dorcas

    2004-01-01

    This study compared speech perception benefits in adults implanted with the HiResolution (HiRes) Bionic Ear who used both conventional and HiRes sound processing. A battery of speech tests was used to determine which formats were most appropriate for documenting the wide range of benefit experienced by cochlear-implant users. A repeated-measures design was used to assess postimplantation speech perception in adults who received the HiResolution Bionic Ear in a recent clinical trial. Patients were fit first with conventional strategies and assessed after 3 months of use. Patients were then switched to HiRes sound processing and assessed again after 3 months of use. To assess the immediate effect of HiRes sound processing on speech perception performance, consonant recognition testing was performed in a subset of patients after 3 days of HiRes use and compared with their 3-month performance with conventional processing. Subjects were implanted and evaluated at 19 cochlear implant programs in the USA and Canada affiliated primarily with tertiary medical centers. Patients were 51 postlinguistically deafened adults. Speech perception was assessed using CNC monosyllabic words, CID sentences and HINT sentences in quiet and noise. Consonant recognition testing was also administered to a subset of patients (n = 30) using the Iowa Consonant Test presented in quiet and noise. All patients completed a strategy preference questionnaire after 6 months of device use. Consonant identification in quiet and noise improved significantly after only 3 days of HiRes use. The mean improvement from conventional to HiRes processing was significant on all speech perception tests. The largest differences occurred for the HINT sentences in noise. Ninety-six percent of the patients preferred HiRes to conventional sound processing. Ceiling effects occurred for both sentence tests in quiet. Although most patients improved after switching to HiRes sound processing, the greatest differences were

  7. Contributions of morphological awareness skills to word-level reading and spelling in first-grade children with and without speech sound disorder.

    Science.gov (United States)

    Apel, Kenn; Lawrence, Jessika

    2011-10-01

    In this study, the authors compared the morphological awareness abilities of children with speech sound disorder (SSD) and children with typical speech skills and examined how morphological awareness ability predicted word-level reading and spelling performance above other known contributors to literacy development. Eighty-eight first-grade students--44 students with SSD and no known history of language deficiencies, and 44 students with typical speech and language skills--completed an assessment battery designed to measure speech sound production, morphological awareness, phonemic awareness, letter-name knowledge, receptive vocabulary, word-level reading, and spelling abilities. The children with SSD scored significantly lower than did their counterparts on the morphological awareness measures as well as on phonemic awareness, word-level reading, and spelling tasks. Regression analyses suggested that morphological awareness predicted significant unique variance on the spelling measure for both groups and on the word-level reading measure for the children with typical skills. These results suggest that children with SSD may present with a general linguistic awareness insufficiency, which puts them at risk for difficulties with literacy and literacy-related tasks.

  8. Native and non-native speech sound processing and the neural mismatch responses: A longitudinal study on classroom-based foreign language learning.

    Science.gov (United States)

    Jost, Lea B; Eberhard-Moscicka, Aleksandra K; Pleisch, Georgette; Heusser, Veronica; Brandeis, Daniel; Zevin, Jason D; Maurer, Urs

    2015-06-01

    Learning a foreign language in a natural immersion context with high exposure to the new language has been shown to change the way speech sounds of that language are processed at the neural level. It remains unclear, however, to what extent this is also the case for classroom-based foreign language learning, particularly in children. To this end, we presented a mismatch negativity (MMN) experiment during EEG recordings as part of a longitudinal developmental study: 38 monolingual (Swiss-) German speaking children (7.5 years) were tested shortly before they started to learn English at school and followed up one year later. Moreover, 22 (Swiss-) German adults were recorded. Instead of the originally found positive mismatch response in children, an MMN emerged when applying a high-pass filter of 3 Hz. The overlap of a slow-wave positivity with the MMN indicates that two concurrent mismatch processes were elicited in children. The children's MMN in response to the non-native speech contrast was smaller compared to the native speech contrast irrespective of foreign language learning, suggesting that no additional neural resources were committed to processing the foreign language speech sound after one year of classroom-based learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. A Randomized Controlled Trial on The Beneficial Effects of Training Letter-Speech Sound Integration on Reading Fluency in Children with Dyslexia.

    Science.gov (United States)

    Fraga González, Gorka; Žarić, Gojko; Tijms, Jurgen; Bonte, Milene; Blomert, Leo; van der Molen, Maurits W

    2015-01-01

    A recent account of dyslexia assumes that a failure to develop automated letter-speech sound integration might be responsible for the observed lack of reading fluency. This study uses a pre-test-training-post-test design to evaluate the effects of a training program based on letter-speech sound associations with a special focus on gains in reading fluency. A sample of 44 children with dyslexia and 23 typical readers, aged 8 to 9, was recruited. Children with dyslexia were randomly allocated to either the training program group (n = 23) or a waiting-list control group (n = 21). The training intensively focused on letter-speech sound mapping and consisted of 34 individual sessions of 45 minutes over a five month period. The children with dyslexia showed substantial reading gains for the main word reading and spelling measures after training, improving at a faster rate than typical readers and waiting-list controls. The results are interpreted within the conceptual framework assuming a multisensory integration deficit as the most proximal cause of dysfluent reading in dyslexia. ISRCTN register ISRCTN12783279.

  10. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues.

    Science.gov (United States)

    David, Marion; Lavandier, Mathieu; Grimault, Nicolas; Oxenham, Andrew J

    2017-09-01

    Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.

  11. Discrimination of speech sound contrasts determined with behavioral tests and event-related potentials in cochlear implant recipients

    NARCIS (Netherlands)

    Beynon, A.J.; Snik, A.F.; Stegeman, D.F.; Broek, P.

    2005-01-01

    Cortical potentials evoked with speech stimuli were investigated in ten experienced cochlear implant (CI, type Nucleus 24M) users using three different speech-coding strategies and two different speech contrasts, one vowel (/i/-/a/) and one consonant (/ba/-/da/) contrast. On average, results showed

  12. Children with speech sound disorder: Comparing a non-linguistic auditory approach with a phonological intervention approach to improve phonological skills

    Directory of Open Access Journals (Sweden)

    Cristina Murphy

    2015-02-01

    This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder. A total of 17 children, aged 7-12 years, with speech sound disorder were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g., backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g., phonological organisation and awareness). Both interventions consisted of twelve 45-minute sessions delivered twice per week, for a total of nine hours. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement in phonological skills was observed in either group. Inter-group analysis demonstrated significant differences in improvement following training between the groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the more effective intervention approach, it was not sufficient to promote the enhancement of phonological skills.

  13. A Comparison of Speech Sound Intervention Delivered by Telepractice and Side-by-Side Service Delivery Models

    Science.gov (United States)

    Grogan-Johnson, Sue; Schmidt, Anna Marie; Schenker, Jason; Alvares, Robin; Rowan, Lynne E.; Taylor, Jacquelyn

    2013-01-01

    Telepractice has the potential to provide greater access to speech-language intervention services for children with communication impairments. Substantiation of this delivery model is necessary for telepractice to become an accepted alternative delivery model. This study investigated the progress made by school-age children with speech sound…

  14. Residual neural processing of musical sound features in adult cochlear implant users

    DEFF Research Database (Denmark)

    Timm, Lydia; Vuust, Peter; Brattico, Evira

    2014-01-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological ... neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights: Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult cochlear implant users. The brains of adult CI users automatically ... Keywords: auditory evoked potentials; cochlear implant; mismatch negativity; music multi-feature paradigm; music perception.

  15. A study on nonlinear characteristics of speech sound with reference to some languages of North East region

    Science.gov (United States)

    Dutta, Rashmi

    INTRODUCTION: Speech science is, in fact, a sub-discipline of nonlinear dynamical systems [2, 104]. There are two different types of dynamical system. A continuous dynamical system may be defined, for the continuous-time case, by the equation dx/dt = F(x), where x is a vector of length d defining a point in a d-dimensional space, F is some function (linear or nonlinear) operating on x, and dx/dt is the time derivative of x. This system is deterministic, in that it is possible to completely specify its evolution, or flow of trajectories, in the d-dimensional space given the initial starting conditions. A discrete dynamical system can be defined as a map (by the process of iteration): x_(n+1) = G(x_n), where x_n is again a d-length vector at time step n, and G is an operator function. Given an initial state x_0, it is possible to calculate the value of x_n for any n > 0. Speech has evolved as a primary form of communication between humans, i.e. speech and hearing are man's most used means of communication [104, 114]. Analysis of human speech has been a goal of research during the last few decades [105, 108]. With the rapid development of information technology (IT), human-machine communication using natural speech has received wide attention from both academic and business communities. One highly quantitative approach to characterizing the communications potential of speech is in terms of information-theory ideas as introduced by Shannon [C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, Vol. 27, pp. 623-656, October 1948]. According to information theory, speech can be represented in terms of its message content, or information. An alternative way of characterizing speech is in terms of the signal carrying the message information, i.e., the acoustic waveform. Although information-theoretic ideas have played a major role in sophisticated communications systems, it is the speech representation based on the waveform, or some
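
    The two definitions above are easy to make concrete in code. The sketch below iterates a discrete dynamical system x_(n+1) = G(x_n); the logistic map used for G is our illustrative choice, not one drawn from the study.

    ```python
    def iterate_map(G, x0, n):
        """Iterate a discrete dynamical system x_(n+1) = G(x_n) and
        return the trajectory [x0, x1, ..., xn]."""
        trajectory = [x0]
        for _ in range(n):
            trajectory.append(G(trajectory[-1]))
        return trajectory

    # The logistic map, a classic one-dimensional nonlinear map that is
    # chaotic at r = 4.0: a simple deterministic G already produces the
    # kind of complex, irregular trajectories nonlinear speech models
    # exploit.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    print(iterate_map(logistic, 0.2, 5))  # [0.2, 0.64, 0.9216, ...]
    ```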

  16. Wavelet-Based Speech Enhancement Using Time-Frequency Adaptation

    Science.gov (United States)

    Wang, Kun-Ching

    2003-12-01

    Wavelet denoising is commonly used for speech enhancement because of the simplicity of its implementation. However, the conventional methods introduce musical residual noise when thresholding the background noise, and the unvoiced components of speech are often eliminated by such methods. In this paper, a novel wavelet coefficient threshold (WCT) algorithm based on time-frequency adaptation is proposed. In addition, an unvoiced speech enhancement algorithm is integrated into the system to improve the intelligibility of speech. The wavelet coefficient threshold of each subband is first temporally adjusted according to the value of the a posteriori signal-to-noise ratio (SNR). To prevent the degradation of unvoiced sounds during noise suppression, the algorithm utilizes a simple speech/noise detector (SND) and further divides the speech signal into unvoiced and voiced sounds. Then, appropriate wavelet thresholding is applied according to the voiced/unvoiced (V/U) decision. Based on the masking properties of the human auditory system, a perceptual gain factor is incorporated into the wavelet thresholding to suppress musical residual noise. Simulation results show that the proposed method is capable of reducing noise with little speech degradation and that the overall performance is superior to several competitive methods.
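
    As a rough illustration of the thresholding core that such methods share, the sketch below soft-thresholds each wavelet subband with a simple SNR-dependent scaling. It assumes the PyWavelets package; the scaling rule, parameter choices, and function names are ours and only approximate the time-frequency adaptation described in the abstract (the voiced/unvoiced decision and the perceptual gain factor are omitted).

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_denoise(noisy, wavelet="db8", level=5):
        """Per-subband soft thresholding of a 1-D speech signal.

        The noise standard deviation is estimated from the finest detail
        band via the median absolute deviation (Donoho's estimator); each
        detail band is then soft-thresholded with the universal threshold,
        scaled down when the band's empirical SNR is high. The scaling is
        a crude stand-in for a posteriori-SNR adaptation.
        """
        coeffs = pywt.wavedec(noisy, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise std estimate
        t_univ = sigma * np.sqrt(2.0 * np.log(len(noisy)))  # universal threshold
        out = [coeffs[0]]                                   # keep approximation band
        for detail in coeffs[1:]:
            band_power = np.mean(detail ** 2)
            snr = max(band_power / (sigma ** 2 + 1e-12) - 1.0, 0.0)
            # High-SNR (speech-dominated) bands get a lower threshold.
            scale = 1.0 / (1.0 + snr)
            out.append(pywt.threshold(detail, t_univ * scale, mode="soft"))
        return pywt.waverec(out, wavelet)

    # Usage (hypothetical): enhanced = wavelet_denoise(noisy_speech_array)
    ```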

  17. Enhancement of brain event-related potentials to speech sounds is associated with compensated reading skills in dyslexic children with familial risk for dyslexia.

    Science.gov (United States)

    Lohvansuu, Kaisa; Hämäläinen, Jarmo A; Tanskanen, Annika; Ervast, Leena; Heikkinen, Elisa; Lyytinen, Heikki; Leppänen, Paavo H T

    2014-12-01

    Specific reading disability, dyslexia, is a prevalent and heritable disorder that impairs reading acquisition and is characterized by a phonological deficit. However, the mechanism by which impaired phonological processing leads to dyslexia or reading disability still remains unclear. Using ERPs, we studied speech sound processing of 30 dyslexic children with familial risk for dyslexia, 51 typically reading children with familial risk for dyslexia, and 58 typically reading control children. We found enhanced brain responses to shortening of a phonemic length in pseudo-words (/at:a/ vs. /ata/) in dyslexic children with familial risk as compared to the other groups. The enhanced brain responses were associated with better performance in a behavioral phonemic length discrimination task, as well as with better reading and writing accuracy. Source analyses revealed that the brain responses of the sub-group of dyslexic children with the largest responses originated from a more posterior area of the right temporal cortex as compared to the responses of the other participants. This is the first electrophysiological evidence for a possible compensatory speech perception mechanism in dyslexia. The best readers within the dyslexic group have probably developed alternative strategies employing compensatory mechanisms that substitute for their possible earlier deficit in phonological processing, and might therefore be able to perform better in phonemic length discrimination and in reading and writing accuracy tasks. However, we speculate that compensatory mechanisms for reading fluency are not as easily built, and dyslexic children remain slow readers during their adult life. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. [The application of cybernetic modeling methods for the forensic medical personality identification based on the voice and sounding speech characteristics].

    Science.gov (United States)

    Kaganov, A Sh; Kir'yanov, P A

    2015-01-01

    The objective of the present publication was to discuss the possibility of applying cybernetic modeling methods to overcome the apparent discrepancy between two kinds of speech records, viz. initial ones (e.g., those obtained in the course of special investigative activities) and the voice prints obtained from the persons subjected to criminalistic examination. The paper is based on literature sources and on the materials of original criminalistic examinations performed by the authors.

  19. The Fossilized Pronunciation of the /3:/ Sound in the Speech of Intermediate Tunisian English Students: Problem, Reasons and Suggested Solution

    Directory of Open Access Journals (Sweden)

    Dr. Chokri Smaoui

    2015-03-01

    Fossilization is a universal phenomenon that has attracted the attention of teachers and researchers alike. In this regard, the aim of this study is to investigate a supposedly fossilized feature in Tunisian learners' performance, namely the pronunciation of the /3:/ sound among Intermediate Tunisian English Students (ITES). It tries to show whether ITES pronounce it correctly or whether it is instead often replaced by another phoneme. The study also tries to identify the reasons behind fossilization. It is conjectured that L1 interference, lack of exposure to L2 input, and the absence of pronunciation teaching methods are the main factors behind this fossilized pronunciation. Finally, the study tries to apply the audio-articulation method to remedy this type of fossilization. This method contains many drills that can help learners articulate better, and consequently produce more intelligible sounds.

  20. Development of computer program ENAUDIBL for computation of the sensation levels of multiple, complex, intrusive sounds in the presence of residual environmental masking noise

    Energy Technology Data Exchange (ETDEWEB)

    Liebich, R. E.; Chang, Y.-S.; Chun, K. C.

    2000-03-31

    The relative audibility of multiple sounds occurs in separate, independent channels (frequency bands) termed critical bands or equivalent rectangular (filter-response) bandwidths (ERBs) of frequency. The true nature of human hearing is a function of a complex combination of subjective factors, both auditory and nonauditory. Assessment of the probability of individual annoyance, community-complaint reaction levels, speech intelligibility, and the most cost-effective mitigation actions requires sensation-level data; these data are one of the most important auditory factors. However, sensation levels cannot be calculated by using single-number, A-weighted sound level values. This paper describes specific steps to compute sensation levels. A unique, newly developed procedure is used, which simplifies and improves the accuracy of such computations by the use of maximum sensation levels that occur, for each intrusive-sound spectrum, within each ERB. The newly developed program ENAUDIBL makes use of ERB sensation-level values generated with some computational subroutines developed for the formerly documented program SPECTRAN.
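
    The ERB framework referred to above can be made concrete with Glasberg and Moore's (1990) bandwidth formula; the sensation-level step below is a simplified sketch of the general idea, with hypothetical band levels and masked thresholds as inputs. It is not the ENAUDIBL code.

    ```python
    import numpy as np

    def erb_bandwidth(f_hz):
        """Equivalent rectangular bandwidth (Hz) at centre frequency
        f_hz, per Glasberg & Moore (1990): ERB = 24.7 * (4.37 f + 1),
        with f in kHz."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def erb_number(f_hz):
        """ERB-number scale (Cams): the number of ERBs below f_hz."""
        return 21.4 * np.log10(1.0 + 0.00437 * f_hz)

    def sensation_levels(band_levels_db, masked_thresholds_db):
        """Per-ERB sensation level: intrusive-sound band level minus the
        masked threshold set by residual environmental noise in the same
        band, floored at 0 dB (bands at or below threshold are inaudible
        and contribute nothing)."""
        sl = np.asarray(band_levels_db) - np.asarray(masked_thresholds_db)
        return np.maximum(sl, 0.0)

    # Hypothetical example: three ERB-band levels of an intrusive sound
    # compared against masked thresholds measured in the same bands.
    print(erb_bandwidth(1000.0))                    # ~132.6 Hz
    print(sensation_levels([42.0, 35.0, 18.0],
                           [30.0, 33.0, 25.0]))     # [12.  2.  0.]
    ```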

  1. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  2. Speech Quality Measurement

    Science.gov (United States)

    1978-05-01

    [2.27] The Sound Pattern of English, N. Chomsky and M. Halle, Harper & Row, New York, 1968. [2.28] "Speech Synthesis by Rule," J. N. Holmes, I. G. ... L. H. Nakatani, B. J. McDermott, "Effect of Pitch and Formant Manipulations on Speech Quality," Bell Telephone Laboratories, Technical Memorandum, 72

  3. Transcription-based and acoustic analyses of rhotic vowels produced by children with and without speech sound disorders: further analyses from the Memphis Vowel Project.

    Science.gov (United States)

    Chung, Hyunju; Farr, Kathryn; Pollock, Karen E

    2014-05-01

    The acquisition of rhotic monophthongs (/ɝ/ and /ɚ/) and diphthongs (/ɪ͡ɚ, ɛ͡ɚ, ɔ͡ɚ and ɑ͡ɚ/) was examined in 3- and 4-year-old children with and without speech sound disorders (SSDs), using both transcription-based and acoustic analyses. African-American (AA) and European-American (EA) participants (n = 40) with and without SSD were selected from archival data collected as part of the Memphis Vowel Project. Dialect variation influenced rhotic vowels differently for EA and AA children; thus, their data were reported separately. Transcription-based analyses showed wide variation in the accuracy of different rhotic vowels. The most frequent error pattern for children with SSD was Derhoticization to a Back Rounded Vowel (e.g. /ɝ/ → [ʊ]; /ɪ͡ɚ/ → [ɪ͡o]). Rhotic diphthong reduction errors were less frequent; however, Coalescence (/ɑ͡ɚ/ → [ɔ]) was often observed for /ɑ͡ɚ/. F2, F3 and F3-F2 spectral movement patterns revealed differences between productions transcribed as correct and incorrect.

  4. Effects of Blocked and Random Practice Schedule on Outcomes of Sound Production Treatment for Acquired Apraxia of Speech: Results of a Group Investigation.

    Science.gov (United States)

    Wambaugh, Julie L; Nessler, Christina; Wright, Sandra; Mauszycki, Shannon C; DeLong, Catharine; Berggren, Kiera; Bailey, Dallin J

    2017-06-22

    The purpose of this investigation was to compare the effects of schedule of practice (i.e., blocked vs. random) on outcomes of Sound Production Treatment (SPT; Wambaugh, Kalinyak-Fliszar, West, & Doyle, 1998) for speakers with chronic acquired apraxia of speech and aphasia. A combination of group and single-case experimental designs was used. Twenty participants each received SPT administered with randomized stimuli presentation (SPT-R) and SPT applied with blocked stimuli presentation (SPT-B). Treatment effects were examined with respect to accuracy of articulation as measured in treated and untreated experimental words produced during probes. All participants demonstrated improved articulation of treated items with both practice schedules. Effect sizes were calculated to estimate the magnitude of change for treated and untreated items by treatment condition. No significant differences in effect size were found between SPT-R and SPT-B. Percent change over the highest baseline performance was also calculated to provide a clinically relevant indication of improvement. Change scores associated with SPT-R were significantly higher than those for SPT-B for treated items but not untreated items. SPT can result in improved articulation regardless of schedule of practice. However, SPT-R may result in greater gains for treated items. https://doi.org/10.23641/asha.5116831.

  5. Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies

    Directory of Open Access Journals (Sweden)

    Gorka Fraga González

    2017-01-01

    We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for establishing and consolidating L-SS associations. Then we review brain potential studies, including our own, that yielded two markers associated with reading fluency. Here we show that the marker related to visual specialization (N170) predicts word and pseudoword reading fluency in children who received additional practice in the processing of morphological word structure. Conversely, L-SS integration (indexed by the mismatch negativity, MMN) may only remain important when direct orthography-to-semantics conversion is not possible, such as in pseudoword reading. In addition, the correlation between these two markers supports the notion that multisensory integration facilitates visual specialization. Finally, we review the role of implicit learning and executive functions in audiovisual learning in dyslexia. Implications for remedial research are discussed and suggestions for future studies are presented.

  6. Sound Scene Database in Real Acoustical Environments, Proc. First International Workshop on East-Asian Language Resource and Evaluation

    OpenAIRE

    Satoshi Nakamura; Kazuo Hiyane; Futoshi Asano; Takashi Endo

    1998-01-01

    This paper describes a sound scene database for studies such as sound source localization, sound retrieval, sound recognition and speech recognition in real acoustical environments. Many speech databases have been released for speech recognition. However, only a few databases for non-speech sound in the real sound scene exist. It is clear that common databases for acoustical signal processing and sound recognition are necessary. Two approaches are taken to build the sound scene database in ou...

  8. Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters.

    Science.gov (United States)

    Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo

    2013-01-01

    We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters being larger during the visual phonological than non-phonological task suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological tasks and during the visual phonological tasks was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables.

  9. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and add to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation: Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The

  10. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  11. Exposure of sea otters and harlequin ducks in Prince William Sound, Alaska, USA, to shoreline oil residues 20 years after the Exxon Valdez oil spill.

    Science.gov (United States)

    Neff, Jerry M; Page, David S; Boehm, Paul D

    2011-03-01

    We assessed whether sea otters and harlequin ducks in an area of western Prince William Sound, Alaska, USA (PWS), oiled by the 1989 Exxon Valdez oil spill (EVOS), are exposed to polycyclic aromatic hydrocarbons (PAH) from oil residues 20 years after the spill. Spilled oil has persisted in PWS for two decades as surface oil residues (SOR) and subsurface oil residues (SSOR) on the shore. The rare SOR are located primarily on the upper shore as inert, nonhazardous asphaltic deposits, and SSOR are confined to widely scattered locations as small patches under a boulder/cobble veneer, primarily on the middle and upper shore, in forms and locations that preclude physical contact by wildlife and diminish bioavailability. Sea otters and harlequin ducks consume benthic invertebrates that they collect by diving to the bottom in the intertidal and subtidal zones. Sea otters also dig intertidal and subtidal pits in search of clams. The three plausible exposure pathways are through the water, in oil-contaminated prey, or by direct contact with SSOR during foraging. Concentrations of PAH in near-shore water off oiled shores in 2002 to 2005 were at background levels (<0.05 ng/L). Median concentrations of PAH in five intertidal prey species on oiled shores in 2002 to 2008 range from 4.0 to 34 ng/g dry weight, indistinguishable from background concentrations. Subsurface oil residues are restricted to locations on the shore and substrate types, where large clams do not occur and where sea otters do not dig foraging pits. Therefore, that sea otters and harlequin ducks continue to be exposed to environmentally significant amounts of PAH from EVOS 20 years after the spill is not plausible. Copyright © 2010 SETAC.

  12. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recording is well established, when the background noise contains vocal noises and the main signal is a breath sound...

  13. Comparison of speech discrimination in noise and directional hearing with 2 different sound processors of a bone-anchored hearing system in adults with unilateral severe or profound sensorineural hearing loss.

    Science.gov (United States)

    Wesarg, Thomas; Aschendorff, Antje; Laszig, Roland; Beck, Rainer; Schild, Christian; Hassepass, Frederike; Kroeger, Stefanie; Hocke, Thomas; Arndt, Susan

    2013-08-01

    To evaluate and compare the benefit of a bone-anchored hearing implant with 2 different sound processors in adult patients with unilateral severe to profound sensorineural hearing loss (UHL). Prospective crossover design. Tertiary referral center. Eleven adults with UHL and normal hearing in the contralateral ear were assigned to 2 groups. All subjects were unilaterally implanted with a bone-anchored hearing implant and were initially fitted with 2 different sound processors (SP-1 and SP-2). SP-1 is a multichannel device equipped with an omnidirectional microphone and relatively simple digital signal-processing technology and provides a user-adjustable overall gain and tone control with compression limiting. SP-2 is a fully channel-by-channel programmable device, which can be set with nonlinear dynamic range compression or linear amplification. In addition, SP-2 features automatic noise management, an automatic multichannel directional microphone, microphone position compensation, and an implementation of prescription rules for different types of hearing losses, one of them unilateral deafness. After at least 1-month use of the initial processor, both groups were fitted with the alternative processor. Speech discrimination in noise and localization tests were performed at baseline visit before surgery, after at least 1-month use of the initial processor, and after at least 2-week use of the alternative processor. Relative to unaided baseline, SP-2 enabled significantly better overall speech discrimination results, whereas there was no overall improvement with SP-1. There was no difference in speech discrimination between SP-1 and SP-2 in all spatial settings. Sound localization was comparably poor at baseline and with both processors but significantly better than chance level for all 3 conditions. Patients with UHL have an overall objective benefit for speech discrimination in noise using a bone-anchored hearing implant with SP-2. In contrast, there is no overall…

  14. Speech perception as categorization.

    Science.gov (United States)

    Holt, Lori L; Lotto, Andrew J

    2010-07-01

    Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words. This is an example of categorization, in that potentially discriminable speech sounds are assigned to functionally equivalent classes. In this tutorial, we present some of the main challenges to our understanding of the categorization of speech sounds and the conceptualization of SP that has resulted from these challenges. We focus here on issues and experiments that define open research questions relevant to phoneme categorization, arguing that SP is best understood as perceptual categorization, a position that places SP in direct contact with research from other areas of perception and cognition.

  15. Cortical activity patterns predict robust speech discrimination ability in noise

    Science.gov (United States)

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
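
    The classifier described can be sketched as nearest-template matching over spatiotemporal activity patterns. The Euclidean distance and the array layout below are illustrative assumptions; the study's actual classifier details may differ:

        import numpy as np

        def classify_trial(trial_pattern, templates):
            """Assign a single-trial A1 activity pattern (neurons x time bins) to
            the speech sound whose average (template) pattern it matches best."""
            best_label, best_dist = None, float("inf")
            for label, template in templates.items():
                dist = np.linalg.norm(trial_pattern - template)  # pattern mismatch
                if dist < best_dist:
                    best_label, best_dist = label, dist
            return best_label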

  16. The Beginnings of Danish Speech Perception

    DEFF Research Database (Denmark)

    Østerbye, Torkil

    Little is known about the perception of speech sounds by native Danish listeners. However, the Danish sound system differs in several interesting ways from the sound systems of other languages. For instance, Danish is characterized, among other features, by a rich vowel inventory and by different reductions of speech sounds evident in the pronunciation of the language. This book (originally a PhD thesis) consists of three studies based on the results of two experiments. The experiments were designed to provide knowledge of the perception of Danish speech sounds by Danish adults and infants, in the light of the rich and complex Danish sound system. The first two studies report on native adults' perception of Danish speech sounds in quiet and noise. The third study examined the development of language-specific perception in native Danish infants at 6, 9 and 12 months of age. The book points…

  17. INVERSE FILTERING TECHNIQUES IN SPEECH ANALYSIS

    African Journals Online (AJOL)

    Dr Obe

    features in the speech process: (i) the resonant structure of the vocal-tract transfer function, i.e., formant analysis; (ii) the glottal wave; (iii) the fundamental frequency or pitch of the sound. During the production of speech, the configuration of the articulators (the vocal tract: tongue, teeth, lips, etc.) changes from one sound to…
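
    In the linear-prediction formulation of such analysis, the vocal-tract resonances are modeled by an all-pole filter 1/A(z); passing the speech back through A(z) removes the formant structure and leaves a residual that approximates the glottal wave. A minimal sketch, assuming the LPC coefficients have already been estimated:

        from scipy.signal import lfilter

        def inverse_filter(speech_frame, lpc_coeffs):
            """Apply the inverse vocal-tract filter A(z) = 1 + a1*z^-1 + ... + ap*z^-p.
            The output (the prediction residual) approximates the glottal source."""
            return lfilter(lpc_coeffs, [1.0], speech_frame)

        # Pitch can then be read from the residual, e.g. from the spacing of its
        # periodic peaks (one per glottal closure).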

  18. Source Assignment and Feature Extraction in Speech

    Science.gov (United States)

    Ades, Anthony E.

    1977-01-01

    Three experiments investigated the relationship in speech perception between the mechanisms that determine the source of speech sounds and those that analyze their actual acoustic contents and extract from them the acoustic cues to a sound's phonetic description. (Author/RK)

  19. Tipos de erros de fala em crianças com transtorno fonológico em função do histórico de otite média Speech errors in children with speech sound disorders according to otitis media history

    Directory of Open Access Journals (Sweden)

    Haydée Fiszbein Wertzner

    2012-12-01

    Full Text Available PURPOSE: To describe articulatory indexes for the different types of speech errors and to verify the existence of a preferred error type in children with speech sound disorder, according to the presence or absence of a history of otitis media. METHODS: Participants in this prospective, cross-sectional study were 21 subjects aged between 5 years 2 months and 7 years 9 months, with a diagnosis of speech sound disorder. Subjects were grouped according to history of otitis media: experimental group 1 (GE1) comprised 14 subjects with a history of otitis media, and experimental group 2 (GE2) comprised seven subjects without such a history. The number of speech errors (distortions, omissions, and substitutions) and the articulatory indexes were calculated, and the data were submitted to statistical analysis. RESULTS: GE1 and GE2 differed in index performance when the two phonology tasks were compared. In all analyses, the indexes assessing substitutions indicated the error type most frequently produced by children with speech sound disorder. CONCLUSION: The indexes were effective in indicating substitution as the most frequent error in children with speech sound disorder. The higher occurrence of speech errors in picture naming among children with a history of otitis media indicates that such errors are possibly associated with difficulty in phonological representation caused by the transient hearing loss these children experienced.

  20. A efetividade dos testes complementares no acompanhamento da intervenção terapêutica no transtorno fonológico Effectiveness of complementary tests in monitoring therapeutic intervention in speech sound disorders

    Directory of Open Access Journals (Sweden)

    Haydée Fiszbein Wertzner

    2012-12-01

    Full Text Available Therapeutic planning and monitoring of children with speech sound disorders are directly related to the initial assessment and to the complementary tests applied. Monitoring the case through regular evaluations adds important information to the diagnostic assessment, strengthening the initial findings regarding the underlying difficulty identified in the initial evaluation. In this case study, we verified the effectiveness and efficiency of the revised percentage of consonants correct index (PCC-R), as well as of the complementary tests of speech inconsistency, stimulability, and metaphonological abilities, in monitoring therapeutic intervention in children with speech sound disorders. Three male children participated in this study. At the initial assessment, Case 1 was 6 years 9 months old; Case 2, 8 years 10 months; and Case 3, 9 years 7 months. In addition to the specific phonological assessment, complementary tests were applied to help identify the specific underlying difficulty in each case: the subjects were assessed for metaphonological abilities, speech inconsistency, and stimulability. Joint analysis of the data showed that the selected tests were effective and efficient both in complementing the diagnosis and in indicating changes in the three cases of children with speech sound disorder.
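
    The PCC-R index that recurs in these records has a simple definition: in the revised index, distortions are scored as correct, so only omissions and substitutions count as errors. A minimal sketch, with the counts assumed to come from a transcribed speech sample:

        def pcc_r(correct, distorted, omitted, substituted):
            """Percentage of Consonants Correct - Revised (PCC-R): distortions
            count as correct; omissions and substitutions are errors."""
            correct_total = correct + distorted
            attempted = correct_total + omitted + substituted
            return 100.0 * correct_total / attempted

        # e.g. pcc_r(80, 5, 3, 12) -> 85.0 (85 of 100 attempted consonants scored correct)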

  1. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals' frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves…

  2. What can a string of letters possibly mean when it is not a word? Speech sounds as one answer to the search for meaning in Finnegans Wake.

    Science.gov (United States)

    Whissell, Cynthia

    2015-02-01

    In his last novel, Finnegans Wake, James Joyce attempted to redirect readers' search for meaning away from traditional paths by using many non-words (combinations, foreign words, neologisms, and onomatopoeic words). Words and non-words in the novel were analyzed in terms of the emotional meanings of their constituent sounds using the model developed by Whissell where motor responses involved in enunciating sounds are associated with their emotional meaning. Significant sound-emotion differences were identified among and within chapters. "Smiling" pleasant long e (as in "tee") was used at higher rates in successive chapters and "sighing" passive AO (as in "Shaun") was used at especially high rates in Chapters 8 ("Anna Livia Plurabelle") and 12 ("Mamalujo"). Sound emotionality is one of the alternative paths to meaning in the novel.
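
    The chapter-by-chapter comparisons reduce to per-sound usage rates, which are then paired with each sound's emotional rating. A minimal sketch with invented counts (Whissell's actual ratings and phoneme inventory are not reproduced here):

        def sound_rate(phoneme_counts, sound, per=1000):
            """Usage rate of one speech sound per `per` phonemes in a chapter."""
            total = sum(phoneme_counts.values())
            return per * phoneme_counts.get(sound, 0) / total

        chapter_counts = {"AO": 140, "long_e": 55, "t": 310}  # invented counts
        print(sound_rate(chapter_counts, "AO"))               # rate of "sighing" AO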

  3. A Quantitative Ecological Risk Assessment of the Toxicological Risks from Exxon Valdez Subsurface Oil Residues to Sea Otters at Northern Knight Island, Prince William Sound, Alaska.

    Science.gov (United States)

    Harwell, Mark A; Gentile, John H; Johnson, Charles B; Garshelis, David L; Parker, Keith R

    2010-07-01

    A comprehensive, quantitative risk assessment is presented of the toxicological risks from buried Exxon Valdez subsurface oil residues (SSOR) to a subpopulation of sea otters (Enhydra lutris) at Northern Knight Island (NKI) in Prince William Sound, Alaska, as it has been asserted that this subpopulation of sea otters may be experiencing adverse effects from the SSOR. The central questions in this study are: could the risk to NKI sea otters from exposure to polycyclic aromatic hydrocarbons (PAHs) in SSOR, as characterized in 2001-2003, result in individual health effects, and, if so, could that exposure cause subpopulation-level effects? We follow the U.S. Environmental Protection Agency (USEPA) risk paradigm by: (a) identifying potential routes of exposure to PAHs from SSOR; (b) developing a quantitative simulation model of exposures using the best available scientific information; (c) developing scenarios based on calculated probabilities of sea otter exposures to SSOR; (d) simulating exposures for 500,000 modeled sea otters and extracting the 99.9% quantile most highly exposed individuals; and (e) comparing projected exposures to chronic toxicity reference values. Results indicate that, even under conservative assumptions in the model, maximum-exposed sea otters would not receive a dose of PAHs sufficient to cause any health effects; consequently, no plausible toxicological risk exists from SSOR to the sea otter subpopulation at NKI.
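
    Step (d) of such an assessment is a Monte Carlo simulation: per-otter exposure parameters are drawn from probability distributions, doses are computed, and the upper tail is compared to a toxicity reference value. A minimal sketch in which every distribution and parameter value is an invented placeholder, not a figure from the study:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_doses(n_otters=500_000):
            """Draw per-otter exposures and return doses plus the 99.9% quantile."""
            contacts_per_year = rng.poisson(2.0, n_otters)            # SSOR contacts while foraging
            pah_per_contact_mg = rng.lognormal(-3.0, 1.0, n_otters)   # mg PAH per contact
            dose = contacts_per_year * pah_per_contact_mg             # mg/year
            return dose, np.quantile(dose, 0.999)

        dose, q999 = simulate_doses()
        # Risk characterization: compare q999 against a chronic toxicity reference value.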

  4. Sensorimotor Interactions in Speech Learning

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    2011-10-01

    Full Text Available Auditory input is essential for normal speech development and plays a key role in speech production throughout the life span. In traditional models, auditory input plays two critical roles: (1) establishing the acoustic correlates of speech sounds that serve, in part, as the targets of speech production, and (2) serving as a source of feedback about a talker's own speech outcomes. This talk will focus on both of these roles, describing a series of studies that examine the capacity of children and adults to adapt to real-time manipulations of auditory feedback during speech production. In one study, we examined sensory and motor adaptation to a manipulation of auditory feedback during production of the fricative “s”. In contrast to prior accounts, adaptive changes were observed not only in speech motor output but also in subjects' perception of the sound. In a second study, speech adaptation was examined following a period of auditory–perceptual training targeting the perception of vowels. The perceptual training was found to systematically improve subjects' motor adaptation response to altered auditory feedback during speech production. The results of both studies support the idea that perceptual and motor processes are tightly coupled in speech production learning, and that the degree and nature of this coupling may change with development.

  5. A cross-language study of the speech sounds in Yorùbá and Malay: Implications for Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Boluwaji Oshodi

    2013-07-01

    Full Text Available Acquiring a language begins with the knowledge of its sound system, which falls under the branch of linguistics known as phonetics. The knowledge of the sound system is very important to prospective learners, particularly L2 learners whose L1 exhibits sounds and features different from the target L2, because this knowledge is vital in order to internalise the correct pronunciation of words. This study examined and contrasted the sound system of Yorùbá, a Niger-Congo language spoken in Nigeria, with that of Malay (Peninsular variety), an Austronesian language spoken in Malaysia, with emphasis on the areas of difference. The data for this study were collected from ten participants: five native female Malay speakers who are married to Yorùbá native speakers but live in Malaysia, and five Yorùbá native speakers who reside in Nigeria. The findings revealed that speakers from both sides have difficulties with sounds and features in the L2 which are not attested in their L1, and they tended to substitute them with similar ones from their L1 through transfer. This confirms the fact that asymmetry between the sound systems of the L1 and L2 is a major source of error in L2 acquisition.

  6. Effects of Familiarity and Feeding on Newborn Speech-Voice Recognition

    Science.gov (United States)

    Valiante, A. Grace; Barr, Ronald G.; Zelazo, Philip R.; Brant, Rollin; Young, Simon N.

    2013-01-01

    Newborn infants preferentially orient to familiar over unfamiliar speech sounds. They are also better at remembering unfamiliar speech sounds for short periods of time if learning and retention occur after a feed than before. It is unknown whether short-term memory for speech is enhanced when the sound is familiar (versus unfamiliar) and, if so,…

  7. My Speech Problem, Your Listening Problem, and My Frustration: The Experience of Living with Childhood Speech Impairment

    Science.gov (United States)

    McCormack, Jane; McLeod, Sharynne; McAllister, Lindy; Harrison, Linda J.

    2010-01-01

    Purpose: The purpose of this article was to understand the experience of speech impairment (speech sound disorders) in everyday life as described by children with speech impairment and their communication partners. Method: Interviews were undertaken with 13 preschool children with speech impairment (mild to severe) and 21 significant others…

  8. Which clinical signs are valid indicators for speech language disorder?

    NARCIS (Netherlands)

    Dr. Margreet R. Luinge; Margot I. Visser-Bochane; Dr. C.P. van der Schans; Sijmen A. Reijneveld; W.P. Krijnen

    2016-01-01

    Speech language disorders, which include speech sound disorders and language disorders, are common in early childhood. These problems, and in particular language problems, frequently go underdiagnosed, because current screening instruments have unsatisfactory psychometric properties. Recent research…

  9. Evaluation of speech reception threshold in noise in young Cochlear™ Nucleus® system 6 implant recipients using two different digital remote microphone technologies and a speech enhancement sound processing algorithm.

    Science.gov (United States)

    Razza, Sergio; Zaccone, Monica; Meli, Annalisa; Cristofari, Eliana

    2017-12-01

    Children affected by hearing loss can experience difficulties in challenging and noisy environments even when deafness is corrected by cochlear implant (CI) devices. These patients have a selective attention deficit in multiple listening conditions. At present, the most effective ways to improve the performance of speech recognition in noise consist of providing CI processors with noise reduction algorithms and of providing patients with bilateral CIs. The aim of this study was to compare speech performance in noise, across increasing noise levels, in CI recipients using two kinds of wireless remote-microphone radio systems that use digital radio frequency transmission: the Roger Inspiro accessory and the Cochlear Wireless Mini Microphone accessory. Eleven young users of the Cochlear Nucleus CP910 CI were studied. The signal-to-noise ratio at a speech reception threshold (SRT) value of 50% was measured in different conditions for each patient: with the CI only, with the Roger accessory, or with the Mini Mic accessory. The effect of applying the SNR noise reduction algorithm in each of these conditions was also assessed. The tests were performed with the subject positioned in front of the main speaker, at a distance of 2.5 m; another two speakers were positioned at 3.50 m. The main speaker presented disyllabic words at 65 dB, and babble noise of variable intensity was delivered through the other speakers. The use of both wireless remote microphones improved the SRT results. Both systems improved speech performance; the gain was larger with the Mini Mic system (SRT = -4.76) than with the Roger system (SRT = -3.01). The addition of the NR algorithm did not further improve the results to a statistically significant degree. There is significant improvement in speech recognition results with both wireless digital remote microphone accessories, in particular with the Mini Mic system when used with the CP910 processor. The use of a remote microphone accessory surpasses the benefit of…
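
    SRT-at-50% measurements of this kind typically use an adaptive track: the SNR is lowered after a correct response and raised after an error, so the track converges near the 50%-correct point. A minimal sketch of a one-up/one-down staircase; step size, trial count and scoring are illustrative assumptions, not the study's exact procedure:

        def adaptive_srt(present_word, start_snr_db=10.0, step_db=2.0, n_trials=30):
            """1-up/1-down staircase converging near 50% intelligibility.
            `present_word(snr)` plays one word at the given SNR and returns
            True if the listener repeats it correctly."""
            snr, track = start_snr_db, []
            for _ in range(n_trials):
                track.append(snr)
                snr += -step_db if present_word(snr) else step_db
            return sum(track[-10:]) / 10.0  # SRT: mean SNR over the final trials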

  10. Speech Problems

    Science.gov (United States)

    ... a person's ability to speak clearly. Some common speech and language disorders: Stuttering is a problem that ...

  11. Sound Hole Sound

    OpenAIRE

    Politzer, David

    2015-01-01

    The volume of air that goes in and out of a musical instrument's sound hole is related to the sound hole's contribution to the volume of the sound. Helmholtz's result for the simplest case of steady flow through an elliptical hole is reviewed. Measurements on multiple holes in sound box geometries and scales relevant to real musical instruments demonstrate the importance of a variety of effects. Electric capacitance of single flat plates is a mathematically identical problem, offering an alternative…
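
    For context, the steady-flow result reviewed here is what fixes the effective neck length of a sound hole in the standard Helmholtz resonance formula, which with the usual notation reads

        f_0 = \frac{c}{2\pi}\sqrt{\frac{A}{V\,L_{\mathrm{eff}}}}

    where c is the speed of sound, A the hole area, V the enclosed air volume, and L_eff the end-corrected effective neck length; for a hole in a thin plate, L_eff comes almost entirely from the end corrections that the elliptical-hole (equivalently, capacitance) calculation supplies.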

  12. Speech and the right hemisphere.

    Science.gov (United States)

    Critchley, E M

    1991-01-01

    Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension, even more so than of speech production (identifying the voice, its affective components, gestural interpretation and monitoring one's own speech), may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  13. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 experienced CI users with postlingual profound deafness who scored at least 80% correct on the Oldenburg sentence test (OLSA) in quiet with their current speech processor and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significantly improved signal-to-noise ratio for speech comprehension thresholds (i.e., the signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg…

  14. Rehabilitation of Oronasal Speech Disorders

    Directory of Open Access Journals (Sweden)

    Hashem Shemshadi

    2006-09-01

    Full Text Available The oronasal region, an important organ of taste and smell, is respected for its impact on resonance, which is crucial for normal speech production. Different congenital, acquired, and/or developmental defects may have an impact not only on the quality of respiration, phonation, and resonance, but also on the process of normal speech. This article will enable readers to focus more closely on disorders of these important neuroanatomical speech zones and on the proper rehabilitation methods for different derangements. Among all other defects, oronasal malfunctions definitely influence oronasal sound resonance and further impair normal speech production. A rehabilitative approach by a speech and language pathologist is highly recommended to alleviate most oronasal speech disorders.

  15. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
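
    The linear prediction model referred to here approximates each speech sample as a weighted sum of the previous p samples; a codec transmits the predictor coefficients plus a compact description of the residual. A minimal sketch of coefficient estimation by the autocorrelation method (Levinson-Durbin recursion), with frame length and order as assumptions:

        import numpy as np

        def lpc_coefficients(frame, order=10):
            """LPC by the autocorrelation method, solved with the Levinson-Durbin
            recursion. Returns [1, a1, ..., ap] and the final prediction-error power."""
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
            a = np.zeros(order + 1)
            a[0], err = 1.0, r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
                k = -acc / err              # reflection coefficient
                a[1:i] += k * a[1:i][::-1]  # update previous coefficients
                a[i] = k
                err *= 1.0 - k * k          # shrink the prediction-error power
            return a, err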

  16. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system, thus validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimulus sets. Preliminary results show… the performance of personal sound-zone systems…

  17. Speech and the Right Hemisphere

    Directory of Open Access Journals (Sweden)

    E. M. R. Critchley

    1991-01-01

    Full Text Available Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension even more so than of speech production—identifying the voice, its affective components, gestural interpretation and monitoring one's own speech—may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  18. Fatores causais e aplicação de provas complementares relacionadas à gravidade no transtorno fonológico Causal factors and application of complementary tests in speech sound disorders

    Directory of Open Access Journals (Sweden)

    Haydée Fiszbein Wertzner

    2012-01-01

    Full Text Available PURPOSE: To determine whether the severity index that measures the percentage of consonants correct distinguishes children with speech sound disorders (SSD) according to measures of stimulability and speech inconsistency, as well as to the presence of familial history and history of otitis media. METHODS: Participants were 15 subjects aged between 5 years and 7 years 11 months, with a diagnosis of SSD. The Percentage of Consonants Correct - Revised (PCC-R) index was calculated separately for the word imitation and picture naming tasks. Based on these tasks, the need to administer the stimulability task was also determined, according to criteria proposed in previous research. The speech inconsistency task allowed classifying the subjects as consistent or inconsistent. Data were submitted to statistical analysis. RESULTS: Comparing the PCC-R values measured in the naming and imitation tasks, a difference was observed regarding the need to administer the stimulability task. No evidence of this relationship was found for the speech inconsistency task. No difference in PCC-R was found considering the presence of histories of otitis media and familial SSD. CONCLUSION: The study indicates that children who required administration of the stimulability task had lower PCC-R values. However, regarding the speech inconsistency task and the otitis media or familial histories, the PCC-R did not differentiate the children.

  19. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  20. Hearing speech in music

    Directory of Open Access Journals (Sweden)

    Seth-Reino Ekström

    2011-01-01

    Full Text Available The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.
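
    Presenting every masker, music or noise, at the same equivalent level (50 dBA) amounts to equalizing the A-weighted energy of each sound over its duration. A minimal sketch of the equivalent-level computation, assuming the signal has already been A-weighted and calibrated in pascals:

        import numpy as np

        def leq_db(pressure_pa, p_ref=20e-6):
            """Equivalent continuous level of a (pre-A-weighted) pressure signal."""
            return 10.0 * np.log10(np.mean(pressure_pa ** 2) / p_ref ** 2)

        # Maskers can then be rescaled to a target: gain = 10 ** ((target_db - leq_db(x)) / 20)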

  1. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  2. Speech-to-Speech Relay Service

    Science.gov (United States)

    Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  3. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  4. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  5. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic…
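
    The harmonic product spectrum named here multiplies integer-downsampled copies of the magnitude spectrum so that harmonic energy piles up at the fundamental's bin. A minimal sketch; the frame length, window and harmonic count are assumptions, and the paper's features built on the pitch-error measure are not reproduced:

        import numpy as np

        def hps_pitch(frame, sample_rate, n_harmonics=4):
            """Estimate the pitch (Hz) of one frame via the harmonic product spectrum."""
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            hps = spectrum.copy()
            for h in range(2, n_harmonics + 1):
                n = len(spectrum) // h
                hps[:n] *= spectrum[::h][:n]    # fold harmonic h onto the fundamental
            search = hps[1:len(spectrum) // n_harmonics]  # skip DC, stay in folded range
            bin_f0 = int(np.argmax(search)) + 1
            return bin_f0 * sample_rate / len(frame)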

  6. Anatomy and Physiology of the Speech Mechanism.

    Science.gov (United States)

    Sheets, Boyd V.

    This monograph on the anatomical and physiological aspects of the speech mechanism stresses the importance of a general understanding of the process of verbal communication. Contents include "Positions of the Body," "Basic Concepts Linked with the Speech Mechanism," "The Nervous System," "The Respiratory System--Sound-Power Source," "The…

  7. Speech masking and cancelling and voice obscuration

    Science.gov (United States)

    Holzrichter, John F.

    2013-09-10

    A non-acoustic sensor is used to measure a user's speech; an obscuring acoustic signal is then broadcast, diminishing the user's vocal acoustic output intensity and/or distorting the voice sounds so as to make them unintelligible to persons nearby. The non-acoustic sensor is positioned proximate or contacting a user's neck or head skin tissue for sensing speech production information.

  8. A Comparison of Two Theories of Speech/Language Behavior.

    Science.gov (United States)

    McQuillen, Jeffrey S.; Quigley, Tracy A.

    Two theories of speech appear to parallel each other closely, though one (E. Nuttall) is concerned mainly with speech from a functional perspective, and the other (F. Williams and R. Naremore) presents a developmental hierarchy of language form and function. Nuttall suggests there are two main origins of speech: sounds of discomfort (cries,…

  9. The Tuning of Human Neonates' Preference for Speech

    Science.gov (United States)

    Vouloumanos, Athena; Hauser, Marc D.; Werker, Janet F.; Martin, Alia

    2010-01-01

    Human neonates prefer listening to speech compared to many nonspeech sounds, suggesting that humans are born with a bias for speech. However, neonates' preference may derive from properties of speech that are not unique but instead are shared with the vocalizations of other species. To test this, thirty neonates and sixteen 3-month-olds were…

  10. The Influence of Bilingualism on Speech Production: A Systematic Review

    Science.gov (United States)

    Hambly, Helen; Wren, Yvonne; McLeod, Sharynne; Roulstone, Sue

    2013-01-01

    Background: Children who are bilingual and have speech sound disorder are likely to be under-referred, possibly due to confusion about typical speech acquisition in bilingual children. Aims: To investigate what is known about the impact of bilingualism on children's acquisition of speech in English to facilitate the identification and treatment of…

  11. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    NARCIS (Netherlands)

    Kriengwatana, B.; Escudero, P.; Kerkhoven, A.H.; ten Cate, C.

    2015-01-01

    Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still…

  12. When speaker identity is unavoidable : Neural processing of speaker identity cues in natural speech

    NARCIS (Netherlands)

    Tuninetti, A.; Chládková, K.; Peter, V.; Schiller, N.O.; Escudero, P.

    2017-01-01

    Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word.

  13. Results of the Sensory Profile in Children with Suspected Childhood Apraxia of Speech

    Science.gov (United States)

    Newmeyer Amy J.; Grether, Sandra; Aylward, Christa; deGrauw, Ton; Akers, Rachel; Grasha, Carol; Ishikawa, Keiko; White, Jaye

    2009-01-01

    Speech-sound disorders are common in preschool-age children, and are characterized by difficulty in the planning and production of speech sounds and their combination into words and sentences. The objective of this study was to review and compare the results of the "Sensory Profile" (Dunn, 1999) in children with a specific type of speech-sound…

  14. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of the Bohunice V1 NPP. This chapter consists of an introductory commentary and four introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  15. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Directory of Open Access Journals (Sweden)

    Sid-Ahmed Selouani

    2009-01-01

    Full Text Available Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  16. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  17. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originating from an identical action. The main purpose was to evaluate if sound effects are… applications of sound design such as advertisement or soundtracks for movies.

  18. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    automatic recognition of speech (ASR). Instead, likely for historical reasons, envelopes of the power spectrum were adopted as the main carrier of linguistic information in ASR. However, the relationships between the phonetic values of sounds and their short-term spectral envelopes are not straightforward. Consequently, this asks for ...

  19. Foreign subtitles help but native-language subtitles harm foreign speech perception

    NARCIS (Netherlands)

    Mitterer, H.A.; McQueen, J.M.

    2009-01-01

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech sounds. We therefore investigated whether…

  20. Perception of environmental sounds by experienced cochlear implant patients

    Science.gov (United States)

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Objectives: Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds, as acoustically and semantically complex stimuli, may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Design: Seventeen experienced postlingually-deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception, and the role of working memory and some basic auditory abilities, were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern and temporal order for tones tests) and a backward digit recall test. Results: The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants and r = 0.48 for vowels. HINT and…
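
    The associations reported here (e.g., r = 0.73 between HINT-in-quiet scores and environmental sound scores) are ordinary Pearson correlations computed across patients; a minimal sketch of that computation:

        import numpy as np

        def pearson_r(x, y):
            """Pearson correlation between two score lists (one entry per patient)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            xc, yc = x - x.mean(), y - y.mean()
            return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))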

  1. Speech and Hearing Science in Ancient India--A Review of Sanskrit Literature.

    Science.gov (United States)

    Savithri, S. R.

    1988-01-01

    The study reviewed Sanskrit books written between 1500 BC and 1904 AD concerning diseases, speech pathology, and audiology. Details are provided of the ancient Indian system of disease classification, the classification of speech sounds, causes of speech disorders, and treatment of speech and language disorders. (DB)

  2. Speech training alters consonant and vowel responses in multiple auditory cortex fields.

    Science.gov (United States)

    Engineer, Crystal T; Rahebi, Kimiya C; Buell, Elizabeth P; Fink, Melyssa K; Kilgard, Michael P

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. The Multisensory Sound Lab: Sounds You Can See and Feel.

    Science.gov (United States)

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  4. Phonetic matching of auditory and visual speech develops during childhood : Evidence from sine-wave speech

    NARCIS (Netherlands)

    Baart, M.; Bortfeld, H.; Vroomen, J.

    2015-01-01

    The correspondence between auditory speech and lip-read information can be detected based on a combination of temporal and phonetic cross-modal cues. Here, we determined the point in developmental time at which children start to effectively use phonetic information to match a speech sound with one of two articulating faces.

  5. Environmental Sound Training in Cochlear Implant Users

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Kuvadia, Sejal; Gygi, Brian

    2015-01-01

    Purpose: The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. Method: Fourteen CI patients with the average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart,…

  6. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Music, speech and environment noise are processed in areas that are anatomically distinct [2]. However, the reasons for this kind of functional organization are not clearly identified. We study the spectral dynamics of different environmental sounds and develop indices to quantify the rate of change of spectral dynamics.
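
    A common way to quantify the rate of change of spectral dynamics is spectral flux, the average distance between successive normalized magnitude spectra. The Python sketch below illustrates that generic idea only; it is not the index developed in this paper, and the frame length, hop size, and window are illustrative choices (it assumes a mono signal array x longer than one frame).

        import numpy as np

        def spectral_flux(x, frame_len=1024, hop=512):
            # Mean L2 distance between successive normalized magnitude spectra.
            # Higher values indicate faster spectral change, one plausible
            # index of a sound's spectral "complexity".
            frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
            mags = []
            for f in frames:
                m = np.abs(np.fft.rfft(f * np.hanning(frame_len)))
                mags.append(m / (np.sum(m) + 1e-12))  # normalize out loudness
            mags = np.array(mags)
            return np.mean(np.linalg.norm(np.diff(mags, axis=0), axis=1))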

  7. Neural Entrainment to Speech Modulates Speech Intelligibility

    OpenAIRE

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and acoustic speech signal, listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has prevented so far clarifying whether speech-brain entrainme...

  8. Speech Development

    Science.gov (United States)

    ... are placed in the mouth, much like an orthodontic retainer. The two most common types are 1) the speech bulb and 2) the palatal lift. The speech bulb is designed to partially close off the space between the soft palate and the throat. The palatal lift appliance serves to lift the soft palate to a ...

  9. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the…
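
    The distinction drawn above between coding speech "directly as a waveform" and "as a set of parameters" can be illustrated by the simplest waveform coder used in digital telephony, 8-bit mu-law companded PCM. The sketch below is a generic textbook illustration of that companding idea, not code from this article.

        import numpy as np

        MU = 255.0  # companding constant used in North American/Japanese telephony

        def mulaw_encode(x):
            # Compress samples in [-1, 1] logarithmically, then quantize to 8 bits.
            y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
            return np.round((y + 1) / 2 * 255).astype(np.uint8)

        def mulaw_decode(code):
            # Invert the quantization and the logarithmic compression.
            y = code.astype(np.float64) / 255 * 2 - 1
            return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

        # Toy usage: a 200 Hz tone sampled at 8 kHz survives coding with small error.
        x = np.sin(2 * np.pi * 200 * np.arange(800) / 8000)
        print("max error:", np.max(np.abs(x - mulaw_decode(mulaw_encode(x)))))

    Logarithmic companding allocates quantizer resolution where speech amplitudes are most common (near zero), which is why an 8-bit companded code approaches the quality of a much finer uniform quantizer.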

  10. Evaluation of PAH depletion of subsurface Exxon Valdez oil residues remaining in Prince William Sound in 2007-2008 and their likely bioremediation potential

    Energy Technology Data Exchange (ETDEWEB)

    Atlas, R. [Louisville Univ., Louisville, KY (United States); Bragg, J.R. [Creative Petroleum Solutions LLC, Houston, TX (United States)

    2009-07-01

    This study examined the extent of oil weathering at the Exxon Valdez oil spill (EVOS) sites and estimated the bioremediation potential for shoreline segments by examining the depletion of total polycyclic aromatic hydrocarbons (PAHs) relative to an estimated applicability threshold of 70 per cent. The distribution of oil was examined by location and current ratios of nitrogen and non-polar oil in order to assess if biodegradation rates were nutrient-limited. The impact of sequestration on the effectiveness of bioremediation was also studied. Results of the study showed that the EVOS residues are patchy and infrequently found on sites that were heavily oiled in 1989. Only 0.4 per cent of the oil originally stranded in 1989 remained. The remaining EVOS residues are sequestered under boulder and cobble armour in areas with limited contact with flowing water. The study also showed that concentrations of nitrogen and dissolved oxygen in pore waters within strata adjacent to the sequestered oil can support biodegradation. Most remaining EVOS residues are highly weathered and biodegraded. It was concluded that nutrients added to the shorelines are unlikely to effectively contact the sequestered oil. 31 refs., 2 tabs., 14 figs.

  11. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing beyond nonlinear techniques, showing how the field exploits heuristic and psychological models of human interaction to succeed in the implementation of socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease; and attempts to improve the effectiveness and performa...

  12. Primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Jung, Youngsin; Duffy, Joseph R; Josephs, Keith A

    2013-09-01

    Primary progressive aphasia is a neurodegenerative syndrome characterized by progressive language dysfunction. The majority of primary progressive aphasia cases can be classified into three subtypes: nonfluent/agrammatic, semantic, and logopenic variants. Each variant presents with unique clinical features, and is associated with distinctive underlying pathology and neuroimaging findings. Unlike primary progressive aphasia, apraxia of speech is a disorder that involves inaccurate production of sounds secondary to impaired planning or programming of speech movements. Primary progressive apraxia of speech is a neurodegenerative form of apraxia of speech, and it should be distinguished from primary progressive aphasia given its discrete clinicopathological presentation. Recently, there have been substantial advances in our understanding of these speech and language disorders. The clinical, neuroimaging, and histopathological features of primary progressive aphasia and apraxia of speech are reviewed in this article. The distinctions among these disorders for accurate diagnosis are increasingly important from a prognostic and therapeutic standpoint.

  13. Training the Brain to Weight Speech Cues Differently: A Study of Finnish Second-language Users of English

    Science.gov (United States)

    Ylinen, Sari; Uther, Maria; Latvala, Antti; Vepsalainen, Sara; Iverson, Paul; Akahane-Yamada, Reiko; Naatanen, Risto

    2010-01-01

    Foreign-language learning is a prime example of a task that entails perceptual learning. The correct comprehension of foreign-language speech requires the correct recognition of speech sounds. The most difficult speech-sound contrasts for foreign-language learners often are the ones that have multiple phonetic cues, especially if the cues are…

  14. THE ONTOGENESIS OF SPEECH DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. E. Braudo

    2017-01-01

    The purpose of this article is to acquaint specialists working with children with developmental disorders with age-related norms for speech development. Many well-known linguists and psychologists have studied speech ontogenesis (logogenesis). Speech is a higher mental function that integrates many functional systems. Speech development in infants during the first months after birth is ensured by innate hearing and the emerging ability to fix the gaze on the face of an adult. Innate emotional reactions also develop during this period, turning into nonverbal forms of communication. At about 6 months a baby starts to pronounce some syllables; at 7–9 months the baby repeats various sound combinations pronounced by adults. At 10–11 months a baby begins to react to words addressed to him or her. The first words usually appear at the age of 1 year; this is the start of the stage of active speech development. At this time it is acceptable if a child confuses or rearranges sounds, or distorts or omits them. By the age of 1.5 years a child begins to understand abstract explanations from adults. Significant vocabulary enlargement occurs between 2 and 3 years; grammatical structures of the language are formed during this period (a child starts to use phrases and sentences). Preschool age (3–7 y. o.) is characterized by incorrect but steadily improving pronunciation of sounds and phonemic perception. The vocabulary increases; abstract speech and retelling are formed. Children over 7 y. o. continue to improve grammar, writing and reading skills. The described stages may not have strict age boundaries, since they depend not only on the environment but also on the child's mental constitution, heredity and character.

  15. Using therapeutic sound with progressive audiologic tinnitus management.

    Science.gov (United States)

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound, broadly categorized as environmental sound, music, and speech, resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  16. Breath sounds

    Science.gov (United States)

    ... described as moist, dry, fine, and coarse. Rhonchi. Sounds that resemble snoring. They occur when air is blocked or air flow becomes rough through the large airways. Stridor. Wheeze-like sound heard when a person breathes. Usually it is ...

  17. Residual signal auto-correlation to evaluate speech in Parkinson’s disease patients

    Directory of Open Access Journals (Sweden)

    José Carlos Pereira

    2006-12-01

    OBJECTIVE: To evaluate the maximum residual signal auto-correlation, also known as pitch amplitude (PA), in patients with Parkinson’s disease (PD). METHOD: The signals of 21 PD patients were compared with those of 15 healthy individuals, stratified according to age and gender. RESULTS: A statistically significant difference in PA was found between the two groups: 0.39 for controls and 0.25 for PD patients; the normal-value threshold was set at 0.3. Among PD patients, 80.77% had PA below 0.3, whereas only 12.28% of controls showed values below 0.3. The scatter diagram for age and gender in PD patients showed p=0.001 and r=0.54. There was no difference in gender or age between the groups. CONCLUSION: The significant difference in PA between PD patients and controls demonstrates the specificity of the analysis. The results point to the need for prospective controlled studies to establish the use and indications of pitch amplitude measurement in the evaluation of speech in Parkinson’s disease patients.
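
    One plausible reading of the measure described above, offered here only as an illustrative sketch: inverse-filter a voiced frame with linear prediction to obtain the residual, autocorrelate the residual, and take the maximum normalized peak in the plausible pitch-lag range as the pitch amplitude (PA). The LPC order, lag range, and framing below are assumptions, not the authors' settings.

        import numpy as np
        from scipy.linalg import solve_toeplitz
        from scipy.signal import lfilter

        def pitch_amplitude(frame, fs, order=12, f_lo=50.0, f_hi=400.0):
            # LPC coefficients from the autocorrelation (Yule-Walker) equations.
            frame = frame - np.mean(frame)
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            a = solve_toeplitz(r[:order], r[1:order + 1])
            # Residual = frame passed through the LPC inverse (whitening) filter.
            residual = lfilter(np.concatenate(([1.0], -a)), [1.0], frame)
            # Normalized autocorrelation of the residual; PA is the maximum peak
            # in the pitch-lag range (frame should span >= two pitch periods).
            ac = np.correlate(residual, residual, mode="full")[len(residual) - 1:]
            ac = ac / (ac[0] + 1e-12)
            lo, hi = int(fs / f_hi), int(fs / f_lo)
            return np.max(ac[lo:hi])

    Under this reading, a strongly periodic (well-voiced) frame yields PA near 1, which is consistent with the reported threshold of 0.3 separating normal voices from the PD group.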

  18. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  19. Bridging the Gap Between Speech and Language: Using Multimodal Treatment in a Child With Apraxia.

    Science.gov (United States)

    Tierney, Cheryl D; Pitterle, Kathleen; Kurtz, Marie; Nakhla, Mark; Todorow, Carlyn

    2016-09-01

    Childhood apraxia of speech is a neurologic speech sound disorder in which children have difficulty constructing words and sounds due to poor motor planning and coordination of the articulators required for speech sound production. We report the case of a 3-year-old boy strongly suspected to have childhood apraxia of speech at 18 months of age who used multimodal communication to facilitate language development throughout his work with a speech language pathologist. In 18 months of an intensive structured program, he exhibited atypical rapid improvement, progressing from having no intelligible speech to achieving age-appropriate articulation. We suspect that early introduction of sign language by family proved to be a highly effective form of language development, that when coupled with intensive oro-motor and speech sound therapy, resulted in rapid resolution of symptoms. Copyright © 2016 by the American Academy of Pediatrics.

  20. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex

    NARCIS (Netherlands)

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, R.; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain…

  1. Getting the Cocktail Party Started: Masking Effects in Speech Perception.

    Science.gov (United States)

    Evans, Samuel; McGettigan, Carolyn; Agnew, Zarinah K; Rosen, Stuart; Scott, Sophie K

    2016-03-01

    Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous fMRI, while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream and that individuals who perform better in speech in noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right lateralized frontal regions consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.

  2. Fluid Sounds

    DEFF Research Database (Denmark)

    …and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation of sound, site and the social, and how the spatial, the visual, and the bodily interact in sonic environments, how they are constructed and how they are entangled in other practices. With the Seismograf special issue Fluid Sounds, we bring this knowledge into the dissemination of audio research itself…

  3. Neural entrainment to speech modulates speech intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Başkent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and…

  4. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and…

  5. The development of speech production in children with cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Chapman, Kathy

    2012-01-01

    The purpose of this chapter is to provide an overview of speech development of children with cleft palate +/- cleft lip. The chapter will begin with a discussion of the impact of clefting on speech. Next, we will provide a brief description of those factors impacting speech development for this population of children. Finally, research examining various aspects of speech development of infants and young children with cleft palate (birth to age five) will be reviewed. This final section will be organized by typical stages of speech sound development (e.g., prespeech, the early word stage, and systematic phonology) and will include a summary of typical characteristics for each stage.

  6. The development of speech production in children with cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Chapman, Kathy

    2012-01-01

    The purpose of this chapter is to provide an overview of speech development of children with cleft palate +/- cleft lip. The chapter will begin with a discussion of the impact of clefting on speech. Next, we will provide a brief description of those factors impacting speech development for this population of children. Finally, research examining various aspects of speech development of infants and young children with cleft palate (birth to age five) will be reviewed. This final section will be organized by typical stages of speech sound development (e.g., prespeech, the early word stage, and systematic phonology) and will include a summary of typical characteristics for each stage.

  7. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system are open to such contextual influence. I investigated the time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch; a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context; and a cross-modal paradigm, in which auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream.

  8. Brief periods of auditory perceptual training can determine the sensory targets of speech motor learning.

    Science.gov (United States)

    Lametti, Daniel R; Krol, Sonia A; Shiller, Douglas M; Ostry, David J

    2014-07-01

    The perception of speech is notably malleable in adults, yet alterations in perception seem to have little impact on speech production. However, we hypothesized that speech perceptual training might immediately influence speech motor learning. To test this, we paired a speech perceptual-training task with a speech motor-learning task. Subjects performed a series of perceptual tests designed to measure and then manipulate the perceptual distinction between the words head and had. Subjects then produced head with the sound of the vowel altered in real time so that they heard themselves through headphones producing a word that sounded more like had. In support of our hypothesis, the amount of motor learning in response to the voice alterations depended on the perceptual boundary acquired through perceptual training. The studies show that plasticity in adults' speech perception can have immediate consequences for speech production in the context of speech learning. © The Author(s) 2014.

  9. Speech recognition using articulatory and excitation source features

    CERN Document Server

    Rao, K Sreenivasa

    2017-01-01

    This book discusses the contribution of articulatory and excitation source information in discriminating sound units. The authors focus on excitation source component of speech -- and the dynamics of various articulators during speech production -- for enhancement of speech recognition (SR) performance. Speech recognition is analyzed for read, extempore, and conversation modes of speech. Five groups of articulatory features (AFs) are explored for speech recognition, in addition to conventional spectral features. Each chapter provides the motivation for exploring the specific feature for SR task, discusses the methods to extract those features, and finally suggests appropriate models to capture the sound unit specific knowledge from the proposed features. The authors close by discussing various combinations of spectral, articulatory and source features, and the desired models to enhance the performance of SR systems.

  10. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  11. Apraxia of Speech

    Science.gov (United States)

    ... here Home » Health Info » Voice, Speech, and Language Apraxia of Speech On this page: What is apraxia of speech? ... additional information about apraxia of speech? What is apraxia of speech? Apraxia of speech (AOS)—also known as acquired ...

  12. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2010-01-01

    In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, respectively, were modestly and substantially more prevalent in participants with ASD than reported population estimates. Double dissociations in speech, prosody, and voice impairments in ASD were interpreted as consistent with a speech attunement framework, rather than with the motor speech impairments that define CAS. Key Words: apraxia, dyspraxia, motor speech disorder, speech sound disorder PMID:20972615

  13. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  14. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.

  15. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red…

  16. Group delay functions and its applications in speech technology

    Indian Academy of Sciences (India)

    Keywords: Fourier transform phase; group delay functions; feature extraction from phase; feature switching; mutual information; K-L divergence. Speech is the output of a quasistationary process, since the characteristics of speech change continuously with time. As the ear perceives frequencies to understand sound, …
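
    The group delay function named in these keywords is the negative derivative of the Fourier transform phase, tau(w) = -d(theta)/dw. It can be computed without explicit phase unwrapping via the standard identity tau(w) = Re{X(w) Y*(w)} / |X(w)|^2, where Y is the DFT of n*x[n]. A minimal sketch of that textbook identity (not code from this article); the FFT size is an illustrative choice.

        import numpy as np

        def group_delay(x, n_fft=512):
            # tau(w) = Re{X(w) * conj(Y(w))} / |X(w)|^2, with Y = DFT{n * x[n]};
            # this avoids differentiating (and unwrapping) the phase directly.
            # The result is in samples.
            n = np.arange(len(x))
            X = np.fft.rfft(x, n_fft)
            Y = np.fft.rfft(n * x, n_fft)
            return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + 1e-12)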

  17. Indian accent text-to-speech system for web browsing

    Indian Academy of Sciences (India)


    3.1 Background. Text-to-speech (TTS) conversion has to be performed in two steps: (a) text-to-phoneme conversion and (b) phoneme-to-speech conversion. In the second step, we … forms with memory constraints, such as DOS. … for noise-like sounds, (e) five amplitudes for various bands of energy in the noise spectrum, …

  18. Auditory feedback perturbation in children with developmental speech disorders

    NARCIS (Netherlands)

    Terband, H.R.; van Brenk, F.J.; van Doornik-van der Zee, J.C.

    2014-01-01

    Background/purpose: Several studies indicate a close relation between auditory and speech motor functions in children with speech sound disorders (SSD). The aim of this study was to investigate the ability to compensate and adapt for perturbed auditory feedback in children with SSD compared to…

  19. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
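
    As a toy illustration of the pipeline described above (feature extraction followed by a simple pattern classifier), the sketch below pairs two coarse features inspired by the named set, envelope-modulation depth and spectral centroid, with a minimum-distance classifier. The feature choices, class handling, and interface are stand-ins, not the authors' hearing-aid system.

        import numpy as np

        def features(x, fs):
            # Two coarse auditory-scene features: amplitude-modulation depth
            # of the smoothed envelope, and the normalized spectral centroid.
            env = np.convolve(np.abs(x), np.ones(256) / 256, mode="same")
            mod_depth = np.std(env) / (np.mean(env) + 1e-12)
            spec = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12) / (fs / 2)
            return np.array([mod_depth, centroid])

        class MinimumDistanceClassifier:
            # Assign a sound to the class whose mean feature vector is nearest.
            def fit(self, labelled):
                # labelled: dict mapping class name -> list of feature vectors
                self.means = {c: np.mean(v, axis=0) for c, v in labelled.items()}
            def predict(self, f):
                return min(self.means, key=lambda c: np.linalg.norm(f - self.means[c]))

    A real system of the kind described would add many more features (harmonicity, onsets, rhythm) and stronger classifiers (Bayes, neural network, hidden Markov model), as the abstract notes.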

  20. Perceptual centres in speech - an acoustic analysis

    Science.gov (United States)

    Scott, Sophie Kerttu

    Perceptual centres, or P-centres, represent the perceptual moments of occurrence of acoustic signals - the 'beat' of a sound. P-centres underlie the perception and production of rhythm in perceptually regular speech sequences. P-centres have been modelled both in speech and non-speech (music) domains. The three aims of this thesis were (a) to test current P-centre models to determine which best accounted for the experimental data; (b) to identify a candidate parameter to map P-centres onto (a local approach), as opposed to the previous global models, which rely upon the whole signal to determine the P-centre; and (c) to develop a model of P-centre location which could be applied to speech and non-speech signals. The first aim was investigated by a series of experiments examining (a) whether different models could account for variation between speakers, (b) whether rendering the amplitude-time plot of a speech signal affects the P-centre of the signal, and (c) whether increasing the amplitude at the offset of a speech signal alters P-centres in the production and perception of speech. The second aim was pursued by (a) manipulating the rise time of different speech signals to determine whether the P-centre was affected, and whether the type of speech sound ramped affected the P-centre shift; (b) manipulating the rise time and decay time of a synthetic vowel to determine whether the onset alteration had more effect on the P-centre than the offset manipulation; and (c) testing whether the duration of a vowel affected the P-centre if other attributes (amplitude, spectral content) were held constant. The third aim - modelling P-centres - was based on these results. The Frequency-dependent Amplitude Increase Model of P-centre location (FAIM) was developed using a modelling protocol, the APU GammaTone Filterbank, and the speech from different speakers. The P-centres of the stimuli corpus were highly predicted by attributes of…

  1. Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…

  2. Sixteen-Month-Old Infants' Segment Words from Infant- and Adult-Directed Speech

    Science.gov (United States)

    Mani, Nivedita; Pätzold, Wiebke

    2016-01-01

    One of the first challenges facing the young language learner is the task of segmenting words from a natural language speech stream, without prior knowledge of how these words sound. Studies with younger children find that children find it easier to segment words from fluent speech when the words are presented in infant-directed speech, i.e., the…

  3. Pitch features of environmental sounds

    Science.gov (United States)

    Yang, Ming; Kang, Jian

    2016-07-01

    A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability of pitch features that are often used in music analysis and their algorithms to environmental sounds. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system and simplified algorithms for practical applications in the areas of music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
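
    The three parameters named above (pitch value, pitch strength, and percentage of audible pitches over time) can be approximated with frame-wise normalized autocorrelation, as in the sketch below. The frame settings, search range, and audibility threshold are illustrative assumptions, not the paper's algorithms.

        import numpy as np

        def pitch_features(x, fs, frame=2048, hop=1024,
                           f_lo=50, f_hi=2000, thresh=0.4):
            # Per-frame pitch via normalized autocorrelation. Returns the median
            # pitch value (Hz), the median pitch strength (peak height), and the
            # percentage of frames whose strength exceeds an audibility threshold.
            values, strengths = [], []
            lo, hi = int(fs / f_hi), int(fs / f_lo)
            for i in range(0, len(x) - frame, hop):
                seg = x[i:i + frame] - np.mean(x[i:i + frame])
                ac = np.correlate(seg, seg, mode="full")[frame - 1:]
                ac = ac / (ac[0] + 1e-12)
                lag = lo + np.argmax(ac[lo:hi])
                strengths.append(ac[lag])
                values.append(fs / lag)
            strengths = np.array(strengths)
            pct_audible = 100.0 * np.mean(strengths > thresh)
            return np.median(values), np.median(strengths), pct_audible

    On such measures, water and wind sounds would show low pitch values and strengths, while birdsong would show high values for both, matching the categories discussed in the abstract.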

  4. Speech Enhancement

    DEFF Research Database (Denmark)

    Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    …and their performance bounded and assessed in terms of noise reduction and speech distortion. The book shows how various filter designs can be obtained in this framework, including the maximum SNR, Wiener, LCMV, and MVDR filters, and how these can be applied in various contexts, like in single-channel and multichannel…

  5. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important in different fields of research, engineering, and diagnostics for quantifying very different phenomena: the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these.

  6. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided…

  7. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided…

  8. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided…

  9. Beyond Words: How Humans Communicate Through Sound.

    Science.gov (United States)

    Kraus, Nina; Slater, Jessica

    2016-01-01

    Every day we communicate using complex linguistic and musical systems, yet these modern systems are the product of a much more ancient relationship with sound. When we speak, we communicate not only with the words we choose, but also with the patterns of sound we create and the movements that create them. From the natural rhythms of speech, to the precise timing characteristics of a consonant, these patterns guide our daily communication. By examining the principles of information processing that are common to speech and music, we peel back the layers to reveal the biological foundations of human communication through sound. Further, we consider how the brain's response to sound is shaped by experience, such as musical expertise, and implications for the treatment of communication disorders.

  10. Sound knowledge

    DEFF Research Database (Denmark)

    Kauffmann, Lene Teglhus

    The aim of the research is to investigate what is considered to ‘work as evidence’ in health promotion and how the ‘evidence discourse’ influences social practices in policymaking and in research. From investigating knowledge practices in the field of health promotion, I develop the concept of ‘sound knowledge’. Sound knowledge is an approach to knowledge that takes into account the reflexive considerations of actors, in policymaking processes as well as in research, about what knowledge is. Seeing knowledge as sound makes connections between different ideas, concepts and ideologies explicit. Furthermore, in relation to an anthropology of knowledge, sound knowledge also offers a reconsideration of the way anthropologists study knowledge, as it specifies that studying knowledge for anthropologists means studying what people consider as knowledge, in what circumstances…

  11. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds since it can enhance … models of complex modulation processing in the human auditory system.

  12. Processing of spatial sounds in the impaired auditory system

    DEFF Research Database (Denmark)

    Arweiler, Iris

    …of two such cues on speech intelligibility was studied. First, the benefit from early reflections (ERs) in a room was determined using a virtual auditory environment. ERs were found to be useful for speech intelligibility, but to a smaller extent than the direct sound (DS). The benefit was quantified … implications for speech perception models and the development of compensation strategies in future generations of hearing instruments.

  13. 78 FR 49717 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    …Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities…

  14. Oral breathing and speech disorders in children.

    Science.gov (United States)

    Hitos, Silvia F; Arakaki, Renata; Solé, Dirceu; Weckx, Luc L M

    2013-01-01

    To assess speech alterations in mouth-breathing children, and to correlate them with the respiratory type, etiology, gender, and age. A total of 439 mouth-breathers were evaluated, aged between 4 and 12 years. The presence of speech alterations in children older than 5 years was considered delayed speech development. The observed alterations were tongue interposition (TI), frontal lisp (FL), articulatory disorders (AD), sound omissions (SO), and lateral lisp (LL). The etiology of mouth breathing, gender, age, respiratory type, and speech disorders were correlated. Speech alterations were diagnosed in 31.2% of patients, unrelated to the respiratory type: oral or mixed. Increased frequency of articulatory disorders and more than one speech disorder were observed in males. TI was observed in 53.3% patients, followed by AD in 26.3%, and by FL in 21.9%. The co-occurrence of two or more speech alterations was observed in 24.8% of the children. Mouth breathing can affect speech development, socialization, and school performance. Early detection of mouth breathing is essential to prevent and minimize its negative effects on the overall development of individuals. Copyright © 2013 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  15. OLIVE: Speech-Based Video Retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, Jean-Luc; den Hartog, Jurgen; den Hartog, Jeremy; Netter, Klaus

    1999-01-01

    This paper describes the Olive project, which aims to support automated indexing of video material by use of human language technologies. Olive is making use of speech recognition to automatically derive transcriptions of the sound tracks, generating time-coded linguistic elements which serve as the…

  16. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2000-10-19

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  17. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2004-04-20

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  18. Phonetic matching of auditory and visual speech develops during childhood: evidence from sine-wave speech.

    Science.gov (United States)

    Baart, Martijn; Bortfeld, Heather; Vroomen, Jean

    2015-01-01

    The correspondence between auditory speech and lip-read information can be detected based on a combination of temporal and phonetic cross-modal cues. Here, we determined the point in developmental time at which children start to effectively use phonetic information to match a speech sound with one of two articulating faces. We presented 4- to 11-year-olds (N=77) with three-syllabic sine-wave speech replicas of two pseudo-words that were perceived as non-speech and asked them to match the sounds with the corresponding lip-read video. At first, children had no phonetic knowledge about the sounds, and matching was thus based on the temporal cues that are fully retained in sine-wave speech. Next, we trained all children to perceive the phonetic identity of the sine-wave speech and repeated the audiovisual (AV) matching task. Only at around 6.5 years of age did the benefit of having phonetic knowledge about the stimuli become apparent, thereby indicating that AV matching based on phonetic cues presumably develops more slowly than AV matching based on temporal cues. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: first, the filtered responses should generate an acoustic separation between the control regions; secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound…
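
    A common formulation behind such filter designs is frequency-domain pressure matching with a dark-zone penalty and an L2 (effort) regularizer on the loudspeaker weights; the regularizer is one way to realize the kind of weighted L2-norm penalty that trades separation against filter ringing. The sketch below solves this regularized least-squares problem for one frequency bin; the transfer matrices and weights are placeholders, not the paper's method.

        import numpy as np

        def sound_zone_filters(G_bright, G_dark, p_target, kappa=1.0, lam=1e-3):
            # Loudspeaker weights w minimizing
            #   ||G_bright w - p_target||^2 + kappa ||G_dark w||^2 + lam ||w||^2,
            # i.e. reproduce the target pressure in the bright zone while
            # penalizing energy in the dark zone; lam is the effort penalty.
            A = (G_bright.conj().T @ G_bright
                 + kappa * G_dark.conj().T @ G_dark
                 + lam * np.eye(G_bright.shape[1]))
            return np.linalg.solve(A, G_bright.conj().T @ p_target)

        # Toy usage: 8 loudspeakers, 16 control points per zone, one frequency bin.
        rng = np.random.default_rng(0)
        G_b = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
        G_d = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
        w = sound_zone_filters(G_b, G_d, np.ones(16, dtype=complex))
        print("dark-zone residual energy:", np.linalg.norm(G_d @ w) ** 2)

    Increasing lam tames the filters (less ringing) at the cost of acoustic separation, which is exactly the tradeoff the paper investigates.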

  20. Theater, Speech, Light

    Directory of Open Access Journals (Sweden)

    Primož Vitez

    2011-07-01

    This paper considers a medium as a substantial translator: an intermediary between the producers and receivers of a communicational act. A medium is a material support to the spiritual potential of human sources. If the medium is a support to meaning, then the relations between different media can be interpreted as a space for making sense of these meanings, a generator of sense: it means that the interaction of substances creates an intermedial space that conceives of a contextualization of specific meaningful elements in order to combine them into the sense of a communicational intervention. The theater itself is multimedia. A theatrical event is a communicational act based on a combination of several autonomous structures: text, scenography, light design, sound, directing, literary interpretation, speech, and, of course, the one that contains all of these: the actor in a human body. The actor is a physical and symbolic, anatomic, and emblematic figure in the synesthetic theatrical act because he reunites in his body all the essential principles and components of theater itself. The actor is an audio-visual being, made of kinetic energy, speech, and human spirit. The actor’s body, as a source, instrument, and goal of the theater, becomes an intersection of sound and light. However, theater as intermedial art is no intermediate practice; it must be seen as interposing bodies between conceivers and receivers, between authors and auditors. The body is not self-evident; the body in contemporary art forms is being redefined as a privilege. The art needs bodily dimensions to explore the medial qualities of substances: because it is alive, it returns to studying biology. The fact that theater is an archaic art form is also the purest promise of its future.

  1. WORD BASED TAMIL SPEECH RECOGNITION USING TEMPORAL FEATURE BASED SEGMENTATION

    Directory of Open Access Journals (Sweden)

    A. Akila

    2015-05-01

    Speech recognition systems require segmentation of the speech waveform into fundamental acoustic units. Segmentation is the process of decomposing the speech signal into smaller units. Speech segmentation can be done using wavelets, fuzzy methods, Artificial Neural Networks, or Hidden Markov Models. It breaks a continuous stream of sound into basic units such as words, phonemes, or syllables that can be recognized. Segmentation can also be used to distinguish different types of audio signals in large amounts of audio data, often referred to as audio classification. Speech segmentation methods can be divided into two categories based on whether the algorithm uses prior knowledge of the data to process the speech: blind segmentation and aided segmentation. The major issues with connected speech recognition algorithms are that the vocabulary grows with the variation in word combinations in connected speech, and that finding the best match for a given test pattern becomes correspondingly complex. To overcome these issues, the connected speech has to be segmented into words using attributes of the speech signal. A methodology using the temporal feature Short Term Energy is proposed and compared with an existing algorithm, called Dynamic Thresholding segmentation, which uses the spectrogram image of the connected speech for segmentation.
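
    A minimal sketch of the Short Term Energy idea described above: compute frame-wise energy, threshold it, and take runs of high-energy frames as word candidates separated by low-energy pauses. The frame sizes and the relative threshold rule are illustrative, not the paper's settings.

        import numpy as np

        def ste_segment(x, fs, frame_ms=25, hop_ms=10, rel_thresh=0.1):
            # Segment connected speech into word-like islands of high
            # short-term energy; returns (start, end) sample indices.
            frame = int(fs * frame_ms / 1000)
            hop = int(fs * hop_ms / 1000)
            energy = np.array([np.sum(x[i:i + frame] ** 2)
                               for i in range(0, len(x) - frame, hop)])
            active = energy > rel_thresh * np.max(energy)  # word frames
            segments, start = [], None
            for i, a in enumerate(active):
                if a and start is None:
                    start = i
                elif not a and start is not None:
                    segments.append((start * hop, i * hop + frame))
                    start = None
            if start is not None:
                segments.append((start * hop, len(x)))
            return segments

    In practice such a segmenter needs smoothing of the energy contour and a minimum-pause rule so that intra-word dips are not mistaken for word boundaries.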

  2. Phoneme Compression: processing of the speech signal and effects on speech intelligibility in hearing-Impaired listeners

    NARCIS (Netherlands)

    A. Goedegebure (Andre)

    2005-01-01

    Hearing-aid users often continue to have problems with poor speech understanding in difficult acoustical conditions. Another generally acknowledged problem is that certain sounds become too loud whereas other sounds are still not audible. Dynamic range compression is a signal processing…
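
    The dynamic range compression mentioned above can be illustrated with a basic static compressor: levels above a threshold are scaled down by a ratio, so loud sounds are attenuated while soft sounds stay audible. This generic sketch is not the phoneme-compression scheme studied in the thesis; the threshold, ratio, and detector time constant are illustrative.

        import numpy as np

        def compress(x, fs, threshold_db=-30.0, ratio=3.0, smooth_ms=10.0):
            # Static dynamic range compressor: gain reduction above threshold_db
            # at the given ratio, with a one-pole envelope follower as detector.
            alpha = np.exp(-1.0 / (fs * smooth_ms / 1000))
            env, levels = 0.0, np.empty(len(x))
            for i, s in enumerate(x):
                env = alpha * env + (1 - alpha) * abs(s)
                levels[i] = env
            level_db = 20 * np.log10(levels + 1e-9)
            over = np.maximum(level_db - threshold_db, 0.0)
            gain_db = -over * (1 - 1 / ratio)  # reduce the excess by the ratio
            return x * 10 ** (gain_db / 20)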

  3. Top-down modulation of auditory processing: effects of sound context, musical expertise and attentional focus.

    Science.gov (United States)

    Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D

    2009-10-01

    By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.

  4. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  5. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement from the fundamental theoretical background to practical applications of the measurement technique.

  6. Residual deposits (residual soil)

    International Nuclear Information System (INIS)

    Khasanov, A.Kh.

    1988-01-01

    Residual soil deposits are accumulations of newly formed ore minerals at the earth's surface that arise as a result of chemical decomposition of rocks. As is well known, in the hypergene zone rocks undergo chemical weathering under the influence of various agents (water, carbonic acid, organic acids, oxygen, microbial activity). The formation of residual soil deposits depends on a complex of geological and climatic factors as well as on the composition and the physical and chemical properties of the initial rocks

  7. A Method of Speech Periodicity Enhancement Using Transform-domain Signal Decomposition.

    Science.gov (United States)

    Huang, Huang; Lee, Tan; Kleijn, W Bastiaan; Kong, Ying-Yee

    2015-03-01

    Periodicity is an important property of speech signals. It is the basis of the signal's fundamental frequency and the pitch of voice, which is crucial to speech communication. This paper presents a novel framework of periodicity enhancement for noisy speech. The enhancement is applied to the linear prediction residual of speech. The residual signal goes through a constant-pitch time warping process and two sequential lapped-frequency transforms, by which the periodic component is concentrated in certain transform coefficients. By emphasizing the respective transform coefficients, periodicity enhancement of noisy residual signal is achieved. The enhanced residual signal and estimated linear prediction filter parameters are used to synthesize the output speech. An adaptive algorithm is proposed for adjusting the weights for the periodic and aperiodic components. Effectiveness of the proposed approach is demonstrated via experimental evaluation. It is observed that harmonic structure of the original speech could be properly restored to improve the perceptual quality of enhanced speech.
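
    The full enhancement chain (constant-pitch time warping followed by two lapped-frequency transforms) is too long to sketch here, but its starting point, the linear prediction residual, is compact. The following is a minimal sketch assuming the standard autocorrelation method; the prediction order and function names are illustrative, not the authors'.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lp_residual(frame, order=16):
    """Return the LP residual of one frame plus the prediction
    coefficients a[1..order] (autocorrelation method: solve the
    Toeplitz normal equations R a = r)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    # e[n] = x[n] - sum_k a_k x[n-k], i.e. filtering by A(z) = 1 - sum a_k z^-k
    residual = lfilter(np.concatenate(([1.0], -a)), [1.0], frame)
    return residual, a
```

    Periodicity enhancement would then operate on this residual; resynthesis filters the modified residual back through the all-pole filter 1/A(z).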

  8. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  9. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  10. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... amends telecommunications relay services (TRS) mandatory minimum standards applicable to Speech- to...

  11. Recovering With Acquired Apraxia of Speech: The First 2 Years.

    Science.gov (United States)

    Haley, Katarina L; Shafer, Jennifer N; Harmon, Tyson G; Jacks, Adam

    2016-12-01

    This study was intended to document speech recovery for 1 person with acquired apraxia of speech quantitatively and on the basis of her lived experience. The second author sustained a traumatic brain injury that resulted in acquired apraxia of speech. Over a 2-year period, she documented her recovery through 22 video-recorded monologues. We analyzed these monologues using a combination of auditory perceptual, acoustic, and qualitative methods. Recovery was evident for all quantitative variables examined. For speech sound production, the recovery was most prominent during the first 3 months, but slower improvement was evident for many months. Measures of speaking rate, fluency, and prosody changed more gradually throughout the entire period. A qualitative analysis of topics addressed in the monologues was consistent with the quantitative speech recovery and indicated a subjective dynamic relationship between accuracy and rate, an observation that several factors made speech sound production variable, and a persisting need for cognitive effort while speaking. Speech features improved over an extended time, but the recovery trajectories differed, indicating dynamic reorganization of the underlying speech production system. The relationship among speech dimensions should be examined in other cases and in population samples. The combination of quantitative and qualitative analysis methods offers advantages for understanding clinically relevant aspects of recovery.

  12. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    Full Text Available This paper presents a method of speech recognition by pattern recognition techniques. Learning consists in determining the unique characteristics of a word (cepstral coefficients), eliminating those characteristics that differ from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
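
    The feature-extraction chain listed above (window, Fourier transform, log magnitude, cepstrum) can be sketched in a few lines. This is a generic real-cepstrum computation consistent with the description, not the author's code; the coefficient count and function name are assumptions.

```python
import numpy as np

def cepstral_coefficients(frame, n_coeffs=13):
    """Hamming window -> magnitude spectrum -> log -> inverse FFT."""
    windowed = frame * np.hamming(len(frame))
    log_mag = np.log(np.abs(np.fft.rfft(windowed)) + 1e-12)  # avoid log(0)
    cepstrum = np.fft.irfft(log_mag)
    return cepstrum[:n_coeffs]   # low quefrencies carry the word template
```

    A dictionary entry for a word would then be the per-frame stack of such vectors, matched against test utterances at recognition time.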

  13. Prevalence of high frequency hearing loss consistent with noise exposure among people working with sound systems and general population in Brazil: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    Trevisani Virgínia FM

    2008-05-01

    Full Text Available Abstract Background Music is ever present in our daily lives, establishing a link between humans and the arts through the senses and pleasure. Sound technicians are the link between musicians and audiences or consumers. Recently, general concern has arisen regarding occurrences of hearing loss induced by noise from excessively amplified sound-producing activities within leisure and professional environments. Sound technicians' activities expose them to the risk of hearing loss, and consequently put at risk their quality of life, the quality of the musical product and consumers' hearing. The aim of this study was to measure the prevalence of high frequency hearing loss consistent with noise exposure among sound technicians in Brazil and compare this with a control group without occupational noise exposure. Methods This was a cross-sectional study comparing 177 participants in two groups: 82 sound technicians and 95 controls (non-sound technicians). A questionnaire on music listening habits and associated complaints was administered, and data were gathered regarding the professionals' numbers of working hours per day and both groups' hearing complaints and presence of tinnitus. The participants' ear canals were visually inspected using an otoscope. Hearing assessments were performed (tonal and speech audiometry) using a portable digital AD 229 E audiometer funded by FAPESP. Results There was no statistically significant difference between the sound technicians and controls regarding age and gender. Thus, the study sample was homogenous and would be unlikely to lead to bias in the results. A statistically significant difference in hearing loss was observed between the groups: 50% among the sound technicians and 10.5% among the controls. The difference could be attributed to high sound levels. Conclusion The sound technicians presented a higher prevalence of high frequency hearing loss consistent with noise exposure than did the general population, although

  14. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
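
    A broadband version of this modulation-spectrum analysis is easy to reproduce. The sketch below squares the Hilbert envelope, low-passes and downsamples it, and measures the 0.25-32 Hz band; the actual study applies a cochlear filterbank before this stage, so treat this as a simplified approximation with assumed parameters and names.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, welch

def modulation_spectrum(x, sr, env_cut=64.0):
    """Power spectrum of the slow intensity-envelope modulations."""
    envelope = np.abs(hilbert(x)) ** 2           # intensity-like envelope
    b, a = butter(4, env_cut / (sr / 2.0))       # low-pass before decimating
    env = filtfilt(b, a, envelope)
    dec = max(1, int(sr // (4 * env_cut)))       # keep margin above env_cut
    env = env[::dec]
    freqs, power = welch(env, fs=sr / dec, nperseg=min(len(env), 2048))
    band = (freqs >= 0.25) & (freqs <= 32.0)
    return freqs[band], power[band]              # peak near 5 Hz for speech
```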

  15. Assessment of language impact to speech privacy in closed offices

    Science.gov (United States)

    Ma, Yong Ma; Caswell, Daryl J.; Dai, Liming; Goodchild, Jim T.

    2002-11-01

    Speech privacy is the converse of speech intelligibility and can be assessed with the same predictors. Based on existing standards and research to date, most objective assessments of speech privacy and speech intelligibility, such as the articulation index (AI) or speech intelligibility index (SII), the speech transmission index (STI), and the sound early-to-late ratio (C50), are validated against subjective measurements. However, these subjective measurements are based on studies of English or other Western languages; the impact of language on speech privacy has been overlooked. It is therefore necessary to study the impact of different languages and accents on speech privacy in multicultural environments. In this study, subjective measurements were conducted in closed office environments using English and a tonal language, Mandarin, and the impact of language on speech privacy was investigated in detail for the two languages. The results reveal significant variations in speech privacy evaluations when different languages are used. The subjective results were also compared with objective measurements employing articulation indices.
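
    Of the objective predictors named above, the early-to-late ratio C50 is the simplest to compute from a measured room impulse response. A minimal sketch of the standard definition, with assumed variable names:

```python
import numpy as np

def c50(h, sr):
    """Early-to-late energy ratio in dB: energy within 50 ms of the
    direct sound versus all later energy. Higher C50 means clearer
    speech (and, conversely, less speech privacy in adjacent spaces)."""
    h2 = h.astype(float) ** 2
    onset = int(np.argmax(np.abs(h)))         # direct-sound arrival
    split = onset + int(0.050 * sr)
    return 10.0 * np.log10(h2[onset:split].sum() / (h2[split:].sum() + 1e-12))
```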

  16. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict the user's perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the system, thus validating the model on a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  17. Musical background not associated with self-perceived hearing performance or speech perception in postlingual cochlear-implant users

    NARCIS (Netherlands)

    Fuller, Christina; Free, Rolien; Maat, Bert; Baskent, Deniz

    In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech

  18. Sound settlements

    DEFF Research Database (Denmark)

    Duelund Mortensen, Peder

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  19. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  20. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects' received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  1. Plasticity in the human speech motor system drives changes in speech perception.

    Science.gov (United States)

    Lametti, Daniel R; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M; Ostry, David J

    2014-07-30

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects' received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. Copyright © 2014 the authors.

  2. Rate and rhythm control strategies for apraxia of speech in nonfluent primary progressive aphasia.

    Science.gov (United States)

    Beber, Bárbara Costa; Berbert, Monalise Costa Batista; Grawer, Ruth Siqueira; Cardoso, Maria Cristina de Almeida Freitas

    2018-01-01

    The nonfluent/agrammatic variant of primary progressive aphasia is characterized by apraxia of speech and agrammatism. Apraxia of speech limits patients' communication due to slow speaking rate, sound substitutions, articulatory groping, false starts and restarts, segmentation of syllables, and increased difficulty with increasing utterance length. Speech and language therapy is known to benefit individuals with apraxia of speech due to stroke, but little is known about its effects in primary progressive aphasia. This is a case report of a 72-year-old, illiterate housewife, who was diagnosed with nonfluent primary progressive aphasia and received speech and language therapy for apraxia of speech. Rate and rhythm control strategies for apraxia of speech were trained to improve initiation of speech. We discuss the importance of these strategies to alleviate apraxia of speech in this condition and the future perspectives in the area.

  3. [Evolution of speech and hearing].

    Science.gov (United States)

    Pitkäranta, Anne

    2009-01-01

    Spoken language proper developed only approximately 200,000 to 100,000 years ago. As a result of natural selection, man has developed hearing that is most sensitive in the frequency region of 200 to 4000 Hz, corresponding to that of speech sounds. Functional hearing has been one of the prerequisites for the development of speech, although according to current opinion language itself may have evolved by mimicking gestures with the so-called mirror neurons. Thanks to hearing, gesticulation was no longer necessary, and the hands became available for other purposes.

  4. Acoustic assessment of speech privacy curtains in two nursing units.

    Science.gov (United States)

    Pope, Diana S; Miller-Klein, Erik T

    2016-01-01

    Hospitals have complex soundscapes that create challenges to patient care. Extraneous noise and high reverberation rates impair speech intelligibility, which leads to raised voices. In an unintended spiral, the increasing noise may result in diminished speech privacy, as people speak loudly to be heard over the din. The products available to improve hospital soundscapes include construction materials that absorb sound (acoustic ceiling tiles, carpet, wall insulation) and reduce reverberation rates. Enhanced privacy curtains are now available and offer potential for a relatively simple way to improve speech privacy and speech intelligibility by absorbing sound at the hospital patient's bedside. Acoustic assessments were performed over 2 days on two nursing units with a similar design in the same hospital. One unit was built with the 1970s' standard hospital construction and the other was newly refurbished (2013) with sound-absorbing features. In addition, we determined the effect of an enhanced privacy curtain versus standard privacy curtains using acoustic measures of speech privacy and speech intelligibility indexes. Privacy curtains provided auditory protection for the patients. In general, that protection was increased by the use of enhanced privacy curtains. On average, the enhanced curtain improved sound absorption from 20% to 30%; however, there was considerable variability, depending on the configuration of the rooms tested. Enhanced privacy curtains provide measurable improvement to the acoustics of patient rooms but cannot overcome larger acoustic design issues. To shorten reverberation time, additional absorption, and compact and more fragmented nursing unit floor plate shapes should be considered.

  5. Acoustic assessment of speech privacy curtains in two nursing units

    Directory of Open Access Journals (Sweden)

    Diana S Pope

    2016-01-01

    Full Text Available Hospitals have complex soundscapes that create challenges to patient care. Extraneous noise and high reverberation rates impair speech intelligibility, which leads to raised voices. In an unintended spiral, the increasing noise may result in diminished speech privacy, as people speak loudly to be heard over the din. The products available to improve hospital soundscapes include construction materials that absorb sound (acoustic ceiling tiles, carpet, wall insulation) and reduce reverberation rates. Enhanced privacy curtains are now available and offer potential for a relatively simple way to improve speech privacy and speech intelligibility by absorbing sound at the hospital patient's bedside. Acoustic assessments were performed over 2 days on two nursing units with a similar design in the same hospital. One unit was built with the 1970s' standard hospital construction and the other was newly refurbished (2013) with sound-absorbing features. In addition, we determined the effect of an enhanced privacy curtain versus standard privacy curtains using acoustic measures of speech privacy and speech intelligibility indexes. Privacy curtains provided auditory protection for the patients. In general, that protection was increased by the use of enhanced privacy curtains. On average, the enhanced curtain improved sound absorption from 20% to 30%; however, there was considerable variability, depending on the configuration of the rooms tested. Enhanced privacy curtains provide measurable improvement to the acoustics of patient rooms but cannot overcome larger acoustic design issues. To shorten reverberation time, additional absorption, and compact and more fragmented nursing unit floor plate shapes should be considered.

  6. Hemispheric asymmetries in speech perception: sense, nonsense and modulations.

    Directory of Open Access Journals (Sweden)

    Stuart Rosen

    Full Text Available The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialization for specific acoustic features in speech, particularly regarding 'rapid temporal processing'. A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences which could be manipulated in spectro-temporal complexity, and whether they were intelligible or not. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) which could be static or varying in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds. Neural activity to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

  7. Speech and music perception with the new fine structure speech coding strategy: preliminary results.

    Science.gov (United States)

    Arnoldner, Christoph; Riss, Dominik; Brunner, Markus; Durisin, Martin; Baumgartner, Wolf-Dieter; Hamzavi, Jafar-Sasan

    2007-12-01

    Taking into account the excellent results with significant improvements in the speech tests and the very high satisfaction of the patients using the new strategy, this first implementation of a fine structure strategy could offer a new quality of hearing with cochlear implants (CIs). This study consisted of an intra-individual comparison of speech recognition, music perception and patient preference when subjects used two different speech coding strategies with a MedEl Pulsar CI: continuous interleaved sampling (CIS) and the new fine structure processing (FSP) strategy. In contrast to envelope-based strategies, the FSP strategy also delivers subtle pitch and timing differences of sound to the user and is thereby supposed to enhance speech perception in noise and increase the quality of music perception. This was a prospective study assessing performance with two different speech coding strategies. The setting was a CI programme at an academic tertiary referral centre. Fourteen post-lingually deaf patients using a MedEl Pulsar CI with a mean CI experience of 0.98 years were supplied with the new FSP speech coding strategy. Subjects consecutively used the two different speech coding strategies. Speech and music tests were performed with the previously fitted CIS strategy, immediately after fitting with the new FSP strategy and 4, 8 and 12 weeks later. The main outcome measures were individual performance and subjective assessment of two different speech processors. Speech and music test scores improved statistically significantly after conversion from CIS to FSP strategy. Twelve of 14 patients preferred the new FSP speech processing strategy over the CIS strategy.

  8. Perceptual learning of speech under optimal and adverse conditions.

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G

    2014-02-01

    Humans have a remarkable ability to understand spoken language despite the large amount of variability in speech. Previous research has shown that listeners can use lexical information to guide their interpretation of atypical sounds in speech (Norris, McQueen, & Cutler, 2003). This kind of lexically induced perceptual learning enables people to adjust to the variations in utterances due to talker-specific characteristics, such as individual identity and dialect. The current study investigated perceptual learning in two optimal conditions: conversational speech (Experiment 1) versus clear speech (Experiment 2), and three adverse conditions: noise (Experiment 3a) versus two cognitive loads (Experiments 4a and 4b). Perceptual learning occurred in the two optimal conditions and in the two cognitive load conditions, but not in the noise condition. Furthermore, perceptual learning occurred only in the first of two sessions for each participant, and only for atypical /s/ sounds and not for atypical /f/ sounds. This pattern of learning and nonlearning reflects a balance between flexibility and stability that the speech system must have to deal with speech variability in the diverse conditions that speech is encountered. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. Outcomes of Palatal Lift Prosthesis on Dysarthric Speech.

    Science.gov (United States)

    Alfwaress, Firas S D; Bibars, Abdel Rahim; Hamasha, Abedelhadi; Maaitah, Emad Al

    2017-01-01

    This study was designed to investigate the effect of palatal lift prosthesis (PLP) on the speech of individuals with different types of dysarthria. Thirty (19 males and 11 females) native speakers of Jordanian Arabic with dysarthria participated in the study. The age of the participants ranged from 8 to 67 years with an average of 34.1 years. Traumatic brain injury was the most common etiology of dysarthria among 12 participants, stroke among 11, multiple sclerosis among 3, and pseudobulbar palsy among 2; 1 participant had Parkinson disease, and another participant had amyotrophic lateral sclerosis. Five acoustic and aerodynamic measures were evaluated to determine the speech outcomes including nasalance scores, sequential motion rate, speech rate, vital capacity, and sound pressure level. The acoustic measures were obtained from the participants in PLP-out and PLP-in conditions. Results showed statistically significant decrease in the nasalance scores of the syllable repetition, vowel prolongation, and sentence repetition tasks in the PLP-in condition below the 28% cutoff score. Furthermore, results revealed statistically significant increase in sequential motion rate, speech rate, vital capacity, and sound pressure level (P = 0.000). The use of PLP is an effective treatment option of dysarthric speech. Besides nasalance scores, the sequential motion rate, speech rate, vital capacity, and sound pressure level are considered reliable speech measures that may be used to evaluate the effect of PLP on dysarthria.

  10. Second Sound

    Indian Academy of Sciences (India)

    Second Sound - The Role of Elastic Waves. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 6, June 1999, pp 15-19. Permanent link: http://www.ias.ac.in/article/fulltext/reso/004/06/0015-0019

  11. Second Sound

    Indian Academy of Sciences (India)

    Second Sound - Waves of Entropy and Temperature. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 3, March 1999, pp 16-24. Permanent link: http://www.ias.ac.in/article/fulltext/reso/004/03/0016-0024

  12. Second Sound

    Indian Academy of Sciences (India)

    as a function of q is called a dispersion curve. Landau postulated ... R Srinivasan is a Visiting Professor at the Raman Research Institute after retiring as ... Second sound was seen in solid 4He crystals by Ackermann and others in 1966. 4He will not solidify even at absolute zero of temperature unless one applies a pressure ...

  13. Second Sound

    Indian Academy of Sciences (India)

    Second Sound - The Role of Elastic Waves. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 6, June 1999, pp 15-19. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/06/0015-0019

  14. Sound engineer

    CERN Document Server

    Mara, Wil

    2015-01-01

    "Readers will learn what it takes to succeed as a sound engineer. The book also explains the necessary educational steps, useful character traits, potential hazards, and daily job tasks related to this career. Sidebars include thought-provoking trivia. Questions in the backmatter ask for text-dependent analysis. Photos, a glossary, and additional resources are included."-- Provided by publisher.

  15. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  16. Masking Property Based Residual Acoustic Echo Cancellation for Hands-Free Communication in Automobile Environment

    Science.gov (United States)

    Lee, Yoonjae; Jeong, Seokyeong; Ko, Hanseok

    A residual acoustic echo cancellation method that employs the masking property is proposed to enhance the speech quality of hands-free communication devices in an automobile environment. The conventional masking property is employed for speech enhancement using the masking threshold of the desired clean speech signal. In this Letter, either the near-end speech or the residual noise is selected as the desired signal according to the double-talk detector. Then, the residual echo signal is masked by the desired signal (the masker). Experiments confirm the effectiveness of the proposed method by measuring the echo return loss enhancement and by examining speech waveforms and spectrograms.
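
    The echo return loss enhancement (ERLE) used in the evaluation is a standard power ratio. A minimal sketch of a frame-wise version, with assumed names and frame size:

```python
import numpy as np

def erle_db(mic, residual, frame=1024, eps=1e-12):
    """ERLE per frame: microphone (echo) power over post-cancellation
    residual power, in dB. Higher values mean better echo suppression."""
    n = min(len(mic), len(residual)) // frame
    out = np.empty(n)
    for i in range(n):
        m = mic[i * frame:(i + 1) * frame]
        r = residual[i * frame:(i + 1) * frame]
        out[i] = 10.0 * np.log10((np.mean(m ** 2) + eps) /
                                 (np.mean(r ** 2) + eps))
    return out
```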

  17. Changes in Voice Onset Time and Motor Speech Skills in Children following Motor Speech Therapy: Evidence from /pa/ productions

    Science.gov (United States)

    Yu, Vickie Y.; Kadis, Darren S.; Oh, Anna; Goshulak, Debra; Namasivayam, Aravind; Pukonen, Margit; Kroll, Robert; De Nil, Luc F.; Pang, Elizabeth W.

    2016-01-01

    This study evaluated changes in motor speech control and inter-gestural coordination for children with speech sound disorders (SSD) subsequent to PROMPT (Prompts for Restructuring Oral Muscular Phonetic Targets) intervention. We measured the distribution patterns of voice onset time (VOT) for a voiceless stop (/p/) to examine the changes in inter-gestural coordination. Two standardized tests were used (VMPAC, GFTA-2) to assess the changes in motor speech skills and articulation. Data showed positive changes in patterns of VOT with a lower pattern of variability. All children showed significantly higher scores for VMPAC, but only some children showed higher scores for GFTA-2. Results suggest that the proprioceptive feedback provided through PROMPT had a positive influence on motor speech control and inter-gestural coordination in voicing behavior. This set of VOT data for children with SSD adds to our understanding of the speech characteristics underlying motor speech control. Directions for future studies are discussed. PMID:24446799

  18. Optimizing acoustical conditions for speech intelligibility in classrooms

    Science.gov (United States)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields (approximately diffuse and non-diffuse) using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with

  19. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  20. Standardization of Speech Corpus

    Directory of Open Access Journals (Sweden)

    Ai-jun Li

    2007-12-01

    Full Text Available Speech corpus is the basis for analyzing the characteristics of speech signals and developing speech synthesis and recognition systems. In China, almost all speech research and development affiliations are developing their own speech corpora. So many different Chinese speech corpora exist that it is important to be able to share them conveniently, to avoid wasting time and money and to make research work more efficient. The primary goal of this research is to find a standard scheme which can make corpora easier to establish and easier to use or share. A huge speech corpus covering 10 regional accents of Chinese, RASC863 (a Regional Accent Speech Corpus funded by the National 863 Project), is presented as an example to illustrate the standardization of speech corpus production.

  1. Ultrasound biofeedback treatment for persisting childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Brick, Nickole; Landi, Nicole

    2013-11-01

    The purpose of this study was to evaluate the efficacy of a treatment program that includes ultrasound biofeedback for children with persisting speech sound errors associated with childhood apraxia of speech (CAS). Six children ages 9-15 years participated in a multiple baseline experiment for 18 treatment sessions during which treatment focused on producing sequences involving lingual sounds. Children were cued to modify their tongue movements using visual feedback from real-time ultrasound images. Probe data were collected before, during, and after treatment to assess word-level accuracy for treated and untreated sound sequences. As participants reached preestablished performance criteria, new sequences were introduced into treatment. All participants met the performance criterion (80% accuracy for 2 consecutive sessions) on at least 2 treated sound sequences. Across the 6 participants, performance criterion was met for 23 of 31 treated sequences in an average of 5 sessions. Some participants showed no improvement in untreated sequences, whereas others showed generalization to untreated sequences that were phonetically similar to the treated sequences. Most gains were maintained 2 months after the end of treatment. The percentage of phonemes correct increased significantly from pretreatment to the 2-month follow-up. A treatment program including ultrasound biofeedback is a viable option for improving speech sound accuracy in children with persisting speech sound errors associated with CAS.

  2. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is directed mainly at environmental problems, and the theory should therefore always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present-day Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research: they apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  3. The Relationship Between Speech, Language, and Phonological Awareness in Preschool-Age Children With Developmental Disabilities.

    Science.gov (United States)

    Barton-Hulsey, Andrea; Sevcik, Rose A; Romski, MaryAnn

    2018-03-22

    A number of intrinsic factors, including expressive speech skills, have been suggested to place children with developmental disabilities at risk for limited development of reading skills. This study examines the relationship between these factors, speech ability, and children's phonological awareness skills. A nonexperimental study design was used to examine the relationship between intrinsic skills of speech, language, print, and letter-sound knowledge to phonological awareness in 42 children with developmental disabilities between the ages of 48 and 69 months. Hierarchical multiple regression was done to determine if speech ability accounted for a unique amount of variance in phonological awareness skill beyond what would be expected by developmental skills inclusive of receptive language and print and letter-sound knowledge. A range of skill in all areas of direct assessment was found. Children with limited speech were found to have emerging skills in print knowledge, letter-sound knowledge, and phonological awareness. Speech ability did not predict a significant amount of variance in phonological awareness beyond what would be expected by developmental skills of receptive language and print and letter-sound knowledge. Children with limited speech ability were found to have receptive language and letter-sound knowledge that supported the development of phonological awareness skills. This study provides implications for practitioners and researchers concerning the factors related to early reading development in children with limited speech ability and developmental disabilities.

  4. Neurophysiological Evidence That Musical Training Influences the Recruitment of Right Hemispheric Homologues for Speech Perception

    Directory of Open Access Journals (Sweden)

    McNeel Gordon Jantzen

    2014-03-01

    Full Text Available Musicians have a more accurate temporal and tonal representation of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark, Skoe, & Kraus, 2009; Zendel & Alain, 2008; Musacchia, Sams, Skoe, & Kraus, 2007). Musicians who are adept at the production and perception of music are also more sensitive to key acoustic features of speech such as voice onset timing and pitch. Together, these data suggest that musical training may enhance the processing of acoustic information for speech sounds. In the current study, we sought to provide neural evidence that musicians process speech and music in a similar way. We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged for the processing of speech sounds. In contrast, we predicted that in non-musicians processing of speech sounds would be localized to traditional left hemisphere language areas. Speech stimuli differing in voice onset time were presented using a dichotic listening paradigm. Subjects either indicated the aural location of a specified speech sound or identified a specific speech sound from a directed aural location. Musical training effects and organization of acoustic features were reflected by activity in source generators of the P50. This included greater activation of the right middle temporal gyrus (MTG) and superior temporal gyrus (STG) in musicians. The findings demonstrate recruitment of the right hemisphere in musicians for discriminating speech sounds and a putative broadening of their language network. Musicians appear to have an increased sensitivity to acoustic features and enhanced selective attention to temporal features of speech that is facilitated by musical training and supported, in part, by right hemisphere homologues of established speech processing regions of the brain.

  5. The selective role of premotor cortex in speech perception: a contribution to phoneme judgements but not speech comprehension.

    Science.gov (United States)

    Krieger-Redwood, Katya; Gaskell, M Gareth; Lindsay, Shane; Jefferies, Elizabeth

    2013-12-01

    Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes, across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus, and occipital pole) and, for the first time within the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to two further sites confirmed that this pattern was site specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, crucially, PMC is not necessary for mapping speech onto meanings.

  6. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  7. SUSTAINABILITY IN THE BOWELS OF SPEECHES

    Directory of Open Access Journals (Sweden)

    Jadir Mauro Galvao

    2012-10-01

    Full Text Available The theme of sustainability has not yet become an integral part of the theoretical repertoire that underlies our everyday actions, although it visits some of our thoughts and permeates many of our speeches. The big event of 2012, the Rio+20 meeting, gathered gazes from all corners of the planet around this burning theme, yet we still move forward timidly. Although we have no very clear idea of what the term sustainability encompasses, it does not sound entirely strange: it is associated with things like ecology, the planet, waste emitted by factory smokestacks, deforestation, recycling and global warming. Our goal in this article, however, is less to clarify the term conceptually than to observe how it appears in the speeches of that conference. When the competent authorities talk about sustainability, what do they refer to? We intend to investigate the lines, and what lies between the lines, of these speeches for any assumptions associated with the term. We therefore analyze the speech of the People's Summit, the opening speech of President Dilma and the emblematic speech of the President of Uruguay, José "Pepe" Mujica.

  8. Segmentation of Speech and Humming in Vocal Input

    Directory of Open Access Journals (Sweden)

    A. J. Sporka

    2012-09-01

    Full Text Available Non-verbal vocal interaction (NVVI) is an interaction method in which sounds other than speech produced by a human are used, such as humming. NVVI complements traditional speech recognition systems with continuous control. In order to combine the two approaches (e.g. "volume up, mmm") it is necessary to perform a speech/NVVI segmentation of the input sound signal. This paper presents two novel methods of speech and humming segmentation. The first method is based on classification of MFCC and RMS parameters using a neural network (MFCC method), while the other method computes volume changes in the signal (IAC method). The two methods are compared using a corpus collected from 13 speakers. The results indicate that the MFCC method outperforms IAC in terms of accuracy, precision, and recall.
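
    The MFCC-method front end described above is straightforward to approximate. The sketch below stacks per-frame MFCC and RMS parameters, as might feed such a neural-network classifier; it relies on librosa defaults and is an assumption-laden stand-in for the authors' setup, not their code.

```python
import numpy as np
import librosa

def speech_humming_features(y, sr, n_mfcc=12):
    """Per-frame [MFCC..., RMS] vectors for speech/NVVI classification."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, T)
    rms = librosa.feature.rms(y=y)                          # (1, T)
    return np.vstack([mfcc, rms]).T                         # (T, n_mfcc + 1)
```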

  9. SOUNDS OF MODERN TALK AIR

    Directory of Open Access Journals (Sweden)

    Bysko Maxim V.

    2012-12-01

    Full Text Available The author examines the role of broadcasting from its inception to the present day, pointing to a new historical cycle of mass media that links modern radio with the radio of the 1920s-30s. The artistic genres of broadcasting and TV news are considered in direct synthesis with informational radio genres. The more organized and balanced sound of contemporary information radio (the ordering of texts, the music-speech structure, and the sound design) now occupies a more limited, local space in society: radio is confined not only within national boundaries but also within cultural, subcultural and narrow consumer boundaries. Hence the clear dominance of the in-car radio audience, as well as a return to private broadcasting (mobile devices, web channels, podcasts).

  10. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    Environmental sound archives - casual recordings of people's daily life - are easily collected with MP3 players or camcorders at low cost and with high reliability, and are shared on websites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, the environmental archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts in a collection of such real-world recordings. The first system segments and labels personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system identifies regions of speech or music in the kinds of energetic and highly variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech- or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added to suppress these noises in the autocorrelogram domain. In addition, the third system automatically detects a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of
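
    The pitch-based speech/music detection motivated above rests on autocorrelation peak picking. A minimal sketch follows, without the thesis's autocorrelogram-domain noise suppression; the lag range, voicing measure, and function name are assumptions, and frames should span at least one full pitch period (roughly 35 ms or more).

```python
import numpy as np

def autocorr_pitch(frame, sr, fmin=60.0, fmax=400.0):
    """Pick the strongest autocorrelation peak in the plausible pitch
    range; the peak-to-energy ratio is a crude voicing/pitchiness score."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    strength = ac[lag] / (ac[0] + 1e-12)   # near 1 for strongly periodic frames
    return sr / lag, strength
```

    Frames whose strength stays high over time would be flagged as speech- or music-like regions; the thesis further suppresses periodic background noise in the autocorrelogram before this decision.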

  11. Visual influences on speech perception in children with autism.

    Science.gov (United States)

    Iarocci, Grace; Rombough, Adrienne; Yager, Jodi; Weeks, Daniel J; Chua, Romeo

    2010-07-01

    The bimodal perception of speech sounds was examined in children with autism as compared to mental age-matched typically developing (TD) children. A computer task was employed wherein only the mouth region of the face was displayed and children reported what they heard or saw when presented with consonant-vowel sounds in unimodal auditory condition, unimodal visual condition, and a bimodal condition. Children with autism showed less visual influence and more auditory influence on their bimodal speech perception as compared to their TD peers, largely due to significantly worse performance in the unimodal visual condition (lip reading). Children with autism may not benefit to the same extent as TD children from visual cues such as lip reading that typically support the processing of speech sounds. The disadvantage in lip reading may be detrimental when auditory input is degraded, for example in school settings, whereby speakers are communicating in frequently noisy environments.

  12. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus eChristiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study, and their ability to sing, their ability to imitate speech, their musical talent and their working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance for completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory, with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood, both perceptually and productively. 3. The ability to sing improves the memory span of the auditory short-term memory.

  13. The neural basis of sublexical speech and corresponding nonspeech processing: a combined EEG-MEG study.

    Science.gov (United States)

    Kuuluvainen, Soila; Nevalainen, Päivi; Sorokin, Alexander; Mittag, Maria; Partanen, Eino; Putkinen, Vesa; Seppänen, Miia; Kähkönen, Seppo; Kujala, Teija

    2014-03-01

    We addressed the neural organization of speech versus nonspeech sound processing by investigating preattentive cortical auditory processing of changes in five features of a consonant-vowel syllable (consonant, vowel, sound duration, frequency, and intensity) and their acoustically matched nonspeech counterparts in a simultaneous EEG-MEG recording of mismatch negativity (MMN/MMNm). Overall, speech-sound processing was enhanced compared to nonspeech sound processing. This effect was strongest in the left hemisphere for changes which affect word meaning (consonant, vowel, and vowel duration), and present in the right hemisphere for the vowel identity change as well. Furthermore, in the right hemisphere, speech-sound frequency and intensity changes were processed faster than their nonspeech counterparts, and there was a trend toward speech enhancement in frequency processing. In summary, the results support the proposed existence of long-term memory traces for speech sounds in the auditory cortices, and indicate at least partly distinct neural substrates for speech and nonspeech sound processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Parent-child interaction in motor speech therapy.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

    This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed alongside direct motor speech intervention for children with speech sound disorders. Eighty-four preschool-age children with speech sound disorders were provided either high-intensity (2×/week for 10 weeks) or low-intensity (1×/week for 10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item, followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items, and a greater number of sessions positively influenced parent learning of treatment skills and child behaviors. The adapted PCIOs is reliable and sensitive enough to monitor the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation: Parent-centered therapy is considered a cost-effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments, such as developmental motor speech interventions, that require a high degree of skill and training. For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of the parent-child interactions during
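    For readers who want to reproduce this style of analysis, the sketch below runs the two reported statistics on hypothetical data: Cohen's kappa for two raters, and a Friedman test across the three time points followed by a pairwise Wilcoxon comparison. All scores are invented for illustration; only the statistical procedures follow the abstract.

      import numpy as np
      from scipy.stats import friedmanchisquare, wilcoxon

      def cohens_kappa(r1, r2, n_levels):
          """Cohen's kappa for two raters scoring the same items on an ordinal scale."""
          conf = np.zeros((n_levels, n_levels))
          for a, b in zip(r1, r2):
              conf[a, b] += 1
          conf /= conf.sum()
          p_obs = np.trace(conf)                       # observed agreement
          p_exp = conf.sum(axis=1) @ conf.sum(axis=0)  # chance agreement
          return (p_obs - p_exp) / (1 - p_exp)

      # Hypothetical item scores (0-4) from the treating clinician and an
      # independent rater at the treatment midpoint.
      clinician = [3, 2, 4, 1, 3, 2, 0, 4, 3, 2]
      rater     = [3, 2, 3, 1, 3, 1, 0, 4, 3, 2]
      print("kappa =", round(cohens_kappa(clinician, rater, 5), 2))

      # Hypothetical per-child scores on one PCIOs item at three time points.
      begin  = [1, 2, 1, 2, 1, 3, 2, 1]
      middle = [2, 2, 2, 3, 2, 3, 2, 2]
      end    = [3, 3, 2, 4, 3, 4, 3, 3]
      stat, p = friedmanchisquare(begin, middle, end)
      print("Friedman p =", round(p, 4))
      if p < 0.05:  # follow up with pairwise Wilcoxon comparisons
          print("begin vs end p =", round(wilcoxon(begin, end).pvalue, 4))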

  15. Childhood apraxia of speech and multiple phonological disorders in Cairo-Egyptian Arabic speaking children: language, speech, and oro-motor differences.

    Science.gov (United States)

    Aziz, Azza Adel; Shohdi, Sahar; Osman, Dalia Mostafa; Habib, Emad Iskander

    2010-06-01

    Childhood apraxia of speech is a neurological childhood speech-sound disorder in which the precision and consistency of movements underlying speech are impaired in the absence of neuromuscular deficits. Children with childhood apraxia of speech and those with multiple phonological disorder share some common phonological errors that can be misleading in diagnosis. This study asked whether there are significant differences in language, speech and non-speech oral performance between children with childhood apraxia of speech, children with multiple phonological disorder, and typically developing children that could serve differential diagnosis. Thirty pre-school children between the ages of 4 and 6 years served as participants. Each child belonged to one of three groups: Group 1: multiple phonological disorder; Group 2: suspected cases of childhood apraxia of speech; Group 3: control group with no communication disorder. Assessment procedures included parent interviews, testing of non-speech oral motor skills, and testing of speech skills. The data showed that children with suspected childhood apraxia of speech had significantly lower language scores only in their expressive abilities. Non-speech tasks did not identify significant differences between the childhood apraxia of speech and multiple phonological disorder groups, except for those which required two sequential motor performances. In speech tasks, both consonant and vowel accuracy were significantly lower and less consistent in the childhood apraxia of speech group than in the multiple phonological disorder group. Syllable number, shape and sequence accuracy differed significantly in the childhood apraxia of speech group from the other two groups. In addition, children with childhood apraxia of speech showed greater difficulty in processing prosodic features, indicating a clear need to address these variables in the differential diagnosis and treatment of children with childhood apraxia of speech.

  16. The Role of Somatosensory Information in Speech Perception: Imitation Improves Recognition of Disordered Speech.

    Science.gov (United States)

    Borrie, Stephanie A; Schäfer, Martina C M

    2015-12-01

    Perceptual learning paradigms involving written feedback appear to be a viable clinical tool to reduce the intelligibility burden of dysarthria. The underlying theoretical assumption is that pairing the degraded acoustics with the intended lexical targets facilitates a remapping of existing mental representations in the lexicon. This study investigated whether ties to mental representations can be strengthened by way of a somatosensory motor trace. Following an intelligibility pretest, 100 participants were assigned to 1 of 5 experimental groups. The control group received no training, but the other 4 groups received training with dysarthric speech under conditions involving a unique combination of auditory targets, written feedback, and/or a vocal imitation task. All participants then completed an intelligibility posttest. Training improved intelligibility of dysarthric speech, with the largest improvements observed when the auditory targets were accompanied by both written feedback and an imitation task. Further, a significant relationship between intelligibility improvement and imitation accuracy was identified. This study suggests that somatosensory information can strengthen the activation of speech sound maps of dysarthric speech. The findings, therefore, implicate a bidirectional relationship between speech perception and speech production as well as advance our understanding of the mechanisms that underlie perceptual learning of degraded speech.

  17. The effects of auditory contrast tuning upon speech intelligibility

    OpenAIRE

    Nathaniel J Killian; Paul Watkins; Lisa Davidson; Dennis L. Barbour

    2016-01-01

    We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provi...

  18. Speech-induced suppression of evoked auditory fields in children who stutter.

    Science.gov (United States)

    Beal, Deryk S; Quraan, Maher A; Cheyne, Douglas O; Taylor, Margot J; Gracco, Vincent L; De Nil, Luc F

    2011-02-14

    Auditory responses to speech sounds that are self-initiated are suppressed compared to responses to the same speech sounds during passive listening. This phenomenon is referred to as speech-induced suppression, a potentially important feedback-mediated speech-motor control process. In an earlier study, we found that both adults who do and do not stutter demonstrated a reduced amplitude of the auditory M50 and M100 responses to speech during active production relative to passive listening. It is unknown if auditory responses to self-initiated speech-motor acts are suppressed in children or if the phenomenon differs between children who do and do not stutter. As stuttering is a developmental speech disorder, examining speech-induced suppression in children may identify possible neural differences underlying stuttering close to its time of onset. We used magnetoencephalography to determine the presence of speech-induced suppression in children and to characterize the properties of speech-induced suppression in children who stutter. We examined the auditory M50 as this was the earliest robust response reproducible across our child participants and the most likely to reflect a motor-to-auditory relation. Both children who do and do not stutter demonstrated speech-induced suppression of the auditory M50. However, children who stutter had a delayed auditory M50 peak latency to vowel sounds compared to children who do not stutter, indicating a possible deficiency in their ability to efficiently integrate auditory speech information for the purpose of establishing neural representations of speech sounds. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Brain stem evoked response to forward and reversed speech in humans.

    Science.gov (United States)

    Galbraith, Gary C; Amaya, Elizabeth M; de Rivera, Jacinta M Diaz; Donan, Namee M; Duong, Mylien T; Hsu, Jeffrey N; Tran, Kim; Tsang, Lian P

    2004-09-15

    Speech stimuli played in reverse are perceived as unfamiliar and alien-sounding, even though phoneme duration and fundamental voicing frequency are preserved. Although language perception ultimately resides in the neocortex, the brain stem plays a vital role in processing auditory information, including speech. The present study measured brain stem frequency-following responses (FFR) evoked by forward and reversed speech stimuli, recorded from electrodes oriented horizontally and vertically to capture signals with putative origins in the auditory nerve and rostral brain stem, respectively. The vertical FFR showed increased amplitude in response to forward speech. It is concluded that familiar phonological and prosodic properties of forward speech selectively activate central brain stem neurons.

  1. Hearing aid processing of loud speech and noise signals: Consequences for loudness perception and listening comfort

    DEFF Research Database (Denmark)

    Schmidt, Erik

    2007-01-01

    Hearing aid processing of loud speech and noise signals: Consequences for loudness perception and listening comfort. Sound processing in hearing aids is determined by the fitting rule. The fitting rule describes how the hearing aid should amplify speech and sounds in the surroundings ... research - for example investigations of loudness perception in hearing-impaired listeners. Most research has been focused on speech and sounds at medium input levels (e.g., 60-65 dB SPL). It is well documented that for speech at conversational levels, hearing aid users prefer the signal to be amplified ..., such prescriptions are based mainly on logic, as there is limited evidence on what type of amplification is best for these input levels. The focus of the PhD project has been on hearing aid processing of loud speech and noise signals. Previous research, investigating the preferred listening levels for soft and loud ...

  2. From air oscillations to music and speech: functional magnetic resonance imaging evidence for fine-tuned neural networks in audition.

    Science.gov (United States)

    Tervaniemi, Mari; Szameitat, André J; Kruck, Stefanie; Schröger, Erich; Alter, Kai; De Baene, Wouter; Friederici, Angela D

    2006-08-23

    In the auditory modality, music and speech have high informational and emotional value for human beings. However, the degree of functional specialization of the cortical and subcortical areas in encoding music and speech sounds is not yet known. We investigated the functional specialization of the human auditory system in processing music and speech by functional magnetic resonance imaging recordings. During recordings, the subjects were presented with saxophone sounds and pseudowords /ba:ba/ with comparable acoustical content. Our data show that areas encoding music and speech sounds differ in the temporal and frontal lobes. Moreover, slight variations in sound pitch and duration activated thalamic structures differentially. However, this was the case only with speech sounds; no such effect was evident with music sounds. Thus, our data reveal the existence of a functional specialization of the human brain in accurately representing sound information at both cortical and subcortical areas. They indicate that not only the sound category (speech/music) but also the sound parameter (pitch/duration) can be selectively encoded.

  3. Prevalence of Speech Disorders in Arak Primary School Students, 2014-2015

    Directory of Open Access Journals (Sweden)

    Abdoreza Yavari

    2016-09-01

    Full Text Available Abstract Background: Speech disorders may cause irreparable damage to a child's speech and language development and psychosocial wellbeing. Voice, speech sound production and fluency disorders are speech disorders that may result from delay or impairment of the speech motor control mechanism, central nervous system disorders, improper language stimulation or voice abuse. Materials and Methods: This study examined the prevalence of speech disorders in 1393 Arak primary school students in grades 1 to 6. After collecting continuous speech samples, picture description, passage reading and a phonetic test, we recorded the pathological signs of stuttering, articulation disorder and voice disorders on a special sheet. Results: The prevalence of articulation, voice and stuttering disorders was 8%, 3.5% and 1%, respectively, and the overall prevalence of speech disorders was 11.9%. The prevalence of speech disorders decreased with increasing grade; 12.2% of boy students and 11.7% of girl students of primary schools in Arak had speech disorders. Conclusion: The prevalence of speech disorders among primary school students in Arak is similar to that in Kermanshah, but smaller than in many similar studies in Iran. It seems that racial and cultural diversity has some effect on increasing the prevalence of speech disorders in Arak city.

  4. Mapping Speech Spectra from Throat Microphone to Close-Speaking Microphone: A Neural Network Approach

    Directory of Open Access Journals (Sweden)

    B. Yegnanarayana

    2007-01-01

    Full Text Available Speech recorded from a throat microphone is robust to surrounding noise but sounds unnatural, unlike speech recorded from a close-speaking microphone. This paper addresses the issue of improving the perceptual quality of throat microphone speech by mapping the speech spectra from the throat microphone to the close-speaking microphone. A neural network model is used to capture the speaker-dependent functional relationship between the feature vectors (cepstral coefficients) of the two speech signals. A method is proposed to ensure the stability of the all-pole synthesis filter. Objective evaluations indicate the effectiveness of the proposed mapping scheme. The advantage of this method is that the model gives a smooth estimate of the spectra of the close-speaking microphone speech. No distortions are perceived in the reconstructed speech. This mapping technique is also used for bandwidth extension of telephone speech.
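    A minimal sketch of the kind of speaker-dependent spectral mapping described here: a one-hidden-layer network trained to map throat-microphone cepstral vectors to close-speaking-microphone cepstral vectors. The data are synthetic stand-ins and the architecture is an assumption; the paper's exact network and features may differ.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical paired training data: cepstral vectors from simultaneous
      # throat-microphone and close-speaking-microphone recordings (13-dim).
      n, d = 2000, 13
      X = rng.normal(size=(n, d))            # throat-mic cepstra (stand-in)
      true_map = rng.normal(size=(d, d)) * 0.3
      Y = np.tanh(X @ true_map)              # close-mic cepstra (stand-in)

      # One-hidden-layer MLP trained with plain gradient descent on MSE.
      h = 32
      W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
      W2 = rng.normal(scale=0.1, size=(h, d)); b2 = np.zeros(d)
      lr = 0.05
      for epoch in range(500):
          A = np.tanh(X @ W1 + b1)           # hidden activations
          P = A @ W2 + b2                    # predicted close-mic cepstra
          err = P - Y
          gW2 = A.T @ err / n; gb2 = err.mean(0)
          dA = err @ W2.T * (1 - A**2)       # backprop through tanh
          gW1 = X.T @ dA / n; gb1 = dA.mean(0)
          W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
      print("final MSE:", float((err**2).mean()))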

  5. Foreign Subtitles Help but Native-Language Subtitles Harm Foreign Speech Perception

    Science.gov (United States)

    Mitterer, Holger; McQueen, James M.

    2009-01-01

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken. PMID:19918371

  6. Evaluation of Sound Quality, Boominess and Boxiness in Small Rooms

    DEFF Research Database (Denmark)

    Weisser, Adam; Rindel, Jens Holger

    2006-01-01

    The acoustics of small rooms has been studied with emphasis on sound quality, boominess and boxiness when the rooms are used for speech or music. Seven rooms with very different characteristics have been used for the study. Subjective listening tests were made using binaural recordings ... of reproduced speech and music. The test results were compared with a large number of objective acoustic parameters based on the frequency-dependent reverberation times measured in the rooms. This has led to the proposal of three new acoustic parameters, which have shown high correlation with the subjective ... ratings. The classical bass ratio definitions showed poor correlation with all subjective ratings. The overall sound quality ratings gave different results for speech and music. For speech the preferred mean RT should be as low as possible, whereas for music there was found a preferred range between 0...
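    The "classical bass ratio" referred to above is conventionally computed from octave-band reverberation times, BR = (RT125 + RT250) / (RT500 + RT1000). A minimal sketch, with a hypothetical measurement:

      def bass_ratio(rt):
          """Classical bass ratio from octave-band reverberation times (s).

          rt maps octave-band centre frequency (Hz) to measured RT.
          Values well above 1 suggest a bass-heavy ("boomy") room."""
          return (rt[125] + rt[250]) / (rt[500] + rt[1000])

      # Hypothetical measurement of a small listening room:
      print(bass_ratio({125: 0.8, 250: 0.7, 500: 0.5, 1000: 0.45}))  # ~1.58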

  7. Speech and Language Developmental Milestones

    Science.gov (United States)

    What are speech and language developmental milestones? How do speech and language develop? The first 3 years of life, when ...

  8. Delayed Speech or Language Development

    Science.gov (United States)

    Delayed Speech or Language Development ... their child is right on schedule. How Are Speech and Language Different? Speech is the verbal expression ...

  9. Cross-language and second language speech perception

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2017-01-01

    Key issues in cross-language and second language speech perception research include the mapping issue (the perceptual relationship of sounds of the native and the nonnative language in the mind of the native listener and the L2 learner), the perceptual and learning difficulty/ease issue (how this relationship may or may not cause perceptual and learning difficulty), and the plasticity issue (whether and how experience with the nonnative language affects the perceptual organization of speech sounds in the mind of L2 learners). One important general conclusion from this research is that perceptual learning is possible at all ages but will be influenced by the entire language learning history of the individual.

  10. Visual speech acts differently than lexical context in supporting speech perception.

    Science.gov (United States)

    Samuel, Arthur G; Lieblich, Jerrold

    2014-08-01

    The speech signal is often badly articulated, and heard under difficult listening conditions. To deal with these problems, listeners make use of various types of context. In the current study, we examine a type of context that in previous work has been shown to affect how listeners report what they hear: visual speech (i.e., the visible movements of the speaker's articulators). Despite the clear utility of this type of context under certain conditions, prior studies have shown that visually driven phonetic percepts (via the "McGurk" effect) are not "real" enough to affect perception of later-occurring speech; such percepts have not produced selective adaptation effects. This failure contrasts with successful adaptation by sounds that are generated by lexical context (the word that a sound occurs within). We demonstrate here that this dissociation is robust, leading to the conclusion that visual and lexical contexts operate differently. We suggest that the dissociation reflects the dual nature of speech as both a perceptual object and a linguistic object. Visual speech seems to contribute directly to the computations of the perceptual object but not the linguistic one, while lexical context is used in both types of computations.

  11. The effects of noise on speech and warning signals

    Science.gov (United States)

    Suter, Alice H.

    1989-06-01

    To assess the effects of noise on speech communication, it is necessary to examine certain characteristics of the speech signal. Speech level can be measured by a variety of methods, none of which has yet been standardized, and it should be kept in mind that vocal effort increases with background noise level and with different types of activity. Noise and filtering commonly degrade the speech signal, especially as it is transmitted through communications systems. Intelligibility is also adversely affected by distance, reverberation, and monaural listening. Communication systems currently in use may cause strain and delays on the part of the listener, but there are many possibilities for improvement. Individuals who need to communicate in noise may be subject to voice disorders. Shouted speech becomes progressively less intelligible at high voice levels, but improvements can be realized when talkers use clear speech. Tolerable listening levels are lower for negative than for positive S/Ns, and comfortable listening levels should be at an S/N of at least 5 dB, and preferably above 10 dB. Popular methods to predict speech intelligibility in noise include the Articulation Index, Speech Interference Level, Speech Transmission Index, and the sound level meter's A-weighting network. This report describes these methods, discussing certain advantages and disadvantages of each, and shows their interrelations.
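    The S/N guidance in this abstract lends itself to a one-line rule of thumb. The sketch below encodes it, assuming A-weighted speech and noise levels in dB; the thresholds come from the abstract, everything else is illustrative.

      def listening_rating(speech_dba, noise_dba):
          """Rough comfort rating from A-weighted speech and noise levels (dB),
          following the rule of thumb cited above: comfortable listening needs
          an S/N of at least 5 dB, preferably above 10 dB."""
          snr = speech_dba - noise_dba
          if snr > 10:
              return snr, "comfortable"
          if snr >= 5:
              return snr, "acceptable"
          return snr, "strained (negative or low S/N)"

      print(listening_rating(65, 52))  # (13, 'comfortable')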

  12. Categorical Perception of Sound Frequency by Crickets

    Science.gov (United States)

    Wyttenbach, Robert A.; May, Michael L.; Hoy, Ronald R.

    1996-09-01

    Partitioning continuously varying stimuli into categories is a fundamental problem of perception. One solution to this problem, categorical perception, is known primarily from human speech, but also occurs in other modalities and in some mammals and birds. Categorical perception was tested in crickets by using two paradigms of human psychophysics, labeling and habituation-dishabituation. The results show that crickets divide sound frequency categorically between attractive (<16 kilohertz) and ultrasonic (>16 kilohertz) sounds. There is sharp discrimination between these categories but no discrimination between different frequencies of ultrasound. This demonstration of categorical perception in an invertebrate suggests that categorical perception may be a basic and widespread feature of sensory systems, from humans to invertebrates.

  13. Speech and Communication Disorders

    Science.gov (United States)

    ... Causes include speech problems like stuttering, developmental disabilities, learning disorders, autism spectrum disorder, brain injury, and stroke. Some speech and communication problems may be genetic. Often, no one knows the causes. By first grade, about 5 percent of children ...

  14. Speech disorders - children

    Science.gov (United States)

    ... after age 4 (I want...I want my doll. I...I see you.) Putting in (interjecting) extra ... may outgrow milder forms of speech disorders. Speech therapy may help with more severe symptoms or any ...

  15. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal ordering of sensory inputs in speech production.

  16. The pathways for intelligible speech: multivariate and univariate perspectives.

    Science.gov (United States)

    Evans, S; Kyong, J S; Rosen, S; Golestani, N; Warren, J E; McGettigan, C; Mourão-Miranda, J; Wise, R J S; Scott, S K

    2014-09-01

    An anterior pathway, concerned with extracting meaning from sound, has been identified in nonhuman primates. An analogous pathway has been suggested in humans, but controversy exists concerning the degree of lateralization and the precise location where responses to intelligible speech emerge. We have demonstrated that the left anterior superior temporal sulcus (STS) responds preferentially to intelligible speech (Scott SK, Blank CC, Rosen S, Wise RJS. 2000. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 123:2400-2406.). A functional magnetic resonance imaging study in Cerebral Cortex used equivalent stimuli and univariate and multivariate analyses to argue for the greater importance of the bilateral posterior STS when compared with the left anterior STS in responding to intelligible speech (Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, Hickok G. 2010. Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex. 20:2486-2495.). Here, we also replicate our original study, demonstrating that the left anterior STS exhibits the strongest univariate response and, in decoding using the bilateral temporal cortex, contains the most informative voxels showing an increased response to intelligible speech. In contrast, in classifications using local "searchlights" and a whole brain analysis, we find greater classification accuracy in posterior rather than anterior temporal regions. Thus, we show that the precise nature of the multivariate analysis used will emphasize different response profiles associated with complex sound to speech processing. © The Author 2013. Published by Oxford University Press.

  17. A maximum likelihood approach to estimating articulator positions from speech acoustics

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.
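    The core MALCOM step, as described, is finding the most likely smooth articulator path given per-sound-type probability density functions. Under the simplifying assumptions of a one-dimensional articulator position, Gaussian densities, and a quadratic smoothness penalty, that maximum likelihood path is the solution of a tridiagonal linear system. The sketch below is a reconstruction under those stated assumptions, not the proposal's actual algorithm.

      import numpy as np

      def smooth_ml_path(means, variances, smooth=5.0):
          """Most likely 1-D articulator path given per-frame Gaussian
          likelihoods N(means[t], variances[t]) plus a quadratic penalty
          smooth * sum (x[t+1]-x[t])^2.  Minimising the negative log
          likelihood reduces to a linear system A x = b, A tridiagonal."""
          T = len(means)
          A = np.zeros((T, T))
          b = np.zeros(T)
          for t in range(T):                 # data (likelihood) terms
              A[t, t] += 1.0 / variances[t]
              b[t] += means[t] / variances[t]
          for t in range(T - 1):             # first-difference smoothness terms
              A[t, t] += smooth; A[t+1, t+1] += smooth
              A[t, t+1] -= smooth; A[t+1, t] -= smooth
          return np.linalg.solve(A, b)

      # Toy usage: noisy frame-wise estimates around a slow articulator gesture.
      t = np.linspace(0, 1, 50)
      means = np.sin(2 * np.pi * t) + np.random.default_rng(2).normal(0, 0.3, 50)
      path = smooth_ml_path(means, np.full(50, 0.09), smooth=20.0)
      print(path[:5])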

  18. The Use of Electropalatography in the Treatment of Acquired Apraxia of Speech.

    Science.gov (United States)

    Mauszycki, Shannon C; Wright, Sandra; Dingus, Nicole; Wambaugh, Julie L

    2016-12-01

    This investigation was designed to examine the effects of an articulatory-kinematic treatment in conjunction with visual biofeedback (VBFB) via electropalatography (EPG) on the accuracy of articulation for acquired apraxia of speech (AOS). A multiple-baseline design across participants and behaviors was used with 4 individuals with chronic AOS and aphasia. Accuracy of target speech sounds in treated and untreated phrases in probe sessions served as the dependent variable. Participants received an articulatory-kinematic treatment in combination with VBFB, which was sequentially applied to 3 stimulus sets composed of 2-word phrases with a target speech sound for each set. Positive changes in articulatory accuracy were observed for participants for the majority of treated speech sounds. Also, there was generalization to untreated phrases for most trained speech sounds. Two participants had better long-term maintenance of treated speech sounds in both trained and untrained stimuli. Findings indicate EPG may be a potential treatment tool for AOS. It appears that individuals with AOS can benefit from VBFB via EPG in improving articulatory accuracy. However, further research is needed to determine if VBFB is more advantageous than behavioral treatments that have been proven effective in improving speech production for speakers with AOS.

  19. Phonological representations are unconsciously used when processing complex, non-speech signals.

    Directory of Open Access Journals (Sweden)

    Mahan Azadpour

    Full Text Available Neuroimaging studies of speech processing increasingly rely on artificial speech-like sounds whose perceptual status as speech or non-speech is assigned by simple subjective judgments; brain activation patterns are interpreted according to these status assignments. The naïve perceptual status of one such stimulus, spectrally-rotated speech (not consciously perceived as speech by naïve subjects), was evaluated in discrimination and forced identification experiments. Discrimination of variation in spectrally-rotated syllables in one group of naïve subjects was strongly related to the pattern of similarities in phonological identification of the same stimuli provided by a second, independent group of naïve subjects, suggesting either that (1) naïve rotated syllable perception involves phonetic-like processing, or (2) that perception is solely based on physical acoustic similarity, and similar sounds are provided with similar phonetic identities. Analysis of acoustic (Euclidean distances of center frequency values of formants) and phonetic similarities in the perception of the vowel portions of the rotated syllables revealed that discrimination was significantly and independently influenced by both acoustic and phonological information. We conclude that simple subjective assessments of artificial speech-like sounds can be misleading, as perception of such sounds may initially and unconsciously utilize speech-like, phonological processing.
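    Spectral rotation itself is a simple signal-processing operation: band-limit the signal, then ring-modulate it with a carrier so the spectrum below the carrier is mirrored. A minimal sketch, assuming a 16 kHz mono signal and a 4 kHz rotation point (a common choice; the study's exact parameters are not given here).

      import numpy as np
      from scipy.signal import butter, filtfilt

      def spectrally_rotate(x, fs, f_rot=4000.0):
          """Spectrally rotate a signal about f_rot by ring modulation:
          band-limit below f_rot, multiply by a cosine at f_rot so the band
          0..f_rot is mirrored (f -> f_rot - f), then remove the upper image."""
          b, a = butter(6, f_rot / (fs / 2) * 0.95)  # low-pass below the carrier
          x_lp = filtfilt(b, a, x)
          t = np.arange(len(x)) / fs
          rotated = x_lp * np.cos(2 * np.pi * f_rot * t)
          return filtfilt(b, a, rotated)             # suppress the upper sideband

      # Toy usage: a 1 kHz tone at fs = 16 kHz comes out near 3 kHz.
      fs = 16000
      tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
      y = spectrally_rotate(tone, fs)
      peak = np.abs(np.fft.rfft(y)).argmax() * fs / len(y)
      print("output peak near", round(peak), "Hz")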

  20. Transcribing the Speech of Children with Cochlear Implants: Clinical Application of Narrow Phonetic Transcriptions

    Science.gov (United States)

    Teoh, Amy P.; Chin, Steven B.

    2009-01-01

    Purpose: The phonological systems of children with cochlear implants may include segment inventories that contain both target and nontarget speech sounds. These children may not consistently follow phonological rules of the target language. These issues present a challenge for the clinical speech-language pathologist who uses phonetic…