WorldWideScience

Sample records for left-hemisphere speech dominance

  1. Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots

    Directory of Open Access Journals (Sweden)

    Harvey Martin Sussman

    2015-12-01

    Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically organized neural substrate representing the phonemic sound elements of one's language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere's dominance for speech production.

  2. Hemispheric lateralization in an analysis of speech sounds. Left hemisphere dominance replicated in Japanese subjects.

    Science.gov (United States)

    Koyama, S; Gunji, A; Yabe, H; Oiwa, S; Akahane-Yamada, R; Kakigi, R; Näätänen, R

    2000-09-01

    Evoked magnetic responses to speech sounds [R. Näätänen et al., Language-specific phoneme representations revealed by electric and magnetic brain responses, Nature, 385 (1997) 432-434] were recorded from 13 right-handed Japanese subjects. Infrequently presented vowels ([o]) among repetitive vowels ([e]) elicited the magnetic counterpart of the mismatch negativity, MMNm (bilateral in nine subjects; left hemisphere alone in three subjects; right hemisphere alone in one subject). The estimated source of the MMNm was stronger in the left than in the right auditory cortex, and the sources were located more posteriorly in the left than in the right auditory cortex. These findings are consistent with the results obtained in Finnish subjects [R. Näätänen et al., Nature, 385 (1997) 432-434; T. Rinne et al., Analysis of speech sounds is left-hemisphere predominant at 100-150 ms after sound onset, Neuroreport, 10 (1999) 1113-1117] and English subjects [K. Alho et al., Hemispheric lateralization in preattentive processing of speech sounds, Neurosci. Lett., 258 (1998) 9-12]. Instead of the P1m observed in Finnish [M. Tervaniemi et al., Functional specialization of the human auditory cortex in processing phonetic and musical sounds: a magnetoencephalographic (MEG) study, Neuroimage, 9 (1999) 330-336] and English [K. Alho et al., Neurosci. Lett., 258 (1998) 9-12] subjects…

  3. Beyond Hemispheric Dominance: Brain Regions Underlying the Joint Lateralization of Language and Arithmetic to the Left Hemisphere

    Science.gov (United States)

    Pinel, Philippe; Dehaene, Stanislas

    2010-01-01

    Language and arithmetic are both lateralized to the left hemisphere in the majority of right-handed adults. Yet, does this similar lateralization reflect a single overall constraint of brain organization, such as an overall "dominance" of the left hemisphere for all linguistic and symbolic operations? Is it related to the lateralization of specific…

  4. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    Science.gov (United States)

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  5. Language in individuals with left hemisphere tumors: Is spontaneous speech analysis comparable to formal testing?

    Science.gov (United States)

    Rofes, Adrià; Talacchi, Andrea; Santini, Barbara; Pinna, Giampietro; Nickels, Lyndsey; Bastiaanse, Roelien; Miceli, Gabriele

    2018-01-31

    The relationship between spontaneous speech and formal language testing in people with brain tumors (gliomas) has rarely been studied. In clinical practice, formal testing is typically used, while spontaneous speech is less often evaluated quantitatively. However, spontaneous speech is quicker to sample and may be less prone to test/retest effects, making it a potential candidate for assessing language impairments when there is restricted time or when the patient is unable to undertake prolonged testing. To assess whether quantitative spontaneous speech analysis and formal testing detect comparable language impairments in people with gliomas, we addressed (a) whether both measures detected comparable language impairments in our patient sample; and (b) which language levels, assessment times, and spontaneous speech variables were more often impaired in this subject group. Five people with left perisylvian gliomas performed a spontaneous speech task and a formal language assessment. Tests were administered before surgery, within a week after surgery, and seven months after surgery. Performance on spontaneous speech was compared with that of 15 healthy speakers. Language impairments were detected more often with both measures than with either measure independently. Lexical-semantic impairments were more common than phonological and grammatical impairments, and performance was equally impaired across assessment time points. Incomplete sentences and phonological paraphasias were the most common error types. In our sample, both spontaneous speech analysis and formal testing detected comparable language impairments. Currently, we suggest that formal testing remains overall the better option, except for cases in which there are restrictions on testing time or the patient is too tired to undergo formal testing. In these cases, spontaneous speech may provide a viable alternative, particularly if automated analysis of spontaneous speech becomes more readily available.

  6. Music and Stroke Rehabilitation: A Narrative Synthesis of the Music-Based Treatments used to Rehabilitate Disorders of Speech and Language following Left-Hemispheric Stroke

    OpenAIRE

    Kevin Draper

    2016-01-01

    Stroke is a leading cause of long-term disability. A stroke can damage areas of the brain associated with communication, resulting in speech and language disorders. Such disorders are frequently acquired impairments from left-hemispheric stroke. Music-based treatments have been implemented, and researched in practice, for the past thirty years; however, the number of published reports reviewing these treatments is limited. This paper uses the four elements of the narrative synthesis framework...

  7. Caffeine improves left hemisphere processing of positive words.

    Science.gov (United States)

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

  8. Caffeine improves left hemisphere processing of positive words.

    Directory of Open Access Journals (Sweden)

    Lars Kuchinke

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

  9. Apraxia and spatial inattention dissociate in left hemisphere stroke.

    Science.gov (United States)

    Timpert, David C; Weiss, Peter H; Vossel, Simone; Dovern, Anna; Fink, Gereon R

    2015-10-01

    Theories of lateralized cognitive functions propose a dominance of the left hemisphere for motor control and of the right hemisphere for spatial attention. Accordingly, spatial attention deficits (e.g., neglect) are more frequently observed after right-hemispheric stroke, whereas apraxia is a common consequence of left-hemispheric stroke. Clinical reports of spatial attentional deficits after left hemisphere (LH) stroke also exist, but are often neglected. By applying parallel analysis (PA) and voxel-based lesion-symptom mapping (VLSM) to data from a comprehensive neuropsychological assessment of 74 LH stroke patients, we here systematically investigate the relationship between spatial inattention and apraxia and their neural bases. PA revealed that apraxic (and language comprehension) deficits loaded on one common component, while deficits in attention tests were explained by another independent component. Statistical lesion analyses with the individual component scores showed that apraxic (and language comprehension) deficits were significantly associated with lesions of the left superior longitudinal fascicle (SLF). Data suggest that in LH stroke spatial attention deficits dissociate from apraxic (and language comprehension) deficits. These findings contribute to models of lateralised cognitive functions in the human brain. Moreover, our findings strongly suggest that LH stroke patients should be assessed systematically for spatial attention deficits so that these can be included in their rehabilitation regime. Copyright © 2015 Elsevier Ltd. All rights reserved.
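
The parallel analysis (PA) step used in this study to separate apraxia/language and attention components can be illustrated with a short sketch. This is not the authors' code: the simulated scores and the function name are invented, and the retention rule shown (keep components whose eigenvalues exceed the mean eigenvalues of same-shaped random data) is a generic implementation of Horn's parallel analysis.

```python
import numpy as np

def parallel_analysis(data, n_iter=200, seed=0):
    """Horn's parallel analysis: count components whose observed eigenvalues
    exceed the mean eigenvalues of same-shaped random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))

# Simulated test battery: six scores driven by two independent latent
# factors (loosely analogous to the apraxia/language vs. attention
# components above; all numbers are made up).
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 300))
scores = np.column_stack(
    [f1 + 0.3 * rng.standard_normal(300) for _ in range(3)]
    + [f2 + 0.3 * rng.standard_normal(300) for _ in range(3)]
)
print(parallel_analysis(scores))  # retains 2 components
```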

  10. A case of expressive-vocal amusia in a right-handed patient with left hemispheric cerebral infarction.

    Science.gov (United States)

    Uetsuki, Shizuka; Kinoshita, Hiroshi; Takahashi, Ryuichi; Obata, Satoshi; Kakigi, Tatsuya; Wada, Yoshiko; Yokoyama, Kazumasa

    2016-03-01

    A 53-year-old right-handed woman had an extensive lesion in the left hemisphere due to an infarction caused by vasospasm secondary to subarachnoid bleeding. She exhibited persistent expressive-vocal amusia with no symptoms of aphasia. Evaluation of the patient's musical competence using the Montreal Battery for Evaluation of Amusia, rhythm reproduction tests, acoustic analysis of pitch upon singing familiar music, Japanese standard language tests, and other detailed clinical examinations revealed that her amusia was predominantly related to pitch production. The intactness of her speech provided strong evidence that the right hemisphere played a major role in her linguistic processing. Data from functional magnetic resonance imaging while she was singing a familiar song, singing a scale, and reciting lyrics indicated that perilesional residual activation in the left hemisphere was associated with poor pitch production, while right hemispheric activation was involved in linguistic processing. The localization of the infarct anterior to the left Sylvian fissure might be related to the predominance of deficits in the expressive aspects of the patient's singing. Compromised motor programming for producing a single tone may have made a major contribution to her poor singing. Imperfect auditory feedback due to borderline perceptual ability or improper audio-motor associations might also have played a role. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Right hemisphere grey matter structure and language outcomes in chronic left hemisphere stroke

    Science.gov (United States)

    Xing, Shihui; Lacey, Elizabeth H.; Skipper-Kallal, Laura M.; Jiang, Xiong; Harris-Love, Michelle L.; Zeng, Jinsheng

    2016-01-01

    The neural mechanisms underlying recovery of language after left hemisphere stroke remain elusive. Although older evidence suggested that right hemisphere language homologues compensate for damage in left hemisphere language areas, the current prevailing theory suggests that right hemisphere engagement is ineffective or even maladaptive. Using a novel combination of support vector regression-based lesion-symptom mapping and voxel-based morphometry, we aimed to determine whether local grey matter volume in the right hemisphere independently contributes to aphasia outcomes after chronic left hemisphere stroke. Thirty-two left hemisphere stroke survivors with aphasia underwent language assessment with the Western Aphasia Battery-Revised and tests of other cognitive domains. High-resolution T1-weighted images were obtained in aphasia patients and 30 demographically matched healthy controls. Support vector regression-based multivariate lesion-symptom mapping was used to identify critical language areas in the left hemisphere and then to quantify each stroke survivor’s lesion burden in these areas. After controlling for these direct effects of the stroke on language, voxel-based morphometry was then used to determine whether local grey matter volumes in the right hemisphere explained additional variance in language outcomes. In brain areas in which grey matter volumes related to language outcomes, we then compared grey matter volumes in patients and healthy controls to assess post-stroke plasticity. Lesion–symptom mapping showed that specific left hemisphere regions related to different language abilities. After controlling for lesion burden in these areas, lesion size, and demographic factors, grey matter volumes in parts of the right temporoparietal cortex positively related to spontaneous speech, naming, and repetition scores. Examining whether domain general cognitive functions might explain these relationships, partial correlations demonstrated that grey matter

  12. Why Are the Right and Left Hemisphere Conceptual Representations Different?

    Directory of Open Access Journals (Sweden)

    Guido Gainotti

    2014-01-01

    The present survey develops a previous position paper, in which I suggested that the multimodal semantic impairment observed in advanced stages of semantic dementia is due to the joint disruption of pictorial and verbal representations, subtended by the right and left anterior temporal lobes, rather than to the loss of a unitary, amodal semantic system. The main goals of the present review are (a) to survey a larger set of data, in order to confirm the differences in conceptual representations at the level of the right and left hemispheres, (b) to examine if language-mediated information plays a greater role in left hemisphere semantic knowledge than sensory-motor information in right hemisphere conceptual knowledge, and (c) to discuss the models that could explain both the differences in conceptual representations at the hemispheric level and the prevalence of the left hemisphere language-mediated semantic knowledge over the right hemisphere perceptually based conceptual representations.

  13. Right Hemisphere and Left Hemisphere: Pedagogical Implications for CSL Reading.

    Science.gov (United States)

    Mickel, Stanley L.

    Students can be taught to read Chinese more efficiently and accurately by using the specific capabilities of the right and left hemispheres of the brain. The right hemisphere is the site of image and pattern recognition, and students can be taught to use those capacities to process individual characters efficiently by watching for the element of…

  14. Right-ear precedence and vocal emotion contagion: The role of the left hemisphere.

    Science.gov (United States)

    Schepman, Astrid; Rodway, Paul; Cornmell, Louise; Smith, Bethany; de Sa, Sabrina Lauren; Borwick, Ciara; Belfon-Thompson, Elisha

    2018-05-01

    Much evidence suggests that the processing of emotions is lateralized to the right hemisphere of the brain. However, under some circumstances the left hemisphere might play a role, particularly for positive emotions and emotional experiences. We explored whether emotion contagion was right-lateralized, lateralized in a valence-specific manner, or potentially left-lateralized. In two experiments, right-handed female listeners rated to what extent emotionally intoned pseudo-sentences evoked target emotions in them. These sound stimuli had a 7 ms ear lead in the left or right channel, leading to stronger stimulation of the contralateral hemisphere. In both experiments, the results revealed that right-ear-lead stimuli received subtly but significantly higher evocation scores, suggesting a left hemisphere dominance for emotion contagion. A control experiment using an emotion identification task showed no effect of ear lead. The findings are discussed in relation to prior findings that have linked the processing of emotional prosody to left-hemisphere brain regions that regulate emotions, control orofacial musculature, are involved in affective empathy, or have an affinity for processing emotions socially. Future work is needed to eliminate alternative interpretations and understand the mechanisms involved. Our novel binaural asynchrony method may be useful in future work on auditory laterality.
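
The 7 ms ear-lead manipulation is simple to reproduce in outline. The sketch below is illustrative only: the tone stimulus, sampling rate, and function name are assumptions, not the authors' materials. It builds a stereo signal whose right channel leads the left by a configurable interval, so the contralateral left hemisphere receives the input first.

```python
import numpy as np

def ear_lead(mono, fs, lead_ms=7.0, lead_ear="right"):
    """Return an (n, 2) stereo array in which one ear's channel leads the
    other by lead_ms milliseconds (the opposite channel is zero-padded)."""
    delay = int(round(fs * lead_ms / 1000.0))
    leading = np.concatenate([mono, np.zeros(delay)])
    lagging = np.concatenate([np.zeros(delay), mono])
    left, right = (lagging, leading) if lead_ear == "right" else (leading, lagging)
    return np.column_stack([left, right])

fs = 44100
t = np.arange(int(0.5 * fs)) / fs
tone = np.sin(2 * np.pi * 220 * t)         # stand-in for a pseudo-sentence
stereo = ear_lead(tone, fs, 7.0, "right")  # right ear leads -> left hemisphere
# A 7 ms lead at 44.1 kHz corresponds to int(round(44100 * 0.007)) = 309 samples.
```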

  15. Developmental dyslexia: dysfunction of a left hemisphere reading network

    Directory of Open Access Journals (Sweden)

    Fabio eRichlan

    2012-05-01

    This mini-review summarizes and integrates findings from recent meta-analyses and original neuroimaging studies on functional brain abnormalities in dyslexic readers. Surprisingly, there is little empirical support for the standard neuroanatomical model of developmental dyslexia, which localizes the primary phonological decoding deficit in left temporo-parietal regions. Rather, recent evidence points to a dysfunction of a left hemisphere reading network, which includes occipito-temporal, inferior frontal, and inferior parietal regions.

  16. Effects of hemisphere speech dominance and seizure focus on patterns of behavioral response errors for three types of stimuli.

    Science.gov (United States)

    Rausch, R; MacDonald, K

    1997-03-01

    We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.

  17. Religion, hate speech, and non-domination

    OpenAIRE

    Bonotti, Matteo

    2017-01-01

    In this paper I argue that one way of explaining what is wrong with hate speech is by critically assessing what kind of freedom free speech involves and, relatedly, what kind of freedom hate speech undermines. More specifically, I argue that the main arguments for freedom of speech (e.g. from truth, from autonomy, and from democracy) rely on a “positive” conception of freedom intended as autonomy and self-mastery (Berlin, 2006), and can only partially help us to understand what is wrong with ...

  18. [Difficulties in face identification after lesion in the left hemisphere].

    Science.gov (United States)

    Verstichel, P; Chia, L

    1999-11-01

    An 82-year-old right-handed man, without any intellectual impairment, suffered an acute neurological deficit consisting of letter-by-letter reading, right superior quadrant hemianopia with achromatopsia in the lower quadrant, and anomia. Cerebral MRI showed an infarct involving the ventral structures of the left hemisphere, sparing the splenium of the corpus callosum and the thalamus. Neuropsychological examination revealed that the patient easily identified the objects, animals and famous places he could not name: his comments attested to normal visual recognition. Conversely, when he was presented with famous faces, he always had a strong feeling of familiarity but could not provide accurate information about the corresponding individual. Biographic information about personalities was not impaired in the semantic-biographic store, because it could be accessed from the names. Activation of face recognition units (where the visual description provided by structural encoding is compared with the stored descriptions of familiar faces) was effective, since the patient could distinguish famous faces from unknown ones. In a modular-sequential model of face recognition, this deficit is interpreted as a disconnection between face recognition units and person identity nodes (which are considered to contain semantic-biographic information about individuals). This kind of disturbance differs from classic prosopagnosia, in which, characteristically, patients are unable to experience a feeling of familiarity when viewing famous faces and to categorize faces as famous or unknown. The right hemisphere has a preponderant role in the structural analysis of faces and in the activation of face recognition units; the integrity of this hemisphere in this patient could explain the preservation of these two steps of processing. Left-hemisphere-specific function in facial recognition enabled access to the semantic-biographic store in a conscious, verbal and…

  19. Othello syndrome in a patient with two left hemispheric tumors

    Directory of Open Access Journals (Sweden)

    Po-Kuan Yeh

    2016-01-01

    We report a case of a patient with Othello syndrome caused by two left hemispheric tumors. This 50-year-old female had experienced seizures for 10 years and developed manic-like symptoms, delusions of jealousy, persecution and being watched, auditory hallucinations, irritable mood, and violent and disorganized behavior for the past 3 years. Brain imaging studies revealed two left frontal tumors, the larger of which was causing a mass effect. The delusions of jealousy in Othello syndrome resolved after removing the larger tumor, and the other psychiatric symptoms improved after treatment with psychotropic medications. This report aims to raise awareness of Othello syndrome related to disruptions in cortico-subcortical connections in the left orbitofrontal region. Timely surgical treatment may prevent associated psychiatric comorbidities and increase the likelihood of a good outcome.

  20. Phonotactic awareness deficit following left-hemisphere stroke

    Directory of Open Access Journals (Sweden)

    Maryam Ghaleh

    2015-04-01

    Likert-type scale responses were z-transformed and coded as accurate for positive z-values in condition 3 trials and negative z-values in condition 1 trials. Accuracy was analyzed using binomial mixed effects models and z-transformed scale responses were analyzed using linear mixed effects models. For both analyses, the fixed effects of stimulus, trial number, group (patient/control), education, age, response time, phonotactic regularity (1/3), and gender were examined along with all relevant interactions. Random effects for participant and stimuli as well as random slopes were also included. Model fitting was performed in a backward-stepwise iterative fashion, followed by forward fitting of maximal random effects structure. Models were evaluated by model fitness comparisons using Akaike Information Criterion and Bayesian Information Criterion. Accuracy analysis revealed that healthy participants were significantly more accurate than patients [β = 0.47, p<0.001] in Englishness rating. Scale response analysis revealed a significant effect of phonotactic regularity [β = 1.65, p<0.0001] indicating that participants were sensitive to phonotactic regularity differences among non-words. However, the significant interaction of group and phonotactic regularity [β = -0.5, p= 0.02] further demonstrated that, compared to healthy adults, patients were less able to recognize the phonotactic regularity differences between non-words. Results suggest that left-hemisphere lesions cause impaired phonotactic processing and that the left hemisphere might be necessary for phonotactic awareness. These preliminary findings will be followed up by further analyses investigating the interactions between phonotactic processing and participants' scores on other linguistic/cognitive tasks as well as lesion-symptom mapping.
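
The z-transformation and sign-based accuracy coding described above can be sketched as follows. This is a minimal single-participant illustration with invented ratings; the study's binomial and linear mixed effects models and the AIC/BIC model comparisons are not reproduced here.

```python
import numpy as np

def z_transform(ratings):
    """Z-transform Likert ratings (done per participant in the study;
    a single response vector here)."""
    r = np.asarray(ratings, dtype=float)
    return (r - r.mean()) / r.std()

def code_accuracy(z_scores, conditions):
    """Code a trial as accurate (1) when the z-scored 'Englishness' rating
    is positive for condition-3 (phonotactically regular) non-words and
    negative for condition-1 (irregular) non-words."""
    z = np.asarray(z_scores)
    cond = np.asarray(conditions)
    return np.where(cond == 3, z > 0, z < 0).astype(int)

ratings = [5, 2, 4, 1, 3, 2]   # hypothetical 5-point Likert responses
conds   = [3, 1, 3, 1, 3, 1]   # phonotactic-regularity condition per trial
acc = code_accuracy(z_transform(ratings), conds)  # array of 0/1 accuracies
```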

  1. Efficacy of strategy training in left hemisphere stroke patients with apraxia : A randomised clinical trial

    NARCIS (Netherlands)

    Donkervoort, M; Dekker, J; Stehmann-Saris, FC; Deelman, B. G.

    2001-01-01

    The objective of the present study was to determine, in a controlled trial, the efficacy of strategy training in left hemisphere stroke patients with apraxia. A total of 113 left hemisphere stroke patients with apraxia were randomly assigned to two treatment groups: (1) strategy training integrated

  2. Prevalence of apraxia among patients with a first left hemisphere stroke in rehabilitation centres and nursing homes.

    OpenAIRE

    Donkervoort, M.; Dekker, J.; Ende, E. van den; Stehmann-Saris, J.C.; Deelman, B.G.

    2000-01-01

    OBJECTIVE: To investigate the prevalence of apraxia in patients with a first left hemisphere stroke. SUBJECTS: Left hemisphere stroke patients staying at an inpatient care unit of a rehabilitation centre or nursing home and receiving occupational therapy (n = 600). MEASURES: A short questionnaire on general patient characteristics and stroke-related aspects was completed by occupational therapists for every left hemisphere stroke patient they treated. A diagnosis of apraxia or nonapraxia was ...

  3. Mental Number Line Disruption in a Right-Neglect Patient after a Left-Hemisphere Stroke

    Science.gov (United States)

    Pia, Lorenzo; Corazzini, Luca Latini; Folegatti, Alessia; Gindri, Patrizia; Cauda, Franco

    2009-01-01

    A right-neglect patient with focal left-hemisphere damage to the posterior superior parietal lobe was assessed for numerical knowledge and tested on the bisection of numerical intervals and visual lines. The semantic and verbal knowledge of numbers was preserved, whereas the performance in numerical tasks that strongly emphasize the visuo-spatial…

  4. Left hemisphere EEG coherence in infancy predicts infant declarative pointing and preschool epistemic language.

    Science.gov (United States)

    Kühn-Popp, N; Kristen, S; Paulus, M; Meinhardt, J; Sodian, B

    2016-01-01

    Pointing plays a central role in preverbal communication. While imperative pointing aims at influencing another person's behavior, declarative gestures serve to convey epistemic information and to share interest in an object. Further, the latter are hypothesized to be a precursor ability of epistemic language. So far, little is known about their underlying brain maturation processes. Therefore, the present study investigated the relation between brain maturation processes and the production of imperative and declarative motives as well as epistemic language in N = 32 infants. EEG coherence scores were measured at 14 months, imperative and declarative point production at 15 months and epistemic language at 48 months. Results of correlational analyses suggest distinct behavioral and neural patterns for imperative and declarative pointing, with declarative pointing being associated with the maturation of the left hemisphere. Further, EEG coherence measures of the left hemisphere at 14 months and declarative pointing at 15 months are related to individual differences in epistemic language skills at 48 months, independently of child IQ. In regression analyses, coherence measures of the left hemisphere prove to be the most important predictor of epistemic language skills. Thus, neural processes of the left hemisphere seem particularly relevant to social communication.
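
EEG coherence of the kind reported here is conventionally computed as magnitude-squared coherence between electrode pairs. Below is a minimal sketch, assuming SciPy is available; the sampling rate, the 6 Hz shared component, and the noise level are illustrative choices, not the study's recording parameters.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 250                                        # illustrative EEG sampling rate (Hz)
t = np.arange(10 * fs) / fs                     # 10 s of data
common = np.sin(2 * np.pi * 6 * t)              # shared 6 Hz drive between two sites
x = common + 0.5 * rng.standard_normal(t.size)  # "electrode 1"
y = common + 0.5 * rng.standard_normal(t.size)  # "electrode 2"

# Magnitude-squared coherence: |Pxy|^2 / (Pxx * Pyy), per frequency bin.
f, cxy = signal.coherence(x, y, fs=fs, nperseg=fs)
peak_freq = f[np.argmax(cxy)]                   # coherence peaks at the shared 6 Hz
```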

  5. The course of apraxia and ADL functioning in left hemisphere stroke patients treated in rehabilitation centres and nursing homes.

    NARCIS (Netherlands)

    Donkervoort, M.; Dekker, J.; Deelman, B.

    2006-01-01

    OBJECTIVE: To study the course of apraxia and daily life functioning (ADL) in left hemisphere stroke patients with apraxia. DESIGN: Prospective cohort study. SETTING: Rehabilitation centres and nursing homes. SUBJECTS: One hundred and eight left hemisphere stroke patients with apraxia, hospitalized

  6. Prevalence of apraxia among patients with a first left hemisphere stroke in rehabilitation centres and nursing homes

    NARCIS (Netherlands)

    Donkervoort, M; Dekker, J; van den Ende, E; Stehmann-Saris, J. C.; Deelman, B. G.

    Objective: To investigate the prevalence of apraxia in patients with a first left hemisphere stroke. Subjects: Left hemisphere stroke patients staying at an inpatient care unit of a rehabilitation centre or nursing home and receiving occupational therapy (n = 600). Measures: A short questionnaire on

  7. Prevalence of apraxia among patients with a first left hemisphere stroke in rehabilitation centres and nursing homes.

    NARCIS (Netherlands)

    Donkervoort, M.; Dekker, J.; Ende, E. van den; Stehmann-Saris, J.C.; Deelman, B.G.

    2000-01-01

    OBJECTIVE: To investigate the prevalence of apraxia in patients with a first left hemisphere stroke. SUBJECTS: Left hemisphere stroke patients staying at an inpatient care unit of a rehabilitation centre or nursing home and receiving occupational therapy (n = 600). MEASURES: A short questionnaire on

  8. Prevalence of apraxia among patients with a first left hemisphere stroke in rehabilitation centres and nursing homes.

    Science.gov (United States)

    Donkervoort, M; Dekker, J; van den Ende, E; Stehmann-Saris, J C; Deelman, B G

    2000-04-01

    To investigate the prevalence of apraxia in patients with a first left hemisphere stroke. Left hemisphere stroke patients staying at an inpatient care unit of a rehabilitation centre or nursing home and receiving occupational therapy (n = 600). A short questionnaire on general patient characteristics and stroke-related aspects was completed by occupational therapists for every left hemisphere stroke patient they treated. A diagnosis of apraxia or nonapraxia was made in every patient, on the basis of a set of clinical criteria. The prevalence of apraxia among 492 first left hemisphere stroke patients in rehabilitation centres was 28% (96/338) and in nursing homes 37% (57/154). No relationship was found between the prevalence of apraxia and age, gender or type of stroke (haemorrhage or infarct). This study shows that approximately one-third of left hemisphere stroke patients have apraxia.

  9. Functional characteristics of developmental dyslexia in left-hemispheric posterior brain regions predate reading onset.

    Science.gov (United States)

    Raschle, Nora Maria; Zuk, Jennifer; Gaab, Nadine

    2012-02-07

    Individuals with developmental dyslexia (DD) show a disruption in posterior left-hemispheric neural networks during phonological processing. Additionally, compensatory mechanisms in children and adults with DD have been located within frontal brain areas. However, it remains unclear when and how differences in posterior left-hemispheric networks manifest and whether compensatory mechanisms have already started to develop in the prereading brain. Here we investigate functional networks during phonological processing in 36 prereading children with a familial risk for DD (n = 18, average age = 66.50 mo) compared with age- and IQ-matched controls (n = 18; average age = 65.61 mo). Functional neuroimaging results reveal reduced activation in prereading children with a family history of DD (FHD(+)), compared with those without (FHD(-)), in bilateral occipitotemporal and left temporoparietal brain regions. This finding corresponds to previously identified hypoactivations in left hemispheric posterior brain regions for school-aged children and adults with a diagnosis of DD. Furthermore, left occipitotemporal and temporoparietal brain activity correlates positively with prereading skills in both groups. Our results suggest that differences in neural correlates of phonological processing in individuals with DD are not a result of reading failure, but are present before literacy acquisition starts. Additionally, no hyperactivation in frontal brain regions was observed, suggesting that compensatory mechanisms for reading failure are not yet present. Future longitudinal studies are needed to determine whether the identified differences may serve as neural premarkers for the early identification of children at risk for DD.

  10. Increased left hemisphere impairment in high-functioning autism: a tract based spatial statistics study.

    Science.gov (United States)

    Perkins, Thomas John; Stokes, Mark Andrew; McGillivray, Jane Anne; Mussap, Alexander Julien; Cox, Ivanna Anne; Maller, Jerome Joseph; Bittar, Richard Garth

    2014-11-30

    There is evidence emerging from Diffusion Tensor Imaging (DTI) research that autism spectrum disorders (ASD) are associated with greater impairment in the left hemisphere. Although this has been quantified with volumetric region of interest analyses, it has yet to be tested with white matter (WM) integrity analysis. In the present study, tract based spatial statistics was used to contrast white matter integrity of 12 participants with high-functioning autism or Asperger's syndrome (HFA/AS) with 12 typically developing individuals. Fractional Anisotropy (FA) was examined, in addition to axial, radial and mean diffusivity (AD, RD and MD). In the left hemisphere, participants with HFA/AS demonstrated significantly reduced FA in predominantly thalamic and fronto-parietal pathways and increased RD. Symmetry analyses confirmed that in the HFA/AS group, WM disturbance was significantly greater in the left compared to right hemisphere. These findings contribute to a growing body of literature suggestive of reduced FA in ASD, and provide preliminary evidence for RD impairments in the left hemisphere. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
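
The scalar measures compared in this record (FA, AD, RD, MD) are all simple functions of the diffusion tensor's three eigenvalues. A minimal sketch with illustrative eigenvalues (the values below are assumptions for a typical coherent white-matter voxel, not the study's data):

```python
import numpy as np

def dti_scalars(evals):
    """Standard DTI scalars from sorted tensor eigenvalues (l1 >= l2 >= l3)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0        # mean diffusivity
    ad = l1                          # axial diffusivity (along the principal axis)
    rd = (l2 + l3) / 2.0             # radial diffusivity (perpendicular to it)
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))   # fractional anisotropy in [0, 1]
    return fa, md, ad, rd

# Illustrative eigenvalues (units of 1e-3 mm^2/s) for a coherent fiber bundle
fa, md, ad, rd = dti_scalars((1.7, 0.3, 0.3))
print(f"FA={fa:.2f}  MD={md:.2f}  AD={ad:.2f}  RD={rd:.2f}")
```

The pattern reported here for the left hemisphere, reduced FA with increased RD, corresponds to the two smaller eigenvalues rising toward the largest one.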

  11. The course of apraxia and ADL functioning in left hemisphere stroke patients treated in rehabilitation centres and nursing homes.

    OpenAIRE

    Donkervoort, M.; Dekker, J.; Deelman, B.

    2006-01-01

    OBJECTIVE: To study the course of apraxia and daily life functioning (ADL) in left hemisphere stroke patients with apraxia. DESIGN: Prospective cohort study. SETTING: Rehabilitation centres and nursing homes. SUBJECTS: One hundred and eight left hemisphere stroke patients with apraxia, hospitalized in rehabilitation centres and nursing homes. MEASURES: ADL-observations, Barthel ADL Index, Apraxia Test, Motricity Index. RESULTS: During the study period of 20 weeks, patients showed small improv...

  12. Cognitive alterations in motor imagery process after left hemispheric ischemic stroke.

    Directory of Open Access Journals (Sweden)

    Jing Yan

    BACKGROUND: Motor imagery training is a promising rehabilitation strategy for stroke patients. However, few studies have focused on the neural mechanisms in the time course of its cognitive process. This study investigated the cognitive alterations after left hemispheric ischemic stroke during a motor imagery task. METHODOLOGY/PRINCIPAL FINDINGS: Eleven patients with ischemic stroke in the left hemisphere and eleven age-matched control subjects participated in a mental rotation task (MRT) of hand pictures. Behavioral performance, event-related potentials (ERP) and event-related (de)synchronization (ERD/ERS) in the beta band were analyzed to investigate cortical activation. We found that: (1) the response time increased with orientation angle in both groups, called the "angle effect"; however, stroke patients' responses were impaired, with significantly longer response times and lower accuracy rates; (2) in the early visual perceptual cognitive process, stroke patients showed hypo-activation in frontal and central brain areas in terms of both P200 and ERD; (3) during the mental rotation process, P300 amplitude in control subjects decreased as the angle increased, called the "amplitude modulation effect", which was not observed in stroke patients. Spatially, patients showed significant lateralization of P300, with activation only in the contralesional (right) parietal cortex, while control subjects showed P300 in both parietal lobes. Stroke patients also showed an overall cortical hypo-activation of ERD during this sub-stage; (4) in the response sub-stage, control subjects showed higher ERD values with more activated cortical areas, particularly in the right hemisphere, as the angle increased, called the "angle effect", which was not observed in stroke patients. In addition, stroke patients showed significantly lower ERD for affected-hand (right) responses than for unaffected-hand responses.
CONCLUSIONS/SIGNIFICANCE: Cortical activation was altered differently in each cognitive sub-stage of motor imagery after
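
ERD/ERS values of the kind analyzed above are conventionally computed as band power in a post-stimulus interval expressed as percent change from a pre-stimulus reference interval. A minimal sketch (the band and power values below are illustrative assumptions, not the study's data):

```python
def erd_percent(power_activity: float, power_baseline: float) -> float:
    """Event-related (de)synchronization as percent change from baseline.
    Negative = ERD (power decrease, i.e. cortical activation in this paradigm);
    positive = ERS (power increase)."""
    return (power_activity - power_baseline) / power_baseline * 100.0

# Illustrative beta-band (13-30 Hz) power, averaged over trials and electrodes
baseline_power = 4.0   # pre-stimulus reference interval
task_power = 3.0       # during the mental rotation sub-stage
print(f"beta ERD: {erd_percent(task_power, baseline_power):.0f}%")  # prints: beta ERD: -25%
```

Lower (less negative) ERD in patients, as reported for the affected hand, thus means a smaller task-related power decrease relative to baseline.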

  13. Lesion characteristics driving right-hemispheric language reorganization in congenital left-hemispheric brain damage.

    Science.gov (United States)

    Lidzba, Karen; de Haan, Bianca; Wilke, Marko; Krägeloh-Mann, Ingeborg; Staudt, Martin

    2017-10-01

    Pre- or perinatally acquired ("congenital") left-hemispheric brain lesions can be compensated for by reorganizing language into homotopic brain regions in the right hemisphere. Language comprehension may be hemispherically dissociated from language production. We investigated the lesion characteristics driving inter-hemispheric reorganization of language comprehension and language production in 19 patients (7-32 years; eight females) with congenital left-hemispheric brain lesions (periventricular lesions [n=11] and middle cerebral artery infarctions [n=8]) by fMRI. 16/17 patients demonstrated reorganized language production, while 7/19 patients had reorganized language comprehension. Lesions to the insular cortex and the temporo-parietal junction (predominantly supramarginal gyrus) were significantly more common in patients in whom both language production and comprehension were reorganized. These areas belong to the dorsal stream of the language network, participating in the auditory-motor integration of language. Our data suggest that the integrity of this stream might be crucial for a normal left-lateralized language development. Copyright © 2017. Published by Elsevier Inc.

  14. Reorganization of syntactic processing following left-hemisphere brain damage: does right-hemisphere activity preserve function?

    Science.gov (United States)

    Tyler, Lorraine K; Wright, Paul; Randall, Billi; Marslen-Wilson, William D; Stamatakis, Emmanuel A

    2010-11-01

    The extent to which the human brain shows evidence of functional plasticity across the lifespan has been addressed in the context of pathological brain changes and, more recently, of the changes that take place during healthy ageing. Here we examine the potential for plasticity by asking whether a strongly left-lateralized system can successfully reorganize to the right-hemisphere following left-hemisphere brain damage. To do this, we focus on syntax, a key linguistic function considered to be strongly left-lateralized, combining measures of tissue integrity, neural activation and behavioural performance. In a functional neuroimaging study participants heard spoken sentences that differentially loaded on syntactic and semantic information. While healthy controls activated a left-hemisphere network of correlated activity including Brodmann areas 45/47 and posterior middle temporal gyrus during syntactic processing, patients activated Brodmann areas 45/47 bilaterally and right middle temporal gyrus. However, voxel-based morphometry analyses showed that only tissue integrity in left Brodmann areas 45/47 was correlated with activity and performance; poor tissue integrity in left Brodmann area 45 was associated with reduced functional activity and increased syntactic deficits. Activity in the right-hemisphere was not correlated with damage in the left-hemisphere or with performance. Reduced neural integrity in the left-hemisphere through brain damage or healthy ageing results in increased right-hemisphere activation in homologous regions to those left-hemisphere regions typically involved in the young. However, these regions do not support the same linguistic functions as those in the left-hemisphere and only indirectly contribute to preserved syntactic capacity. This establishes the unique role of the left hemisphere in syntax, a core component in human language.

  15. Left hemisphere structural connectivity abnormality in pediatric hydrocephalus patients following surgery

    Directory of Open Access Journals (Sweden)

    Weihong Yuan

    2016-01-01

    Neuroimaging research in surgically treated pediatric hydrocephalus patients remains challenging due to the artifact caused by programmable shunts. Our previous study demonstrated significant alterations in whole brain white matter structural connectivity, based on diffusion tensor imaging (DTI) and graph theoretical analysis, in children with hydrocephalus prior to surgery or in surgically treated children without programmable shunts. This study seeks to investigate the impact of brain injury on the topological features of the left hemisphere, contralateral to the shunt placement, which avoids the influence of shunt artifacts and makes further group comparisons feasible for children with programmable shunt valves. Three groups of children (34 in the control group, 12 in the 3-month post-surgery group, and 24 in the 12-month post-surgery group; age between 1 and 18 years) were included in the study. The structural connectivity data processing and analysis were performed based on DTI and graph theoretical analysis. Specific procedures were revised to include only left brain imaging data in normalization, parcellation, and fiber counting from DTI tractography. Our results showed that, when compared to controls, children with hydrocephalus in both the 3-month and 12-month post-surgery groups had significantly lower normalized clustering coefficient, lower small-worldness, and higher global efficiency (all p < 0.05, corrected). At a regional level, both patient groups showed significant alteration in one or more regional connectivity measures in a series of brain regions in the left hemisphere (8 and 10 regions in the 3-month and 12-month post-surgery groups, respectively; all p < 0.05, corrected). No significant correlation was found between any of the global or regional measures and the contemporaneous neuropsychological outcomes [the General Adaptive Composite (GAC) from the Adaptive Behavior Assessment System, Second

  16. Hemispheric asymmetries in speech perception: sense, nonsense and modulations.

    Directory of Open Access Journals (Sweden)

    Stuart Rosen

    The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialisation for specific acoustic features in speech, particularly regarding 'rapid temporal processing'. A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences which could be manipulated in spectro-temporal complexity and in whether they were intelligible or not. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) which could be static or varying in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds. Neural activity to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left-dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

  17. Early left-hemispheric dysfunction of face processing in congenital prosopagnosia: an MEG study.

    Directory of Open Access Journals (Sweden)

    Christian Dobel

    BACKGROUND: Congenital prosopagnosia is a severe face perception impairment which is not acquired through a brain lesion and is presumably present from birth. It manifests mostly as an inability to recognise familiar persons. Electrophysiological research has demonstrated the relevance to face processing of a negative deflection peaking around 170 ms, labelled accordingly as N170 in the electroencephalogram (EEG) and M170 in magnetoencephalography (MEG). The M170 was shown to be sensitive to the inversion of faces and to familiarity--two factors that are assumed to be crucial for congenital prosopagnosia. In order to locate the cognitive dysfunction and its neural correlates, we investigated the time course of neural activity in response to these manipulations. METHODOLOGY: Seven individuals with congenital prosopagnosia and seven matched controls participated in the experiment. To explore brain activity with high accuracy in time, we recorded evoked magnetic fields (275-channel whole-head MEG) while participants were looking at faces differing in familiarity (famous vs. unknown) and orientation (upright vs. inverted). The underlying neural sources were estimated by means of the least-squares minimum-norm-estimation (L2-MNE) approach. PRINCIPAL FINDINGS: The behavioural data corroborate earlier findings on impaired configural processing in congenital prosopagnosia. For the M170, the overall results replicated earlier findings, with larger occipito-temporal brain responses to inverted than upright faces, and more right- than left-hemispheric activity. Compared to controls, participants with congenital prosopagnosia displayed a general decrease in brain activity, primarily over left occipitotemporal areas. This attenuation did not interact with familiarity or orientation. CONCLUSIONS: The study substantiates the finding of an early involvement of the left hemisphere in symptoms of prosopagnosia. This might be related to an efficient and overused featural

  18. Testing the language of German cerebral palsy patients with right hemispheric language organization after early left hemispheric damage.

    Science.gov (United States)

    Schwilling, Eleonore; Krägeloh-Mann, Ingeborg; Konietzko, Andreas; Winkler, Susanne; Lidzba, Karen

    2012-02-01

    Language functions are generally represented in the left cerebral hemisphere. After early (prenatally acquired or perinatally acquired) left hemispheric brain damage language functions may be salvaged by reorganization into the right hemisphere. This is different from brain lesions acquired in adulthood which normally lead to aphasia. Right hemispheric reorganized language (RL) is not associated with obvious language deficits. In this pilot study we compared a group of German-speaking patients with left hemispheric brain damage and RL with a group of matched healthy controls. The novel combination of reliable language lateralization as assessed by neuroimaging (functional magnetic resonance imaging) and specific linguistic tasks revealed significant differences between patients with RL and healthy controls in both language comprehension and production. Our results provide evidence for the hypothesis that RL is significantly different from normal left hemispheric language. This knowledge can be used to improve counselling of parents and to develop specific therapeutic approaches.

  19. Reorganization of the Cerebro-Cerebellar Network of Language Production in Patients with Congenital Left-Hemispheric Brain Lesions

    Science.gov (United States)

    Lidzba, K.; Wilke, M.; Staudt, M.; Krageloh-Mann, I.; Grodd, W.

    2008-01-01

    Patients with congenital lesions of the left cerebral hemisphere may reorganize language functions into the right hemisphere. In these patients, language production is represented homotopically to the left-hemispheric language areas. We studied cerebellar activation in five patients with congenital lesions of the left cerebral hemisphere to assess…

  20. Testing the Language of German Cerebral Palsy Patients with Right Hemispheric Language Organization after Early Left Hemispheric Damage

    Science.gov (United States)

    Schwilling, Eleonore; Krageloh-Mann, Ingeborg; Konietzko, Andreas; Winkler, Susanne; Lidzba, Karen

    2012-01-01

    Language functions are generally represented in the left cerebral hemisphere. After early (prenatally acquired or perinatally acquired) left hemispheric brain damage language functions may be salvaged by reorganization into the right hemisphere. This is different from brain lesions acquired in adulthood which normally lead to aphasia. Right…

  1. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch.

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A; Larson, Charles R

    2014-02-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Left hemisphere structural connectivity abnormality in pediatric hydrocephalus patients following surgery.

    Science.gov (United States)

    Yuan, Weihong; Meller, Artur; Shimony, Joshua S; Nash, Tiffany; Jones, Blaise V; Holland, Scott K; Altaye, Mekibib; Barnard, Holly; Phillips, Jannel; Powell, Stephanie; McKinstry, Robert C; Limbrick, David D; Rajagopal, Akila; Mangano, Francesco T

    2016-01-01

    Neuroimaging research in surgically treated pediatric hydrocephalus patients remains challenging due to the artifact caused by programmable shunts. Our previous study demonstrated significant alterations in whole brain white matter structural connectivity, based on diffusion tensor imaging (DTI) and graph theoretical analysis, in children with hydrocephalus prior to surgery or in surgically treated children without programmable shunts. This study seeks to investigate the impact of brain injury on the topological features of the left hemisphere, contralateral to the shunt placement, which avoids the influence of shunt artifacts and makes further group comparisons feasible for children with programmable shunt valves. Three groups of children (34 in the control group, 12 in the 3-month post-surgery group, and 24 in the 12-month post-surgery group; age between 1 and 18 years) were included in the study. The structural connectivity data processing and analysis were performed based on DTI and graph theoretical analysis. Specific procedures were revised to include only left brain imaging data in normalization, parcellation, and fiber counting from DTI tractography. Our results showed that, when compared to controls, children with hydrocephalus in both the 3-month and 12-month post-surgery groups had significantly lower normalized clustering coefficient, lower small-worldness, and higher global efficiency (all p < 0.05, corrected), indicating altered network topology in children with hydrocephalus surgically treated with programmable shunts.
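
The clustering coefficient and global efficiency reported in this record are standard measures over the binarized structural network. A stdlib-only sketch on a toy ring-lattice graph (the study's networks come from DTI tractography over left-hemisphere parcels; the node count and edges below are illustrative assumptions):

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Hop distances from src over an unweighted graph (adjacency dict of sets)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def avg_clustering(adj):
    """Mean local clustering coefficient: fraction of connected neighbour pairs."""
    total = 0.0
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k >= 2:
            links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
            total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    n = len(adj)
    acc = 0.0
    for u in adj:
        d = bfs_dist(adj, u)
        acc += sum(1.0 / h for v, h in d.items() if v != u)
    return acc / (n * (n - 1))

# Toy network: 8 nodes in a ring, each linked to its 2 nearest neighbours per side
n = 8
adj = {u: set() for u in range(n)}
for u in range(n):
    for step in (1, 2):
        adj[u].add((u + step) % n)
        adj[(u + step) % n].add(u)

print(f"C = {avg_clustering(adj):.2f}, Eglob = {global_efficiency(adj):.3f}")
```

The normalized clustering coefficient and small-worldness used in the study additionally divide these raw values (and the characteristic path length) by their means over matched random networks, which the sketch above omits.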

  3. Dissociation between Semantic Representations for Motion and Action Verbs: Evidence from Patients with Left Hemisphere Lesions.

    Science.gov (United States)

    Taylor, Lawrence J; Evans, Carys; Greer, Joanna; Senior, Carl; Coventry, Kenny R; Ietswaart, Magdalena

    2017-01-01

    This multiple single case study contrasted left hemisphere stroke patients (N = 6) to healthy age-matched control participants (N = 15) on their understanding of action (e.g., holding, clenching) and motion verbs (e.g., crumbling, flowing). The tasks required participants to correctly identify the matching verb or associated picture. Dissociations on action and motion verb content depending on lesion site were expected. As predicted for verbs containing an action and/or motion content, modified t-tests confirmed selective deficits in processing motion verbs in patients with lesions involving posterior parietal and lateral occipitotemporal cortex. In contrast, deficits in verbs describing motionless actions were found in patients with more anterior lesions sparing posterior parietal and lateral occipitotemporal cortex. These findings support the hypotheses that semantic representations for action and motion are behaviorally and neuro-anatomically dissociable. The findings clarify the differential and critical role of perceptual and motor regions in processing modality-specific semantic knowledge as opposed to a supportive but not necessary role. We contextualize these results within theories from both cognitive psychology and cognitive neuroscience that make claims over the role of sensory and motor information in semantic representation.

  4. Multi-tasking uncovers right spatial neglect and extinction in chronic left-hemisphere stroke patients.

    Science.gov (United States)

    Blini, Elvio; Romeo, Zaira; Spironelli, Chiara; Pitteri, Marco; Meneghello, Francesca; Bonato, Mario; Zorzi, Marco

    2016-11-01

    Unilateral Spatial Neglect, the most dramatic manifestation of contralesional space unawareness, is a highly heterogeneous syndrome. The presence of neglect is related to core spatially lateralized deficits, but its severity is also modulated by several domain-general factors (such as alertness or sustained attention) and by task demands. We previously showed that a computer-based dual-task paradigm exploiting both lateralized and non-lateralized factors (i.e., attentional load/multitasking) better captures this complex scenario and exacerbates deficits for the contralesional space after right hemisphere damage. Here we asked whether multitasking would reveal contralesional spatial disorders in chronic left-hemisphere damaged (LHD) stroke patients, a population in which impaired spatial processing is thought to be uncommon. Ten consecutive LHD patients with no signs of right-sided neglect at standard neuropsychological testing performed a computerized spatial monitoring task with and without concurrent secondary tasks (i.e., multitasking). Severe contralesional (right) space unawareness emerged in most patients under attentional load in both the visual and auditory modalities. Multitasking affected the detection of contralesional stimuli both when presented concurrently with an ipsilesional one (i.e., extinction for bilateral targets) and when presented in isolation (i.e., left neglect for right-sided targets). No spatial bias emerged in a control group of healthy elderly participants, who performed at ceiling, as well as in a second control group composed of patients with Mild Cognitive Impairment. We conclude that the pathological spatial asymmetry in LHD patients cannot be attributed to a global reduction of cognitive resources but it is the consequence of unilateral brain damage. Clinical and theoretical implications of the load-dependent lack of awareness for contralesional hemispace following LHD are discussed. Copyright © 2016. Published by Elsevier Ltd.

  5. Dissociations of action means and outcome processing in left-hemisphere stroke.

    Science.gov (United States)

    Kalénine, Solène; Shapiro, Allison D; Buxbaum, Laurel J

    2013-06-01

    Previous evidence suggests that distinct fronto-parietal regions may be involved in representing action kinematics (means) and action results (outcome) during action observation. However, the evidence is contradictory with respect to the precise regions that are critical for each type of representation. Additionally unknown is the degree to which ability to detect action means and outcome during observation is related to action production performance. We used a behavioral task to evaluate the ability of healthy and left-hemisphere stroke participants to detect differences between pairs of videos that dissociated object-related action means (e.g., wiping with circular or straight movement) and/or outcome (e.g., applying or removing detergent). We expected that deficits in detecting action means would be associated with spatiomotor gesture production deficits, whereas deficits in detecting action outcome would predict impairments in complex naturalistic action. We also hypothesized a posterior to anterior gradient in the regions critical for each type of representation, disproportionately affecting means and outcome encoding, respectively. Results indicated that outcome--but not means--detection predicted naturalistic action performance in stroke participants. Regression and voxel lesion-symptom mapping analyses of lesion data revealed that means--but not outcome--coding relies on the integrity of the left inferior parietal lobe, whereas no selective critical brain region could be identified for outcome detection. Thus, means and outcome representations are dissociable at both the behavioral and neuroanatomical levels. Furthermore, the data are consistent with a degree of parallelism between action perception and production tasks. Finally, they reinforce the evidence for a critical role of the left inferior parietal lobule in the representation of action means, whereas action outcome may rely on a more distributed neural circuit. Copyright © 2013 Elsevier Ltd. All

  6. Hemispheric lateralization of linguistic prosody recognition in comparison to speech and speaker recognition.

    Science.gov (United States)

    Kreitewolf, Jens; Friederici, Angela D; von Kriegstein, Katharina

    2014-11-15

    an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Estradiol levels during the menstrual cycle differentially affect latencies to right and left hemispheres during dichotic listening: an ERP study.

    Science.gov (United States)

    Tillman, Gail D

    2010-02-01

    Many behavioral studies have found high-estrogen phases of the menstrual cycle to be associated with enhanced left-hemisphere processing and low-estrogen phases to be associated with better right-hemisphere processing. This study examined the changing of hemispheric asymmetry during the menstrual cycle by analyzing event-related potential (ERP) data from midline and both hemispheres of 23 women during their performance of dichotic tasks shown to elicit a left-hemisphere response (semantic categorization) and a right-hemisphere response (complex tones). Each woman was tested during her high-estrogen follicular phase and low-estrogen menstrual phase. Salivary assays of estradiol and progesterone were used to confirm cycle phase. Analyses of the ERP data revealed that latency for each hemisphere was differentially affected by phase and target side, such that latencies to the left hemisphere and from the right ear were shorter during the high-estrogen phase, and latencies to the right hemisphere and from the left ear were shorter during the low-estrogen phase. These findings supply electrophysiological correlates of the cyclically based interhemispheric differences evinced by behavioral studies. 2009 Elsevier Ltd. All rights reserved.

  8. You may now kiss the bride: Interpretation of social situations by individuals with right or left hemisphere injury.

    Science.gov (United States)

    Baldo, Juliana V; Kacinik, Natalie A; Moncrief, Amber; Beghin, Francesca; Dronkers, Nina F

    2016-01-08

    While left hemisphere damage (LHD) has been clearly shown to cause a range of language impairments, patients with right hemisphere damage (RHD) also exhibit communication deficits, such as difficulties processing prosody, discourse, and social contexts. In the current study, individuals with RHD and LHD were directly compared on their ability to interpret what a character in a cartoon might be saying or thinking, in order to better understand the relative role of the right and left hemisphere in social communication. The cartoon stimuli were manipulated so as to elicit more or less formulaic responses (e.g., a scene of a couple being married by a priest vs. a scene of two people talking, respectively). Participants' responses were scored by blind raters on how appropriately they captured the gist of the social situation, as well as how formulaic and typical their responses were. Results showed that RHD individuals' responses were rated as significantly less appropriate than controls and were also significantly less typical than controls and individuals with LHD. Individuals with RHD produced a numerically lower proportion of formulaic expressions than controls, but this difference was only a trend. Counter to prediction, the pattern of performance across participant groups was not affected by how constrained/formulaic the social situation was. The current findings expand our understanding of the roles that the right and left hemispheres play in social processing and communication and have implications for the potential treatment of social communication deficits in individuals with RHD. Published by Elsevier Ltd.

  9. [Dynamics of functional MRI and speech function in patients after resection of frontal and temporal lobe tumors].

    Science.gov (United States)

    Buklina, S B; Batalov, A I; Smirnov, A S; Poddubskaya, A A; Pitskhelauri, D I; Kobyakov, G L; Zhukov, V Yu; Goryaynov, S A; Kulikov, A S; Ogurtsova, A A; Golanov, A V; Varyukhina, M D; Pronin, I N

    2017-01-01

    There are no studies on application of functional MRI (fMRI) for long-term monitoring of the condition of patients after resection of frontal and temporal lobe tumors. The study purpose was to correlate, using fMRI, reorganization of the speech system and dynamics of speech disorders in patients with left hemisphere gliomas before surgery and in the early and late postoperative periods. A total of 20 patients with left hemisphere gliomas were dynamically monitored using fMRI and comprehensive neuropsychological testing. The tumor was located in the frontal lobe in 12 patients and in the temporal lobe in 8 patients. Fifteen patients underwent primary surgery; 5 patients had repeated surgery. Sixteen patients had WHO Grade II and Grade III gliomas; the others had WHO Grade IV gliomas. Nineteen patients were examined preoperatively; 20 patients were examined at different times after surgery. Speech functions were assessed by Luria's test; the dominant hand was determined using the Annett questionnaire; a family history of left-handedness was investigated. Functional MRI was performed on an HDtx 3.0 T scanner, with the BrainWavePA 2.0 program used for fMRI data processing (Z > 7 for all calculations, p < …) … frontal lobe tumors than in those with temporal lobe tumors. No additional activation foci in the left hemisphere were found at the thresholds used to process fMRI data. Recovery of the speech function, to a certain degree, occurred in all patients, but no clear correlation with fMRI data was found. Complex fMRI and neuropsychological studies in 20 patients after resection of frontal and temporal lobe tumors revealed individual features of speech system reorganization within a one-year follow-up. Probably, activation of right-sided homologues of the speech areas in the presence of left hemisphere tumors depends not only on the severity of speech disorder but also reflects individual involvement of the right hemisphere in enabling speech function. This is confirmed by

  10. The course of apraxia and ADL functioning in left hemisphere stroke patients treated in rehabilitation centres and nursing homes.

    Science.gov (United States)

    Donkervoort, Mireille; Dekker, Joost; Deelman, Betto

    2006-12-01

    To study the course of apraxia and daily life functioning (ADL) in left hemisphere stroke patients with apraxia. Prospective cohort study. Rehabilitation centres and nursing homes. One hundred and eight left hemisphere stroke patients with apraxia, hospitalized in rehabilitation centres and nursing homes. ADL-observations, Barthel ADL Index, Apraxia Test, Motricity Index. During the study period of 20 weeks, patients showed small improvements in apraxia (standardized mean differences of 0.19 and 0.33) and medium-sized improvements in ADL functioning (standardized mean differences from 0.37 to 0.61). About 88% of the patients were still apraxic at week 20. Less improvement in apraxia was observed in initially less severe apraxic patients. Less improvement in ADL functioning was found to be associated with more severe apraxia, a more independent initial ADL score, higher age, impaired motor functioning and longer time between stroke and first assessment. Apraxia in stroke patients is a persistent disorder, which has an adverse influence on ADL recovery.
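A standardized mean difference, as reported above (0.19 to 0.61), expresses change between two assessments in standard-deviation units. A minimal sketch of one common variant of this computation (change divided by baseline SD); the exact formula used in the study is not stated here, and the scores below are hypothetical:

```python
from statistics import mean, stdev

def standardized_mean_difference(baseline, follow_up):
    """Change between assessments in baseline-SD units.

    One common variant: (mean follow-up - mean baseline) / SD of baseline.
    Illustrative only; the study does not specify its exact formula.
    """
    return (mean(follow_up) - mean(baseline)) / stdev(baseline)

# Hypothetical apraxia-test scores for five patients at week 0 and week 20.
week0 = [40.0, 45.0, 50.0, 55.0, 60.0]
week20 = [43.0, 47.0, 52.0, 58.0, 61.0]

print(round(standardized_mean_difference(week0, week20), 2))  # → 0.28, a "small" effect
```

On this toy data the improvement is about 0.28 baseline SDs, comparable in magnitude to the apraxia effect sizes reported in the abstract.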

  11. Sensory-motor transformations for speech occur bilaterally.

    Science.gov (United States)

    Cogan, Gregory B; Thesen, Thomas; Carlson, Chad; Doyle, Werner; Devinsky, Orrin; Pesaran, Bijan

    2014-03-06

    Historically, the study of speech processing has emphasized a strong link between auditory perceptual input and motor production output. A kind of 'parity' is essential, as both perception- and production-based representations must form a unified interface to facilitate access to higher-order language processes such as syntax and semantics, believed to be computed in the dominant, typically left hemisphere. Although various theories have been proposed to unite perception and production, the underlying neural mechanisms are unclear. Early models of speech and language processing proposed that perceptual processing occurred in the left posterior superior temporal gyrus (Wernicke's area) and motor production processes occurred in the left inferior frontal gyrus (Broca's area). Sensory activity was proposed to link to production activity through connecting fibre tracts, forming the left lateralized speech sensory-motor system. Although recent evidence indicates that speech perception occurs bilaterally, prevailing models maintain that the speech sensory-motor system is left lateralized and facilitates the transformation from sensory-based auditory representations to motor-based production representations. However, evidence for the lateralized computation of sensory-motor speech transformations is indirect and primarily comes from stroke patients that have speech repetition deficits (conduction aphasia) and studies using covert speech and haemodynamic functional imaging. Whether the speech sensory-motor system is lateralized, like higher-order language processes, or bilateral, like speech perception, is controversial. Here we use direct neural recordings in subjects performing sensory-motor tasks involving overt speech production to show that sensory-motor transformations occur bilaterally. We demonstrate that electrodes over bilateral inferior frontal, inferior parietal, superior temporal, premotor and somatosensory cortices exhibit robust sensory-motor neural

  12. Asymmetry of temporal auditory T-complex: right ear-left hemisphere advantage in Tb timing in children.

    Science.gov (United States)

    Bruneau, Nicole; Bidet-Caulet, Aurélie; Roux, Sylvie; Bonnet-Brilhault, Frédérique; Gomot, Marie

    2015-02-01

    To investigate brain asymmetry of the temporal auditory evoked potentials (T-complex) in response to monaural stimulation in children compared to adults. Ten children (7 to 9 years) and ten young adults participated in the study. All were right-handed. The auditory stimuli used were tones (1100 Hz, 70 dB SPL, 50 ms duration) delivered monaurally (right, left ear) at four different levels of stimulus onset asynchrony (SOA: 700, 1100, 1500 and 3000 ms). Latency and amplitude of responses were measured at left and right temporal sites according to the ear stimulated. Peaks of the three successive deflections (Na-Ta-Tb) of the T-complex were greater in amplitude and better defined in children than in adults. Amplitude measurements in children indicated that Na culminates over the left hemisphere whatever the ear stimulated, whereas Ta and Tb culminate over the right hemisphere, but for left ear stimuli only. Peak latency displayed different patterns of asymmetry. Na and Ta displayed shorter latencies for contralateral stimulation. The original finding was that Tb peak latency was shortest at the left temporal site for right ear stimulation in children. Amplitude increased and/or peak latency decreased with increasing SOA; however, no interaction effect was found with recording site or with ear stimulated. Our main original result indicates a right ear-left hemisphere timing advantage for the Tb peak in children. The Tb peak would therefore be a good candidate as an electrophysiological marker of ear advantage effects during dichotic stimulation and of functional inter-hemisphere interactions and connectivity in children. Copyright © 2014. Published by Elsevier B.V.

  13. Mapping nouns and finite verbs in left hemisphere tumors: a direct electrical stimulation study.

    Science.gov (United States)

    Rofes, Adrià; Spena, Giannantonio; Talacchi, Andrea; Santini, Barbara; Miozzo, Antonio; Miceli, Gabriele

    2017-04-01

    Neurosurgical mapping studies with nouns and finite verbs are scarce, and subcortical data are nonexistent. We used a new task employing finite verbs in six Italian-speaking patients with gliomas in the left language-dominant hemisphere. Language-relevant positive areas were detected only with nouns in four patients, with both tasks, albeit in distinct cortical areas, in one patient, and only with finite verbs in another patient. Positive areas and types of errors varied across participants. Finite verbs provide complementary information to nouns, and permit more accurate mapping of language production when nouns are unaffected by electrical stimulation.

  14. Changes in regional cerebral blood flow in the right cortex homologous to left language areas are directly affected by left hemispheric damage in aphasic stroke patients: evaluation by Tc-ECD SPECT and novel analytic software.

    Science.gov (United States)

    Uruma, G; Kakuda, W; Abo, M

    2010-03-01

    The objective of this study was to clarify the influence of regional cerebral blood flow (rCBF) changes in language-relevant areas of the dominant hemisphere on rCBF in each region of the non-dominant hemisphere in post-stroke aphasic patients. The study subjects were 27 aphasic patients who suffered their first symptomatic stroke in the left hemisphere. In each subject, we measured rCBF by means of 99mTc-ethylcysteinate dimer single photon emission computed tomography (SPECT). The SPECT images were analyzed by the statistical imaging analysis programs easy Z-score Imaging System (eZIS) and voxel-based stereotactic extraction estimation (vbSEE). Segmented at the Brodmann Area (BA) level, Regions of Interest (ROIs) were set in language-relevant areas bilaterally, and changes in relative rCBF, as average negative and positive Z-values, were computed fully automatically. To assess the relationship between rCBF changes of each ROI in the left and right hemispheres, Spearman ranked correlation analysis and stepwise multiple regression analysis were applied. Globally, a negative and asymmetric influence of rCBF changes in the language-relevant areas of the dominant hemisphere on the right hemisphere was found. The rCBF decrease in left BA22 significantly influenced the rCBF increase in right BA39, BA40, BA44 and BA45. The results suggested that the chronic increase in rCBF in the right language-relevant areas is due, at least in part, to a reduction in the transcallosal inhibitory activity of the language-dominant left hemisphere caused by the stroke lesion itself, and that these relationships are not always symmetric.
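The Z-score maps underlying an eZIS-style analysis rest on a simple per-voxel computation: the patient's normalized rCBF value is compared with the mean and standard deviation of a control database. A hedged sketch of that core step (all values hypothetical; the real pipeline also performs anatomical standardization, global normalization, and smoothing, which are omitted here):

```python
from statistics import mean, stdev

def voxel_z(patient_value, control_values):
    """Z-score of one voxel relative to a control database.

    Sign convention here: negative Z = patient below control mean
    (relative hypoperfusion), positive Z = relative increase.
    """
    return (patient_value - mean(control_values)) / stdev(control_values)

# Hypothetical control rCBF values at a single voxel.
controls = [50.0, 52.0, 48.0, 51.0, 49.0]

print(round(voxel_z(44.0, controls), 2))  # hypoperfused voxel → -3.79
```

Averaging such Z-values over the voxels of a BA-level ROI, separately for negative and positive signs, yields the "average negative and positive Z-values" described in the abstract.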

  15. Hemispheric speech lateralisation in the developing brain is related to motor praxis ability.

    Science.gov (United States)

    Hodgson, Jessica C; Hirst, Rebecca J; Hudson, John M

    2016-12-01

    Commonly displayed functional asymmetries such as hand dominance and hemispheric speech lateralisation are well researched in adults. However, there is debate about when such functions become lateralised in the typically developing brain. This study examined whether patterns of speech laterality and hand dominance were related and whether they varied with age in typically developing children. 148 children aged 3-10 years performed an electronic pegboard task to determine hand dominance; a subset of 38 of these children also underwent functional Transcranial Doppler (fTCD) imaging to derive a lateralisation index (LI) for hemispheric activation during speech production using an animation description paradigm. There was no main effect of age on the speech laterality scores; however, younger children showed a greater difference in performance between their hands on the motor task. Furthermore, this between-hand performance difference significantly interacted with direction of speech laterality, with a smaller between-hand difference relating to increased left hemisphere activation. These data show that both handedness and speech lateralisation appear largely established by age 3, but that atypical cerebral lateralisation is linked to greater performance differences in hand skill, irrespective of age. Results are discussed in terms of the common neural systems underpinning handedness and speech lateralisation. Copyright © 2016. Published by Elsevier Ltd.
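A lateralisation index of the kind derived from fTCD is commonly computed from the difference between left- and right-hemisphere activation. A minimal illustration of one widely used convention, LI = (L - R) / (L + R), where positive values indicate left-lateralisation; the inputs are hypothetical, and published fTCD protocols derive LI from blood-flow velocity differences within a task epoch rather than from two raw numbers:

```python
def lateralisation_index(left_activation, right_activation):
    """(L - R) / (L + R): +1 = fully left-lateralised, -1 = fully right,
    0 = bilateral. Simplified, illustrative form of the fTCD LI."""
    return (left_activation - right_activation) / (left_activation + right_activation)

# Hypothetical task-related activation values for the two hemispheres.
print(round(lateralisation_index(1.2, 0.8), 2))  # → 0.2 (left-lateralised)
```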

  16. THE IMPACT OF LEFT HEMISPHERE STROKE ON FORCE CONTROL WITH FAMILIAR AND NOVEL OBJECTS: NEUROANATOMIC SUBSTRATES AND RELATIONSHIP TO APRAXIA

    Science.gov (United States)

    Dawson, Amanda M.; Buxbaum, Laurel J.; Duff, Susan V.

    2010-01-01

    Fingertip force scaling for lifting objects frequently occurs in anticipation of finger contact. An ongoing question concerns the types of memories that are used to inform predictive control. Object-specific information such as weight may be stored and retrieved when previously encountered objects are lifted again. Alternatively, visual size and shape cues may provide estimates of object density each time objects are encountered. We reasoned that differences in performance with familiar versus novel objects would provide support for the former possibility. Anticipatory force production with both familiar and novel objects was assessed in 6 left hemisphere stroke patients, 2 of whom exhibited deficient actions with familiar objects (ideomotor apraxia; IMA), along with 5 control subjects. In contrast to healthy controls and stroke participants without IMA, participants with IMA displayed poor anticipatory scaling with familiar objects. However, like the other groups, IMA participants learned to differentiate fingertip forces with repeated lifts of both familiar and novel objects. Finally, there was a significant correlation between damage to the inferior parietal and superior and middle temporal lobes, and impaired anticipatory control for familiar objects. These data support the hypotheses that anticipatory control during lifts of familiar objects in IMA patients is based on object-specific memories, and that the ventro-dorsal stream is involved in the long-term storage of internal models used for anticipatory scaling during object manipulation. PMID:19945445

  17. Activity levels in the left hemisphere caudate-fusiform circuit predict how well a second language will be learned.

    Science.gov (United States)

    Tan, Li Hai; Chen, Lin; Yip, Virginia; Chan, Alice H D; Yang, Jing; Gao, Jia-Hong; Siok, Wai Ting

    2011-02-08

    How second language (L2) learning is achieved in the human brain remains one of the fundamental questions of neuroscience and linguistics. Previous neuroimaging studies with bilinguals have consistently shown overlapping cortical organization of the native language (L1) and L2, leading to a prediction that a common neurobiological marker may be responsible for the development of the two languages. Here, by using functional MRI, we show that later skills to read in L2 are predicted by the activity level of the fusiform-caudate circuit in the left hemisphere, which nonetheless is not predictive of the ability to read in the native language. We scanned 10-y-old children while they performed a lexical decision task on L2 (and L1) stimuli. The subjects' written language (reading) skills were behaviorally assessed twice, the first time just before we performed the fMRI scan (time 1 reading) and the second time 1 y later (time 2 reading). A whole-brain based analysis revealed that activity levels in left caudate and left fusiform gyrus correlated with L2 literacy skills at time 1. After controlling for the effects of time 1 reading and nonverbal IQ, or the effect of in-scanner lexical performance, the development in L2 literacy skills (time 2 reading) was also predicted by activity in left caudate and fusiform regions that are thought to mediate language control functions and resolve competition arising from L1 during L2 learning. Our findings suggest that the activity level of left caudate and fusiform regions serves as an important neurobiological marker for predicting accomplishment in reading skills in a new language.

  18. Electrophysiological evidence for the action of a center-surround mechanism on semantic processing in the left hemisphere

    Directory of Open Access Journals (Sweden)

    Diana Deacon

    2013-12-01

    Full Text Available Physiological evidence was sought for a center-surround attentional mechanism (CSM), which has been proposed to assist in the retrieval of weakly activated items from semantic memory. The CSM operates by facilitating strongly related items in the center of the weakly activated area of semantic memory, and inhibiting less strongly related items in its surround. In this study weak activation was created by having subjects acquire the meanings of new words to a recall criterion of only 50%. Subjects who attained this approximate criterion level of performance were subsequently included in a semantic priming task, during which ERPs were recorded. Primes were newly learned rare words, and targets were either synonyms, nonsynonymously related words, or unrelated words. All stimuli were presented to the RVF/LH (right visual field/left hemisphere) or the LVF/RH (left visual field/right hemisphere). Under RVF/LH stimulation the newly learned word primes produced facilitation on the N400 for synonym targets, and inhibition for related targets. No differences were observed under LVF/RH stimulation. The LH thus supports a CSM, whereby a synonym in the center of attention focused on the newly learned word is facilitated, whereas a related word in the surround is inhibited. The data are consistent with the view of this laboratory that semantic memory is subserved by a spreading activation system in the LH. Also consistent with our view, there was no evidence of spreading activation in the RH. The findings are discussed in the context of additional recent theories of semantic memory. Finally, the adult right hemisphere may require more learning than the LH in order to demonstrate evidence of meaning acquisition.

  19. Speech and the right hemisphere.

    Science.gov (United States)

    Critchley, E M

    1991-01-01

    Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension, even more so than of speech production (identifying the voice, its affective components, gestural interpretation and monitoring one's own speech), may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  20. Speech and the Right Hemisphere

    Directory of Open Access Journals (Sweden)

    E. M. R. Critchley

    1991-01-01

    Full Text Available Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension even more so than of speech production—identifying the voice, its affective components, gestural interpretation and monitoring one's own speech—may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  1. Greater freedom of speech on Web 2.0 correlates with dominance of views linking vaccines to autism.

    Science.gov (United States)

    Venkatraman, Anand; Garg, Neetika; Kumar, Nilay

    2015-03-17

    It is suspected that Web 2.0 web sites, with a lot of user-generated content, often support viewpoints that link autism to vaccines. We assessed the prevalence of the views supporting a link between vaccines and autism online by comparing YouTube, Google and Wikipedia with PubMed. Freedom of speech is highest on YouTube and progressively decreases for the others. Support for a link between vaccines and autism is most prominent on YouTube, followed by Google search results. It is far lower on Wikipedia and PubMed. Anti-vaccine activists use scientific arguments, certified physicians and official-sounding titles to gain credibility, while also leaning on celebrity endorsement and personalized stories. Online communities with greater freedom of speech lead to a dominance of anti-vaccine voices. Moderation of content by editors can offer balance between free expression and factual accuracy. Health communicators and medical institutions need to step up their activity on the Internet. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Augmenting Melodic Intonation Therapy with non-invasive brain stimulation to treat impaired left-hemisphere function: two case studies

    Directory of Open Access Journals (Sweden)

    Shahd Al-Janabi

    2014-02-01

    Full Text Available The purpose of this study was to investigate whether or not the right hemisphere can be engaged using Melodic Intonation Therapy (MIT and excitatory repetitive transcranial magnetic stimulation (rTMS to improve language function in people with aphasia. The two participants in this study (GOE and AMC have chronic non-fluent aphasia. A functional Magnetic Resonance Imaging (fMRI task was used to localize the right Broca’s homologue area in the inferior frontal gyrus for rTMS coil placement. The treatment protocol included an rTMS phase, which consisted of 3 treatment sessions that used an excitatory stimulation method known as intermittent theta burst stimulation, and a sham-rTMS phase, which consisted of 3 treatment sessions that used a sham coil. Each treatment session was followed by 40 minutes of MIT. A linguistic battery was administered after each session. Our findings show that one participant, GOE, improved in verbal fluency and the repetition of phrases treated with MIT in combination with TMS. However, AMC showed no evidence of behavioural benefit from this brief treatment trial. Post-treatment neural activity changes were observed for both participants in the left Broca’s area and right Broca’s homologue. These case studies indicate that a combination of rTMS applied to the right Broca’s homologue and MIT has the potential to improve speech and language outcomes for at least some people with post-stroke aphasia.

  3. Augmenting melodic intonation therapy with non-invasive brain stimulation to treat impaired left-hemisphere function: two case studies.

    Science.gov (United States)

    Al-Janabi, Shahd; Nickels, Lyndsey A; Sowman, Paul F; Burianová, Hana; Merrett, Dawn L; Thompson, William F

    2014-01-01

    The purpose of this study was to investigate whether or not the right hemisphere can be engaged using Melodic Intonation Therapy (MIT) and excitatory repetitive transcranial magnetic stimulation (rTMS) to improve language function in people with aphasia. The two participants in this study (GOE and AMC) have chronic non-fluent aphasia. A functional Magnetic Resonance Imaging (fMRI) task was used to localize the right Broca's homolog area in the inferior frontal gyrus for rTMS coil placement. The treatment protocol included an rTMS phase, which consisted of 3 treatment sessions that used an excitatory stimulation method known as intermittent theta burst stimulation, and a sham-rTMS phase, which consisted of 3 treatment sessions that used a sham coil. Each treatment session was followed by 40 min of MIT. A linguistic battery was administered after each session. Our findings show that one participant, GOE, improved in verbal fluency and the repetition of phrases when treated with MIT in combination with TMS. However, AMC showed no evidence of behavioral benefit from this brief treatment trial. Post-treatment neural activity changes were observed for both participants in the left Broca's area and right Broca's homolog. These case studies indicate that a combination of MIT and rTMS applied to the right Broca's homolog has the potential to improve speech and language outcomes for at least some people with post-stroke aphasia.

  4. Processing of basic speech acts following localized brain damage: a new light on the neuroanatomy of language.

    Science.gov (United States)

    Soroker, Nachum; Kasher, Asa; Giora, Rachel; Batori, Gila; Corn, Cecilia; Gil, Mali; Zaidel, Eran

    2005-03-01

    We examined the effect of localized brain lesions on processing of the basic speech acts (BSAs) of question, assertion, request, and command. Both left and right cerebral damage produced significant deficits relative to normal controls, and left brain damaged patients performed worse than patients with right-sided lesions. This finding argues against the common conjecture that the right hemisphere of most right-handers plays a dominant role in natural language pragmatics. In right-hemisphere damaged patients, there was no correlation between location and extent of lesion in perisylvian cortex and performance on BSAs. By contrast, processing of the different BSAs by left hemisphere-damaged patients was strongly affected by perisylvian lesion location, with each BSA showing a distinct pattern of localization. This finding raises the possibility that the classical left perisylvian localization of language functions, as measured by clinical aphasia batteries, partly reflects the localization of the BSAs required to perform these functions.

  5. Automatic segmentation of short association bundles using a new multi-subject atlas of the left hemisphere fronto-parietal brain connections.

    Science.gov (United States)

    Guevara, M; Seguel, D; Roman, C; Duclap, D; Lebois, A; Le Bihan; Mangin, J-F; Poupon, C; Guevara, P

    2015-08-01

    The human brain connection map is far from complete. In particular, the study of the superficial white matter (SWM) remains an unfinished task. Its description is essential for the understanding of human brain function and the study of the pathogenesis associated with it. In this work we developed a method for the automatic creation of a SWM bundle multi-subject atlas. The atlas generation method is based on a cortical parcellation for the extraction of fibers connecting two different gyri. Then, an intra-subject fiber clustering is applied, in order to divide each bundle into sub-bundles with similar shape. After that, a two-step inter-subject fiber clustering is used in order to find the correspondence between the sub-bundles across subjects, fuse similar clusters and discard the outliers. The method was applied to 40 subjects of a high-quality HARDI database, focused on the left hemisphere fronto-parietal and insula brain regions. We obtained an atlas composed of 44 bundles connecting 22 pairs of ROIs. The atlas was then used to automatically segment 39 new subjects from the database.
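The inter-subject fusion step described above can be illustrated with a toy version: each sub-bundle is reduced to a representative centroid, centroids from different subjects that lie within a distance threshold are merged into one atlas bundle, and clusters supported by only one subject are discarded as outliers. The greedy strategy, the point-based distance, and all names here are simplifying assumptions, not the authors' actual algorithm (which uses fiber-to-fiber shape distances):

```python
import math

def fuse_across_subjects(centroids_by_subject, threshold):
    """Greedy fusion sketch: assign each centroid to the first atlas cluster
    containing a point within `threshold`, else start a new cluster; clusters
    supported by a single subject are dropped as outliers."""
    clusters = []  # each cluster: list of (subject, point) pairs
    for subject, points in centroids_by_subject.items():
        for p in points:
            for c in clusters:
                if any(math.dist(p, q) < threshold for _, q in c):
                    c.append((subject, p))
                    break
            else:
                clusters.append([(subject, p)])
    # keep only clusters seen in more than one subject
    return [c for c in clusters if len({s for s, _ in c}) > 1]

# Hypothetical 3-D centroids for two subjects.
data = {
    "s1": [(0.0, 0.0, 0.0), (30.0, 0.0, 0.0)],
    "s2": [(1.0, 0.5, 0.0), (60.0, 0.0, 0.0)],  # second point matches nobody
}
atlas = fuse_across_subjects(data, threshold=5.0)
print(len(atlas))  # → 1: one bundle shared by both subjects survives
```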

  6. Feasibility of the cognitive assessment scale for stroke patients (CASP) vs. MMSE and MoCA in aphasic left hemispheric stroke patients.

    Science.gov (United States)

    Barnay, J-L; Wauquiez, G; Bonnin-Koang, H Y; Anquetil, C; Pérennou, D; Piscicelli, C; Lucas-Pineau, B; Muja, L; le Stunff, E; de Boissezon, X; Terracol, C; Rousseaux, M; Bejot, Y; Binquet, C; Antoine, D; Devilliers, H; Benaim, C

    2014-01-01

    Post-stroke aphasia makes it difficult to assess cognitive deficiencies. We thus developed the CASP, which can be administered without using language. Our objective was to compare the feasibility of the CASP, the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) in aphasic stroke patients. All aphasic patients consecutively admitted to seven French rehabilitation units during a 4-month period after a recent first left hemispheric stroke were assessed with the CASP, MMSE and MoCA. We determined the proportion of patients in whom it was impossible to administer at least one item from these three scales, and compared their administration times. Forty-four patients were included (age 64±15, 26 males). The CASP was impossible to administer in eight of them (18%), compared with 16 for the MMSE (36%, P=0.05) and 13 for the MoCA (30%, P=0.21, NS). It was possible to administer the CASP in all of the patients with expressive aphasia, whereas the MMSE and the MoCA could not be administered. Administration times were longer for the CASP (13±4 min) than for the MMSE (8±3 min, P<10(-6)) and the MoCA (11±5 min, P=0.23, NS). The CASP is more feasible than the MMSE and the MoCA in aphasic stroke patients. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
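The feasibility comparison above (CASP impossible in 8/44 patients vs. 16/44 for the MMSE, P = 0.05) can be approximated with a two-proportion z-test. This is a sketch under simplifying assumptions: the abstract does not state which test the authors used, and since the same patients took all three scales a paired test such as McNemar's would strictly be more appropriate:

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for equal proportions with pooled standard error.

    A rough unpaired approximation of the comparison in the abstract.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi(abs(z)))

# CASP impossible in 8/44 patients vs. MMSE impossible in 16/44 (from the abstract).
p = two_proportion_z_test(8, 44, 16, 44)
print(round(p, 3))  # ≈ 0.056, close to the reported P = 0.05
```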

  7. Reward-system effect and “left hemispheric unbalance”: a comparison between drug addiction and high-BAS healthy subjects on gambling behavior

    Directory of Open Access Journals (Sweden)

    Roberta Finocchiaro

    2015-04-01

    Full Text Available Recent studies show the similarity of reward-related neurocircuitry and behavioral patterns between pathological gamblers and substance-addicted patients. Evidence shows that pathological gambling (PG) and Substance Use Disorders (SUD) are associated with deficits in frontal lobe function, and that these patients show behaviors similar to those of patients with bilateral VMPFC lesions. The present article aimed to compare the results of two studies concerning the relationship between the Behavioral Activation System (BAS) and the hemispheric lateralisation effect that supports gambling behavior in addiction. In the two studies we considered a group of cocaine-addicted (CA) patients and high-BAS healthy subjects who were tested using the Iowa Gambling Task. Metacognitive questionnaires and alpha-band modulation were also considered. It was found that the “left hemisphere unbalance” may be considered a critical marker of dysfunctional decision-making in addictive behaviors (drug addiction and gambling behaviours) and a factor able to explain the tendency to opt in favor of more reward-related conditions.

  8. You talkin' to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech.

    Science.gov (United States)

    McGettigan, Carolyn; Jasmin, Kyle; Eisner, Frank; Agnew, Zarinah K; Josephs, Oliver J; Calder, Andrew J; Jessop, Rosemary; Lawson, Rebecca P; Spielmann, Mona; Scott, Sophie K

    2017-06-01

Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin & Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze - further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Right Ear Advantage of Speech Audiometry in Single-sided Deafness.

    Science.gov (United States)

    Wettstein, Vincent G; Probst, Rudolf

    2018-04-01

Postlingual single-sided deafness (SSD) is defined as normal hearing in one ear and severely impaired hearing in the other ear. A right ear advantage and dominance of the left hemisphere are well-established findings in individuals with normal hearing and speech processing. Therefore, it seems plausible that a right ear advantage would exist in patients with SSD. The audiometric database was searched to identify patients with SSD. Results from the German monosyllabic Freiburg word test and the four-syllabic number test in quiet were evaluated, and results of right-sided SSD were compared with left-sided SSD. Statistical calculations were done with the Mann-Whitney U test. Four hundred and six patients with SSD were identified, 182 with right-sided and 224 with left-sided SSD. The two groups had similar pure-tone thresholds without significant differences. All test parameters of speech audiometry had better values for right ears (SSD left) than for left ears (SSD right), with statistically significant differences across measures (left-ear values: 97.5 ± 4.7%, 93.9 ± 9.1%, and 63.8 ± 11.1 dB SPL). A right ear advantage of speech audiometry was found in patients with SSD in this retrospective study of audiometric test results.
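The group comparisons above use the Mann-Whitney U test. As an illustrative sketch only (synthetic scores; the means, spreads, and group sizes below merely echo numbers quoted in the abstract, not the actual data), a rank-based U test with a normal approximation can be written in plain Python:

```python
import math
import random

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test, normal approximation (no tie correction)."""
    pooled = sorted(x + y)
    # Average 1-based rank for each distinct value (handles ties).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2
        i = j
    rank_sum_x = sum(ranks[v] for v in x)
    n1, n2 = len(x), len(y)
    u = rank_sum_x - n1 * (n1 + 1) / 2       # U statistic for sample x
    mu = n1 * n2 / 2                         # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p from normal tail
    return u, p

random.seed(1)
# Synthetic word-recognition scores (%); illustrative values only, with group
# sizes mirroring the study (224 left-sided vs. 182 right-sided SSD).
right_ears = [random.gauss(97.5, 4.7) for _ in range(224)]  # left-sided SSD
left_ears = [random.gauss(93.9, 9.1) for _ in range(182)]   # right-sided SSD
u, p = mann_whitney_u(right_ears, left_ears)
print(f"U = {u:.0f}, two-sided p = {p:.3g}")
```

For real analyses, `scipy.stats.mannwhitneyu` additionally applies tie and continuity corrections.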

  10. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as producing nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial overlap between speech- and non-speech-related activation in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings suggest a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere: to support the production of vocal tract gestures that are not limited to speech processing.

  11. Altered resting-state network connectivity in stroke patients with and without apraxia of speech

    OpenAIRE

    New, Anneliese B.; Robin, Donald A.; Parkinson, Amy L.; Duffy, Joseph R.; McNeil, Malcom R.; Piguet, Olivier; Hornberger, Michael; Price, Cathy J.; Eickhoff, Simon B.; Ballard, Kirrie J.

    2015-01-01

    Motor speech disorders, including apraxia of speech (AOS), account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS), inferior frontal gyrus (IFG), and ventral premotor cortex (PM)) in a group of 32 left hemisphere ...

  12. Continuing Inequity through Neoliberalism: The Conveyance of White Dominance in the Educational Policy Speeches of President Barack Obama

    Science.gov (United States)

    Hairston, Thomas W.

    2013-01-01

    The purpose of this critical discourse analysis is to examine how the political speeches and statements of President Barack Obama knowingly or unknowingly continue practices and policies of White privilege within educational policy and practice by constructing education in a neoliberal frame. With presidents having the ability to communicate…

  13. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the level of the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole

  14. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  15. Infiltration of the basal ganglia by brain tumors is associated with the development of co-dominant language function on fMRI.

    Science.gov (United States)

    Shaw, Katharina; Brennan, Nicole; Woo, Kaitlin; Zhang, Zhigang; Young, Robert; Peck, Kyung K; Holodny, Andrei

    2016-01-01

Studies have shown that some patients with left-hemispheric brain tumors have an increased propensity for developing right-sided language support. However, the precise trigger for establishing co-dominant language function in brain tumor patients remains unknown. We analyzed the MR scans of patients with left-hemispheric tumors and either co-dominant (n=35) or left-hemisphere dominant (n=35) language function on fMRI to investigate anatomical factors influencing hemispheric language dominance. Of eleven neuroanatomical areas evaluated for tumor involvement, the basal ganglia was significantly correlated with co-dominant language function. Patients with language co-dominance performed significantly better on the Boston Naming Test, a clinical measure of aphasia, compared to their left-lateralized counterparts (56.5 versus 36.5, p=0.025). While further studies are needed to elucidate the role of the basal ganglia in establishing co-dominance, our results suggest that reactive co-dominance may afford a behavioral advantage to patients with left-hemispheric tumors. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Use of prosodic cues in the production of idiomatic and literal sentences by individuals with right- and left-hemisphere damage.

    Science.gov (United States)

    Bélanger, Nathalie; Baum, Shari R; Titone, Debra

    2009-07-01

The neural bases of prosody during the production of literal and idiomatic interpretations of literally plausible idioms were investigated. Left- and right-hemisphere-damaged participants and normal controls produced literal and idiomatic versions of idioms (e.g., "He hit the books."). All groups modulated duration to distinguish the two interpretations. LHD patients, however, showed typical speech-timing difficulties. RHD patients did not differ from the normal controls. The results partially support a differential lateralization of prosodic cues in the two cerebral hemispheres [Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963-970]. Furthermore, extended final-word lengthening appears to mark idiomaticity.

  17. Verbal Interactional Dominance and Coordinative Structure of Speech Rhythms of Staff and Clients with an Intellectual Disability

    NARCIS (Netherlands)

    Reuzel, Ellen; Embregts, Petri J. C. M.; Bosman, Anna M. T.; Cox, Ralf F. A.; van Nieuwenhuijzen, Maroesjka; Jahoda, Andrew

    2014-01-01

    Social interactions between staff and clients with an intellectual disability contain synchronized turn-taking patterns. Synchrony can increase rapport and cooperation between individuals. This study investigated whether verbal interactional dominance and balance, an indication of attunement between

  18. TMS produces two dissociable types of speech disruption.

    Science.gov (United States)

    Stewart, L; Walsh, V; Frith, U; Rothwell, J C

    2001-03-01

We aimed to use repetitive transcranial magnetic stimulation (rTMS) to disrupt speech, with the specific objective of dissociating speech disruption according to whether or not it was associated with activation of the mentalis muscle. rTMS was applied over two sites of the right and left hemispheres while subjects counted aloud and recited the days of the week, the months of the year, and nursery rhymes. Analysis of EMG data and videotaped recordings showed that rTMS applied over a posterior site, lateral to the motor hand area of both the right and the left hemisphere, resulted in speech disruption that was accompanied by activation of the mentalis muscle, while rTMS applied over an anterior site on the left but not the right hemisphere resulted in speech disruption that was dissociated from activation of the mentalis muscle. The findings provide a basis for the use of subthreshold stimulation over the extrarolandic speech disruption site in order to probe the functional properties of this area and to test psychological theories of linguistic function. Copyright 2001 Academic Press.

  19. Efficacy of melody-based aphasia therapy may strongly depend on rhythm and conversational speech formulas

    OpenAIRE

    Benjamin Stahl

    2014-01-01

    Left-hemisphere stroke patients suffering from language and speech disorders are often able to sing entire pieces of text fluently. This finding has inspired a number of melody-based rehabilitation programs – most notable among them a treatment known as Melodic Intonation Therapy – as well as two fundamental research questions. When the experimental design focuses on one point in time (cross section), one may determine whether or not singing has an immediate effect on syllable production in p...

  20. Speech Entrainment Compensates for Broca's Area Damage

    Science.gov (United States)

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
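Voxel-wise lesion-symptom mapping (VLSM), as used above, in essence runs an independent statistical test at every voxel, comparing a behavioral score between patients with and without damage at that location. A toy sketch of that idea (synthetic lesion maps and scores; the voxel index, effect size, and use of Welch's t are illustrative assumptions, not the study's actual pipeline):

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic comparing the means of two samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(0)
n_patients, n_voxels = 44, 50
# Toy binary lesion maps (True = voxel damaged) and a fluency-improvement score.
lesions = [[random.random() < 0.3 for _ in range(n_voxels)] for _ in range(n_patients)]
improvement = [random.gauss(10.0, 3.0) for _ in range(n_patients)]
# Build in an effect: damage at voxel 7 (an arbitrary stand-in for a region of
# interest in this toy example) predicts a larger fluency gain.
for i in range(n_patients):
    if lesions[i][7]:
        improvement[i] += 4.0

# Voxel-wise comparison: damaged vs. spared patients at every voxel.
t_map = []
for v in range(n_voxels):
    damaged = [improvement[i] for i in range(n_patients) if lesions[i][v]]
    spared = [improvement[i] for i in range(n_patients) if not lesions[i][v]]
    if len(damaged) > 1 and len(spared) > 1:
        t_map.append(welch_t(damaged, spared))
    else:
        t_map.append(0.0)  # too few patients in one group to test

print(f"t at the effect voxel: {t_map[7]:.2f}")
```

Real VLSM additionally corrects the resulting statistical map for the thousands of tests performed across voxels.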

  1. On the dual and paradoxical role of media: Messengers of the dominant ideology and vehicles of disruptive speech

    Directory of Open Access Journals (Sweden)

    José Rebelo

    2014-11-01

Full Text Available This article aims to evaluate the dual function exercised by traditional media - TV, radio and the press - as a site of ideological production, where the power of communication serves as a method of naturalization, and as a site of confrontation, giving voice to alternative projects. First, the function of ideological production is considered with regard to the national and international media coverage of the financial crisis in Portugal. The role of the media as a site of confrontation is then illuminated by the coverage of protests in Portugal and Brazil. In conclusion, while traditional media convey dominant norms and hierarchies, the pressure exerted through social networks indicates a deviation from this mode, contributing, even if indirectly, to a redefinition of peoples and cultures.

  2. Right-ear advantage drives the link between olivocochlear efferent 'antimasking' and speech-in-noise listening benefits.

    Science.gov (United States)

    Bidelman, Gavin M; Bhagat, Shaum P

    2015-05-27

    The mammalian cochlea receives feedback from the brainstem medial olivocochlear (MOC) efferents, whose putative 'antimasking' function is to adjust cochlear amplification and enhance peripheral signal detection in adverse listening environments. Human studies have been inconsistent in demonstrating a clear connection between this corticofugal system and behavioral speech-in-noise (SIN) listening skills. To elucidate the role of brainstem efferent activity in SIN perception, we measured ear-specific contralateral suppression of transient-evoked otoacoustic emissions (OAEs), a proxy measure of MOC activation linked to auditory learning in noisy environments. We show that suppression of cochlear emissions is stronger with a more basal cochlear bias in the right ear compared with the left ear. Moreover, a strong negative correlation was observed between behavioral SIN performance and right-ear OAE suppression magnitudes, such that lower speech reception thresholds in noise were predicted by larger amounts of MOC-related activity. This brain-behavioral relation was not observed for left ear SIN perception. The rightward bias in contralateral MOC suppression of OAEs, coupled with the stronger association between physiological and perceptual measures, is consistent with left-hemisphere cerebral dominance for speech-language processing. We posit that corticofugal feedback from the left cerebral cortex through descending MOC projections sensitizes the right cochlea to signal-in-noise detection, facilitating figure-ground contrast and improving degraded speech analysis. Our findings demonstrate that SIN listening is at least partly driven by subcortical brain mechanisms; primitive stages of cochlear processing and brainstem MOC modulation of (right) inner ear mechanics play a critical role in dictating SIN understanding.
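The brain-behavior link reported above is a negative correlation between right-ear OAE suppression magnitude and speech-in-noise reception thresholds. A minimal sketch of computing such a correlation on synthetic data (all values below are hypothetical, constructed only to produce a strong negative association):

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(2)
# Hypothetical data: right-ear OAE suppression magnitude (dB) and speech
# reception threshold in noise (dB SNR; lower = better performance).
suppression = [random.uniform(0.5, 3.0) for _ in range(20)]
srt = [2.0 - 1.5 * s + random.gauss(0, 0.5) for s in suppression]
r = pearson_r(suppression, srt)
print(f"r = {r:.2f}")  # strongly negative by construction
```

Larger suppression paired with lower thresholds yields r well below zero, mirroring the direction of the reported relation.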

  3. Functional lateralization of speech processing in adults and children who stutter

    Directory of Open Access Journals (Sweden)

    Yutaka eSato

    2011-04-01

Full Text Available Developmental stuttering is a speech disorder in fluency characterized by repetitions, prolongations and silent blocks, especially in the initial parts of utterances. Although their symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. In order to address this issue, we measured the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers, with near-infrared spectroscopy (NIRS). We used the analysis-resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only /itta/ tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly, and /itta/ and /itta?/ tokens in prosodic contrast blocks. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage in the phonemic contrast over the prosodic contrast condition. These results indicate that the functional lateralization for auditory speech processing is in disarray among those who stutter, even at preschool age. These results shed light on the neural pathophysiology of developmental stuttering.

  4. BUT IS IT SPEECH? MAKING CRITICAL SENSE OF THE DOMINANT CONSTITUTIONAL DISCOURSE ON PORNOGRAPHY, MORALITY AND HARM UNDER THE PERVASIVE INFLUENCE OF UNITED STATES FIRST AMENDMENT JURISPRUDENCE

    Directory of Open Access Journals (Sweden)

    Letetia van der Poll

    2012-08-01

that “non-obscene” sexually explicit material has social value, as do esteemed works of literature and art. Secondly, the court assumes that all individuals have equal access to the means of expression and dissemination of ideas and thus fails to acknowledge substantive (and gendered) structural inequalities. A closer inspection reveals that the Supreme Court’s justification of why freedom of expression is such a fundamental freedom in a constitutional democracy (and the reason that “non-obscene” sexually explicit material consequently enjoys constitutional protection) is highly suspect, both intellectually and philosophically. And yet the South African Constitutional Court has explicitly recognised the same philosophical justification as the basis for free speech and expression. The Constitutional Court has, in fact, both supported and emphasised the idea that freedom of expression stands central to the concepts of democracy and political transformation through participation, and has expressly confirmed the association between freedom of expression and the political rights safeguarded under the Bill of Rights. Moreover, the Constitutional Court has also endorsed the conception of adult gender-specific sexually explicit material as a form of free expression. And yet by embracing a moralistic, libertarian model of free expression, the very ideal of a free, democratic and equal society, one in which women can live secure from the threat of harm, is put at risk. A moralistic, libertarian model is simply not capable of conceptualising sexually explicit material as a possible violation of women’s fundamental interests in equality, dignity and physical integrity. This article has a two-fold objective. The first is to critically examine the dominant discourse on adult gender-specific sexually explicit material emanating from United States jurisprudence (and its resonance in South African constitutional thought), and secondly, to assess whether this particular

  5. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate.

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-06-01

    Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.

  6. Evidence for Plasticity in White Matter Tracts of Chronic Aphasic Patients Undergoing Intense Intonation-based Speech Therapy

    Science.gov (United States)

    Schlaug, Gottfried; Marchina, Sarah; Norton, Andrea

    2009-01-01

    Recovery from aphasia can be achieved through recruitment of either peri-lesional brain regions in the affected hemisphere or homologous language regions in the non-lesional hemisphere. For patients with large left-hemisphere lesions, recovery through the right hemisphere may be the only possible path. The right hemisphere regions most likely to play a role in this recovery process are the superior temporal lobe (important for auditory feedback control), premotor regions/posterior inferior frontal gyrus (important for planning and sequencing of motor actions and for auditory-motor mapping) and the primary motor cortex (important for execution of vocal motor actions). These regions are connected reciprocally via a major fiber tract called the arcuate fasciculus (AF), but this tract is usually not as well developed in the non-dominant right hemisphere. We tested whether an intonation-based speech therapy (i.e., Melodic Intonation Therapy) which is typically administered in an intense fashion with 75–80 daily therapy sessions, would lead to changes in white matter tracts, particularly the AF. Using diffusion tensor imaging (DTI), we found a significant increase in the number of AF fibers and AF volume comparing post with pre-treatment assessments in 6 patients that could not be attributed to scan-to-scan variability. This suggests that intense, long-term Melodic Intonation Therapy leads to remodeling of the right AF and may provide an explanation for the sustained therapy effects that were seen in these 6 patients. PMID:19673813

  7. Post-stroke pure apraxia of speech - A rare experience.

    Science.gov (United States)

    Polanowska, Katarzyna Ewa; Pietrzyk-Krawczyk, Iwona

Apraxia of speech (AOS) is a motor speech disorder, most typically caused by stroke, which in its "pure" form (without other speech-language deficits) is very rare in clinical practice. Because some observable characteristics of AOS overlap with those of more common neurologic syndromes of verbal communication (i.e., aphasia, dysarthria), distinguishing them may be difficult. The present study describes AOS in a 49-year-old right-handed male after a left-hemispheric stroke. Analysis of his articulatory and prosodic abnormalities in the context of intact communicative abilities, as well as a description of symptom dynamics over time, provides valuable information for the clinical diagnosis of this specific disorder and the prognosis for its recovery. This, in turn, is the basis for the selection of appropriate rehabilitative interventions. Copyright © 2016 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  8. Getting the Cocktail Party Started: Masking Effects in Speech Perception.

    Science.gov (United States)

    Evans, Samuel; McGettigan, Carolyn; Agnew, Zarinah K; Rosen, Stuart; Scott, Sophie K

    2016-03-01

Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous fMRI, while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream, and that individuals who perform better in speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right lateralized frontal regions consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.

  9. Perceptually Salient Sound Distortions and Apraxia of Speech: A Performance Continuum.

    Science.gov (United States)

    Haley, Katarina L; Jacks, Adam; Richardson, Jessica D; Wambaugh, Julie L

    2017-06-22

    We sought to characterize articulatory distortions in apraxia of speech and aphasia with phonemic paraphasia and to evaluate the diagnostic validity of error frequency of distortion and distorted substitution in differentiating between these disorders. Study participants were 66 people with speech sound production difficulties after left-hemisphere stroke or trauma. They were divided into 2 groups on the basis of word syllable duration, which served as an external criterion for speaking rate in multisyllabic words and an index of likely speech diagnosis. Narrow phonetic transcriptions were completed for audio-recorded clinical motor speech evaluations, using 29 diacritic marks. Partial voicing and altered vowel tongue placement were common in both groups, and changes in consonant manner and place were also observed. The group with longer word syllable duration produced significantly more distortion and distorted-substitution errors than did the group with shorter word syllable duration, but variations were distributed on a performance continuum that overlapped substantially between groups. Segment distortions in focal left-hemisphere lesions can be captured with a customized set of diacritic marks. Frequencies of distortions and distorted substitutions are valid diagnostic criteria for apraxia of speech, but further development of quantitative criteria and dynamic performance profiles is necessary for clinical utility.

  10. SPEECH AND LANGUAGE DISORDERS IN PATIENTS WITH CVA COMPARISON WITH CT SCAN

    Directory of Open Access Journals (Sweden)

    A CHIT SAZ

    2001-09-01

    Full Text Available Introduction. Stroke is a sudden onset of neurologic signs lasting at least 24 hours, resulting from ischemia or intracranial hemorrhage caused by cerebrovascular disease. Cerebrovascular disease is one of the most important causes of speech disorder. The aim of this study was to characterize speech and language deficits in relation to the site of the brain lesion. Methods. In this study, 64 patients with CVA and speech disorders were tested. Lesions were ischemic in 36 patients and hemorrhagic in 17; 11 patients had no significant lesion on CT scan. The test undertaken was the "Farsi Aphasia Test" written by Dr. Nilipoor. Results. Fifty percent of patients were in the 61-70 year age group; 70.3 percent were male and 29.7 percent were female. With respect to the hemisphere involved, 50 percent had left-hemisphere, 28.1 percent right-hemisphere, and 4.7 percent bilateral involvement. Discussion. In CVA patients with speech disorders, the temporal lobe of the left hemisphere is most often involved; with respect to oral speech, most problems were seen in nonverbal fluency and the fewest in repetition.

  11. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  12. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  13. Speech production: Wernicke, Broca and beyond.

    Science.gov (United States)

    Blank, S Catrin; Scott, Sophie K; Murphy, Kevin; Warburton, Elizabeth; Wise, Richard J S

    2002-08-01

    We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non-propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain-language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.

  14. How to engage the right brain hemisphere in aphasics without even singing: evidence for two paths of speech recovery

    OpenAIRE

    Stahl, Benjamin; Henseler, Ilona; Turner, Robert; Geyer, Stefan; Kotz, Sonja A.

    2013-01-01

    There is an ongoing debate as to whether singing helps left-hemispheric stroke patients recover from non-fluent aphasia through stimulation of the right hemisphere. According to recent work, it may not be singing itself that aids speech production in non-fluent aphasic patients, but rhythm and lyric type. However, the long-term effects of melody and rhythm on speech recovery are largely unknown. In the current experiment, we tested 15 patients with chronic non-fluent aphasia who underwent eit...

  15. Evidence for plasticity in white-matter tracts of patients with chronic Broca's aphasia undergoing intense intonation-based speech therapy.

    Science.gov (United States)

    Schlaug, Gottfried; Marchina, Sarah; Norton, Andrea

    2009-07-01

    Recovery from aphasia can be achieved through recruitment of either perilesional brain regions in the affected hemisphere or homologous language regions in the nonlesional hemisphere. For patients with large left-hemisphere lesions, recovery through the right hemisphere may be the only possible path. The right-hemisphere regions most likely to play a role in this recovery process are the superior temporal lobe (important for auditory feedback control), premotor regions/posterior inferior frontal gyrus (important for planning and sequencing of motor actions and for auditory-motor mapping), and the primary motor cortex (important for execution of vocal motor actions). These regions are connected reciprocally via a major fiber tract called the arcuate fasciculus (AF); however, this tract is not as well developed in the right hemisphere as it is in the dominant left. We tested whether an intonation-based speech therapy (i.e., melodic intonation therapy [MIT]), which is typically administered in an intense fashion with 75-80 daily therapy sessions, would lead to changes in white-matter tracts, particularly the AF. Using diffusion tensor imaging (DTI), we found a significant increase in the number of AF fibers and AF volume comparing post- with pretreatment assessments in six patients that could not be attributed to scan-to-scan variability. This suggests that intense, long-term MIT leads to remodeling of the right AF and may provide an explanation for the sustained therapy effects that were seen in these six patients.

  16. Lateralization for speech predicts therapeutic response to cognitive behavioral therapy for depression.

    Science.gov (United States)

    Kishon, Ronit; Abraham, Karen; Alschuler, Daniel M; Keilp, John G; Stewart, Jonathan W; McGrath, Patrick J; Bruder, Gerard E

    2015-08-30

    A prior study (Bruder, G.E., Stewart, J.W., Mercier, M.A., Agosti, V., Leite, P., Donovan, S., Quitkin, F.M., 1997. Outcome of cognitive-behavioral therapy for depression: relation of hemispheric dominance for verbal processing. Journal of Abnormal Psychology 106, 138-144.) found left hemisphere advantage for verbal dichotic listening was predictive of clinical response to cognitive behavioral therapy (CBT) for depression. This study aimed to confirm this finding and to examine the value of neuropsychological tests, which have shown promise for predicting antidepressant response. Twenty depressed patients who subsequently completed 14 weeks of CBT and 74 healthy adults were tested on a Dichotic Fused Words Test (DFWT). Patients were also tested on the National Adult Reading Test to estimate IQ, and word fluency, choice RT, and Stroop neuropsychological tests. Left hemisphere advantage on the DFWT was more than twice as large in CBT responders as in non-responders, and was associated with improvement in depression following treatment. There was no difference between responders and non-responders on neuropsychological tests. The results support the hypothesis that the ability of individuals with strong left hemisphere dominance to recruit frontal and temporal cortical regions involved in verbal dichotic listening predicts CBT response. The large effect size, sensitivity and specificity of DFWT predictions suggest the potential value of this brief and inexpensive test as an indicator of whether a patient will benefit from CBT for depression.
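As an illustrative aside (not taken from the study itself), hemispheric advantage on dichotic listening measures such as the DFWT is conventionally summarized with a laterality index computed from the number of correct reports for each ear. The function and the sample counts below are hypothetical:

```python
# Hypothetical sketch: the standard (R - L) / (R + L) laterality index
# used to quantify ear (and, by inference, hemisphere) advantage in
# dichotic listening. The example counts are invented for illustration.

def laterality_index(right_ear_correct: int, left_ear_correct: int) -> float:
    """Positive values indicate a right-ear (left-hemisphere) advantage,
    negative values a left-ear (right-hemisphere) advantage."""
    total = right_ear_correct + left_ear_correct
    if total == 0:
        raise ValueError("cannot compute an index with no correct responses")
    return (right_ear_correct - left_ear_correct) / total

# Example: 36 right-ear vs. 24 left-ear correct identifications
print(laterality_index(36, 24))  # 0.2 (right-ear / left-hemisphere advantage)
```

The index is bounded in [-1, 1], which makes it comparable across participants with different overall accuracy.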

  17. Neural Oscillations Carry Speech Rhythm through to Comprehension.

    Science.gov (United States)

    Peelle, Jonathan E; Davis, Matthew H

    2012-01-01

    A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners' processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging - particularly electroencephalography (EEG) and magnetoencephalography (MEG) - point to phase locking by ongoing cortical oscillations to low-frequency information (~4-8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain.
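The envelope analysis described in this abstract can be sketched in a few lines; the pipeline below (toy amplitude-modulated signal, Hilbert-transform envelope, 4-8 Hz band-pass) is an assumed simplification for illustration, not the authors' code:

```python
# Sketch (assumed pipeline): extract the slow amplitude envelope of a
# speech-like signal and isolate the ~4-8 Hz band that cortical
# oscillations are reported to phase-lock to.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000  # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
# Toy "speech" signal: a carrier amplitude-modulated at a syllabic ~5 Hz rate
carrier = np.sin(2 * np.pi * 200 * t)
modulator = 1 + 0.8 * np.sin(2 * np.pi * 5 * t)
signal = modulator * carrier

# Amplitude envelope via the analytic signal (Hilbert transform)
envelope = np.abs(hilbert(signal))

# Band-pass the envelope to the theta range (4-8 Hz)
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta_envelope = filtfilt(b, a, envelope)

# The dominant envelope frequency sits at the 5 Hz modulation rate
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print(freqs[np.argmax(spectrum)])  # 5.0
```

In EEG/MEG studies, `theta_envelope` (or its phase) would then be compared against band-limited cortical signals to quantify phase locking.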

  18. Patterns of Post-Stroke Brain Damage that Predict Speech Production Errors in Apraxia of Speech and Aphasia Dissociate

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-01-01

    Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457

  19. Non-invasive mapping of bilateral motor speech areas using navigated transcranial magnetic stimulation and functional magnetic resonance imaging.

    Science.gov (United States)

    Könönen, Mervi; Tamsi, Niko; Säisänen, Laura; Kemppainen, Samuli; Määttä, Sara; Julkunen, Petro; Jutila, Leena; Äikiä, Marja; Kälviäinen, Reetta; Niskanen, Eini; Vanninen, Ritva; Karjalainen, Pasi; Mervaala, Esa

    2015-06-15

    Navigated transcranial magnetic stimulation (nTMS) is a modern, precise method for activating and studying cortical functions noninvasively. We hypothesized that a combination of nTMS and functional magnetic resonance imaging (fMRI) could clarify the localization of functional areas involved in motor control and production of speech. Navigated repetitive TMS (rTMS) with short bursts was used to map speech areas on both hemispheres by inducing speech disruption during number recitation tasks in healthy volunteers. Two experienced video reviewers, blinded to the stimulated area, graded each trial offline according to possible speech disruption. The locations of speech-disrupting nTMS trials were overlaid with fMRI activations from a word generation task. Speech disruptions were produced on both hemispheres by nTMS, though there were more disruptive stimulation sites on the left hemisphere. The grade of disruption varied from a subjective sensation, through mild but objectively recognizable disruption, up to total speech arrest. The distribution of locations in which speech disruptions could be elicited varied among individuals. On the left hemisphere, the locations of disturbing rTMS bursts verified by the reviewers followed the areas of fMRI activation; a similar pattern was not observed on the right hemisphere. The reviewer-verified speech disruptions induced by nTMS provided clinically relevant information, and fMRI may further explain the function of the cortical area. nTMS and fMRI complement each other, and their combination should be advocated when assessing individual localization of the speech network.

  20. Hypothalamic digoxin, hemispheric dominance, and neurobiology of love and affection.

    Science.gov (United States)

    Kurup, Ravi Kumar; Kurup, Parameswara Achutha

    2003-05-01

    The human hypothalamus produces an endogenous membrane Na+-K+ ATPase inhibitor, digoxin, which can regulate neuronal transmission. The digoxin status and neurotransmitter patterns were studied in individuals with a predilection to fall in love. It was also studied in individuals with differing hemispheric dominance to find out the role of cerebral dominance in this respect. In individuals with a predilection to fall in love there was decreased digoxin synthesis, increased membrane Na+-K+ ATPase activity, decreased tryptophan catabolites (serotonin, quinolinic acid, and nicotine), and increased tyrosine catabolites (dopamine, noradrenaline, and morphine). This pattern correlated with that obtained in left hemispheric chemical dominance. Hemispheric dominance and hypothalamic digoxin could regulate the predisposition to fall in love.

  1. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Abstract. Information is carried in changes of a signal. The paper starts with revisiting Dudley's concept of the carrier nature of speech. It points to its close connection to modulation spectra of speech and argues against short-term spectral envelopes as dominant carriers of the linguistic information in speech. The history of ...

  2. A predictive model for diagnosing stroke-related apraxia of speech.

    Science.gov (United States)

    Ballard, Kirrie J; Azizi, Lamiae; Duffy, Joseph R; McNeil, Malcolm R; Halaki, Mark; O'Dwyer, Nicholas; Layfield, Claire; Scholl, Dominique I; Vogel, Adam P; Robin, Donald A

    2016-01-29

    Diagnosis of the speech motor planning/programming disorder, apraxia of speech (AOS), has proven challenging, largely due to its common co-occurrence with the language-based impairment of aphasia. Currently, diagnosis is based on perceptually identifying and rating the severity of several speech features. It is not known whether all, or a subset of the features, are required for a positive diagnosis. The purpose of this study was to assess predictor variables for the presence of AOS after left-hemisphere stroke, with the goal of increasing diagnostic objectivity and efficiency. This population-based case-control study involved a sample of 72 cases, using the outcome measure of expert judgment on presence of AOS and including a large number of independently collected candidate predictors representing behavioral measures of linguistic, cognitive, nonspeech oral motor, and speech motor ability. We constructed a predictive model using multiple imputation to deal with missing data; the Least Absolute Shrinkage and Selection Operator (Lasso) technique for variable selection to define the most relevant predictors; and bootstrapping to check the model stability and quantify the optimism of the developed model. Two measures were sufficient to distinguish between participants with AOS plus aphasia and those with aphasia alone: (1) a measure of speech errors with words of increasing length and (2) a measure of relative vowel duration in three-syllable words with weak-strong stress pattern (e.g., banana, potato). The model has high discriminative ability to distinguish between cases with and without AOS (c-index=0.93) and good agreement between observed and predicted probabilities (calibration slope=0.94). Some caution is warranted, given the relatively small sample specific to left-hemisphere stroke, and the limitations of imputing missing data. These two speech measures are straightforward to collect and analyse, facilitating use in research and clinical settings.
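The modeling strategy described in this abstract (L1-penalized "Lasso" variable selection inside a classification model, with bootstrapping to estimate optimism) can be sketched as follows. This is a minimal illustration on synthetic data; the two informative predictors merely stand in for the clinical measures, and nothing here reproduces the study's actual dataset, coefficients, or c-index:

```python
# Sketch of Lasso-style variable selection plus a bootstrap optimism
# check for discrimination (AUC / c-index). Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Two informative predictors (cf. word-length errors, relative vowel
# duration) plus three pure-noise predictors
X = rng.normal(size=(n, 5))
logit = 1.8 * X[:, 0] - 1.5 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# An L1 penalty shrinks irrelevant coefficients to zero while fitting
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on resamples, compare resample vs. original AUC
optimism = []
for _ in range(50):
    idx = rng.integers(0, n, n)
    if len(set(y[idx])) < 2:
        continue  # skip degenerate one-class resamples
    m = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X[idx], y[idx])
    boot_auc = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    test_auc = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(boot_auc - test_auc)

corrected_auc = apparent_auc - np.mean(optimism)
print(round(corrected_auc, 2))  # optimism-corrected discrimination
```

Subtracting the mean optimism from the apparent AUC gives a less optimistic estimate of how the model would discriminate on new cases, which is the role bootstrapping plays in the study's validation.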

  3. Changes in rem dream content during the night: implications for a hypothesis about changes in cerebral dominance across rem periods.

    Science.gov (United States)

    Cohen, D B

    1977-06-01

    REM dream content was scored for categories suggesting the predominant influence of the left hemisphere, e.g., good ego functioning, verbalization, or the right hemisphere, e.g., music, spatial salience, bizarreness. Data from 5 samples of college men showed consistent evidence of an increase in the prominence of left-, but not right-, related categories from earlier to later REM periods. These data suggest there is an increase in left hemisphere control/dominance across the REM periods during the night. Two sets of predictions based on this hypothesis (using more direct estimates of the hypothesized change) yielded supportive evidence. First, as predicted, there was a positive relation between change in percentage of right eye movement (R%) and (a) temporal position of the REM period and (b) change in left-related categories; greater R% was associated with later REM periods and with more prominent left- (but not right-) hemisphere categories. Second, as predicted, there was a positive relation between the diminution of the ratio of left to right EEG amplitudes (L/R) and (a) temporal position of the REM period and (b) prominence of verbal activity. As expected, this relation was attenuated for those subjects showing a preference for left-handedness. Two possible explanations for the inferred increase in left-hemispheric influence during the night are suggested.

  4. Interhemispheric Transfer Time Asymmetry of Visual Information Depends on Eye Dominance: An Electrophysiological Study

    Directory of Open Access Journals (Sweden)

    Romain Chaumillon

    2018-02-01

    Full Text Available The interhemispheric transfer of information is a fundamental process in the human brain. When a visual stimulus appears eccentrically in one visual hemifield, it first activates the contralateral hemisphere and then, with a slight delay due to the interhemispheric transfer, the ipsilateral one. This interhemispheric transfer of visual information is believed to be faster from the right to the left hemisphere in right-handers. Such an asymmetry is considered a relevant fact in the context of the lateralization of the human brain. Using current source density (CSD) analyses of visually evoked potentials (VEP), we show here that, in right-handers and, to a lesser extent, in left-handers, this asymmetry in fact depends on sighting eye dominance, the tendency we have to prefer one eye for monocular tasks. Indeed, in right-handers, a faster interhemispheric transfer of visual information from the right to the left hemisphere was observed only in participants with a right dominant eye (DE). Right-handers with a left DE showed the opposite pattern, with a faster transfer from the left to the right hemisphere. In left-handers, although fewer participants were tested and confirmation is therefore required, only those with a right DE showed an asymmetrical interhemispheric transfer, with a faster transfer from the right to the left hemisphere. As a whole, these results demonstrate that eye dominance is a fundamental determinant of asymmetries in the interhemispheric transfer of visual information and suggest that it is an important factor in brain lateralization.

  5. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    Science.gov (United States)

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.

  6. Speech Problems

    Science.gov (United States)

    KidsHealth (For Teens) overview of speech problems: conditions that affect a person's ability to speak clearly, covering common speech and language disorders such as stuttering.

  7. Efficacy of melody-based aphasia therapy may strongly depend on rhythm and conversational speech formulas

    Directory of Open Access Journals (Sweden)

    Benjamin Stahl

    2014-04-01

    Full Text Available Left-hemisphere stroke patients suffering from language and speech disorders are often able to sing entire pieces of text fluently. This finding has inspired a number of melody-based rehabilitation programs – most notable among them a treatment known as Melodic Intonation Therapy – as well as two fundamental research questions. When the experimental design focuses on one point in time (cross section), one may determine whether or not singing has an immediate effect on syllable production in patients with language and speech disorders. When the design focuses on changes over several points in time (longitudinal section), one may gain insight as to whether or not singing has a long-term effect on language and speech recovery. The current work addresses both of these questions with two separate experiments that investigate the interplay of melody, rhythm and lyric type in 32 patients with non-fluent aphasia and apraxia of speech (Stahl et al., 2011; Stahl et al., 2013). Taken together, the experiments deliver three main results. First, singing and rhythmic pacing proved to be equally effective in facilitating immediate syllable production and long-term language and speech recovery. Controlling for various influences such as prosody, syllable duration and phonetic complexity, the data did not reveal any advantage of singing over rhythmic speech. This result was independent of lesion size and lesion location in the patients. Second, patients with extensive left-sided basal ganglia lesions produced more correct syllables when their speech was paced by rhythmic drumbeats. This observation is consistent with the idea that regular auditory cues may partially compensate for corticostriatal damage and thereby improve speech-motor planning (Grahn & Watson, 2013). Third, conversational speech formulas and well-known song lyrics yielded higher rates of correct syllable production than novel word sequences, whether patients were singing or speaking.

  8. Does brain injury impair speech and gesture differently?

    Directory of Open Access Journals (Sweden)

    Tilbe Göksun

    2016-09-01

    Full Text Available People often use spontaneous gestures when talking about space, such as when giving directions. In a recent study from our lab, we examined whether focal brain-injured individuals' naming of motion event components of manner and path (represented in English by verbs and prepositions, respectively) is selectively impaired, and whether gestures compensate for impairment in speech. Left or right hemisphere damaged patients and elderly control participants were asked to describe motion events (e.g., walking around) depicted in brief videos. Results suggest that producing verbs and prepositions can be separately impaired in the left hemisphere and that gesture production compensates for naming impairments when damage involves specific areas in the left temporal cortex.

  9. Influence of intensive phonomotor rehabilitation on apraxia of speech.

    Science.gov (United States)

    Kendall, Diane L; Rodriguez, Amy D; Rosenbek, John C; Conway, Tim; Gonzalez Rothi, Leslie J

    2006-01-01

    In this phase I rehabilitation study, we investigated the effects of an intensive phonomotor rehabilitation program on verbal production in a 73-year-old male, 11 years post-onset of a left-hemisphere stroke, who exhibited apraxia of speech and aphasia. In the context of a single-subject design, we studied whether treatment would improve phoneme production and generalize to repetition of multisyllabic words, words of increasing length, discourse, and measures of self-report. We predicted that a predominant motor impairment would respond to intensive phonomotor rehabilitation. While able to learn to produce individual sounds, the subject did not exhibit generalization to other aspects of motor production. Discourse production was judged perceptually slower in rate and less effortful, but also less natural. Finally, self-report indicated less apprehension toward speaking with unfamiliar people, increased telephone use, and increased ease of communication.

  10. Post-Surgical Language Reorganization Occurs in Tumors of the Dominant and Non-Dominant Hemisphere.

    Science.gov (United States)

    Avramescu-Murphy, M; Hattingen, E; Forster, M-T; Oszvald, A; Anti, S; Frisch, S; Russ, M O; Jurcoane, A

    2017-09-01

    Surgical resection of brain tumors may shift the location of cortical language areas. Studies of language reorganization have primarily investigated left-hemispheric tumors, irrespective of hemispheric language dominance. We used functional magnetic resonance imaging (fMRI) to investigate how tumors influence post-surgical language reorganization in relation to the dominant language areas. A total of 17 patients with brain tumors (16 gliomas, one metastasis) in the frontotemporal and lower parietal lobes planned for awake surgery underwent pre-surgical and post-surgical language fMRI. Language activation post-to-pre surgery was evaluated visually and quantitatively on the statistically thresholded images on a patient-by-patient basis. Results were qualitatively compared between three patient groups: temporal, with tumors in the dominant temporal lobe; frontal, with tumors in the dominant frontal lobe; and remote, with tumors in the non-dominant hemisphere. Post-to-pre-surgical distributions of activated voxels changed in all except the one patient with metastasis. Changes were more pronounced in the dominant hemisphere for all three groups, showing an increased number of activated voxels and also new activation areas. Tumor resection in the dominant hemisphere (frontal and temporal) shifted the activation from frontal towards temporal areas, whereas tumor resection in the non-dominant hemisphere shifted the activation from temporal towards frontal dominant areas. Resection of gliomas in the dominant and in the non-dominant hemisphere induces post-surgical shifts and increases in language activation, indicating that infiltrating gliomas have a widespread influence on the language network. The dominant hemisphere gained most of the language activation irrespective of tumor localization, possibly reflecting recovery from pre-surgical tumor-induced suppression of these activations.

  11. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
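    The linear prediction model underlying these codecs can be made concrete with a short sketch: a minimal Levinson-Durbin recursion over a frame's autocorrelation, the analysis step at the heart of LPC-based speech coders. This is an illustrative sketch, assuming NumPy; the frame length, predictor order, and synthetic test signal are invented for demonstration and are not from the cited survey.

    ```python
    # Minimal sketch of linear-prediction (LPC) analysis via the
    # autocorrelation method and the Levinson-Durbin recursion.
    import numpy as np

    def lpc(frame, order):
        """Estimate a[1..order] so that s[n] is predicted by
        sum_k a[k] * s[n-k]; returns (coefficients, residual energy)."""
        n = len(frame)
        r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
        a = np.zeros(order + 1)  # a[0] unused; a[i] filled as order grows
        err = r[0]
        for i in range(1, order + 1):
            # Reflection coefficient for this order
            k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err
            a_prev = a.copy()
            a[i] = k
            for j in range(1, i):
                a[j] = a_prev[j] - k * a_prev[i - j]
            err *= 1.0 - k * k
        return a[1:], err

    # Illustration on a synthetic 2nd-order autoregressive "speech frame":
    # s[n] = 0.9*s[n-1] - 0.2*s[n-2] + noise, so LPC should recover ~[0.9, -0.2].
    rng = np.random.default_rng(0)
    s = np.zeros(4096)
    noise = rng.standard_normal(4096)
    for t in range(2, len(s)):
        s[t] = 0.9 * s[t - 1] - 0.2 * s[t - 2] + noise[t]
    coeffs, residual = lpc(s, order=2)
    ```

    In a real coder, the coefficients (or a robust transform of them) are quantized and transmitted per frame together with a model of the excitation, which is where the standards discussed in the article differ.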

  12. A Preliminary fMRI Study of a Novel Self-Paced Written Fluency Task: Observation of Left-Hemispheric Activation, and Increased Frontal Activation in Late vs. Early Task Phases

    Directory of Open Access Journals (Sweden)

    Laleh eGolestanirad

    2015-03-01

    Full Text Available Neuropsychological tests of verbal fluency are very widely used to characterize impaired cognitive function. For clinical neuroscience studies and potential medical applications, measuring the brain activity that underlies such tests with functional magnetic resonance imaging (fMRI) is of significant interest, but a challenging proposition because overt speech can cause signal artifacts, which tend to worsen as the duration of speech tasks becomes longer. In a novel approach, we present the group brain activity of 12 subjects who performed a self-paced written version of phonemic fluency using fMRI-compatible tablet technology that recorded responses and provided task-related feedback on a projection screen display, over long-duration task blocks (60 s). As predicted, we observed robust activation in the left anterior inferior and medial frontal gyri, consistent with previously reported results of verbal fluency tasks which established the role of these areas in strategic word retrieval. In addition, the number of words produced in the late phase (last 30 s) of written phonemic fluency was significantly less (p < 0.05) than the number produced in the early phase (first 30 s). Activation during the late phase vs. the early phase was also assessed from the first 20 s and last 20 s of task performance, which eliminated the possibility that the sluggish hemodynamic response from the early phase would affect the activation estimates of the late phase. The last 20 s produced greater activation maps covering extended areas in bilateral precuneus, cuneus, middle temporal gyrus, insula, middle frontal gyrus and cingulate gyrus. Among them, greater activation was observed in the bilateral middle frontal gyrus (Brodmann area BA 9) and cingulate gyrus (BA 24, 32), likely as part of the initiation, maintenance, and shifting of attentional resources.

  13. A preliminary fMRI study of a novel self-paced written fluency task: observation of left-hemispheric activation, and increased frontal activation in late vs. early task phases.

    Science.gov (United States)

    Golestanirad, Laleh; Das, Sunit; Schweizer, Tom A; Graham, Simon J

    2015-01-01

    Neuropsychological tests of verbal fluency are very widely used to characterize impaired cognitive function. For clinical neuroscience studies and potential medical applications, measuring the brain activity that underlies such tests with functional magnetic resonance imaging (fMRI) is of significant interest, but a challenging proposition because overt speech can cause signal artifacts, which tend to worsen as the duration of speech tasks becomes longer. In a novel approach, we present the group brain activity of 12 subjects who performed a self-paced written version of phonemic fluency using fMRI-compatible tablet technology that recorded responses and provided task-related feedback on a projection screen display, over long-duration task blocks (60 s). As predicted, we observed robust activation in the left anterior inferior and medial frontal gyri, consistent with previously reported results of verbal fluency tasks which established the role of these areas in strategic word retrieval. In addition, the number of words produced in the late phase (last 30 s) of written phonemic fluency was significantly less (p < 0.05) than the number produced in the early phase (first 30 s). Activation during the late phase vs. the early phase was also assessed from the first 20 s and last 20 s of task performance, which eliminated the possibility that the sluggish hemodynamic response from the early phase would affect the activation estimates of the late phase. The last 20 s produced greater activation maps covering extended areas in bilateral precuneus, cuneus, middle temporal gyrus, insula, middle frontal gyrus and cingulate gyrus. Among these areas, greater activation was observed in the bilateral middle frontal gyrus (Brodmann area BA 9) and cingulate gyrus (BA 24, 32) likely as part of the initiation, maintenance, and shifting of attentional resources.
Consistent with previous pertinent fMRI literature involving overt and covert verbal responses, these findings highlight the

  14. Neurophysiological Evidence That Musical Training Influences the Recruitment of Right Hemispheric Homologues for Speech Perception

    Directory of Open Access Journals (Sweden)

    McNeel Gordon Jantzen

    2014-03-01

    Full Text Available Musicians have a more accurate temporal and tonal representation of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark, Skoe, & Kraus, 2009; Zendel & Alain, 2008; Musacchia, Sams, Skoe, & Kraus, 2007). Musicians who are adept at the production and perception of music are also more sensitive to key acoustic features of speech such as voice onset timing and pitch. Together, these data suggest that musical training may enhance the processing of acoustic information for speech sounds. In the current study, we sought to provide neural evidence that musicians process speech and music in a similar way. We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged for the processing of speech sounds. In contrast, we predicted that in non-musicians processing of speech sounds would be localized to traditional left hemisphere language areas. Speech stimuli differing in voice onset time were presented using a dichotic listening paradigm. Subjects either indicated aural location for a specified speech sound or identified a specific speech sound from a directed aural location. Musical training effects and organization of acoustic features were reflected by activity in source generators of the P50. This included greater activation of the right middle temporal gyrus (MTG) and superior temporal gyrus (STG) in musicians. The findings demonstrate recruitment of the right hemisphere in musicians for discriminating speech sounds and a putative broadening of their language network. Musicians appear to have an increased sensitivity to acoustic features and enhanced selective attention to temporal features of speech that is facilitated by musical training and supported, in part, by right hemisphere homologues of established speech processing regions of the brain.

  15. Tactile Modulation of Emotional Speech Samples

    Directory of Open Access Journals (Sweden)

    Katri Salminen

    2012-01-01

    Full Text Available Traditionally only speech communicates emotions via mobile phone. However, in daily communication the sense of touch mediates emotional information during conversation. The present aim was to study whether tactile stimulation affects emotional ratings of speech when measured with scales of pleasantness, arousal, approachability, and dominance. In Experiment 1, participants rated speech-only and speech-tactile stimuli. The tactile signal mimicked the amplitude changes of the speech. In Experiment 2, the aim was to study whether the way the tactile signal was produced affected the ratings. The tactile signal either mimicked the amplitude changes of the speech sample in question, or the amplitude changes of another speech sample. Also, concurrent static vibration was included. The results showed that the speech-tactile stimuli were rated as more arousing and dominant than the speech-only stimuli. The speech-only stimuli were rated as more approachable than the speech-tactile stimuli, but only in Experiment 1. Variations in tactile stimulation also affected the ratings. When the tactile stimulation was static vibration, the speech-tactile stimuli were rated as more arousing than when the concurrent tactile stimulation mimicked speech samples. The results suggest that tactile stimulation offers new ways of modulating and enriching the interpretation of speech.

  16. The Effects of Fluency Enhancing Conditions on Sensorimotor Control of Speech in Typically Fluent Speakers: An EEG Mu Rhythm Study

    Directory of Open Access Journals (Sweden)

    Tiffani Kittilstved

    2018-04-01

    Full Text Available Objective: To determine whether changes in sensorimotor control resulting from speaking conditions that induce fluency in people who stutter (PWS) can be measured using electroencephalographic (EEG) mu rhythms in neurotypical speakers. Methods: Non-stuttering (NS) adults spoke in one control condition (solo speaking) and four experimental conditions (choral speech, delayed auditory feedback (DAF), prolonged speech, and pseudostuttering). Independent component analysis (ICA) was used to identify sensorimotor μ components from EEG recordings. Time-frequency analyses measured μ-alpha (8–13 Hz) and μ-beta (15–25 Hz) event-related synchronization (ERS) and desynchronization (ERD) during each speech condition. Results: 19/24 participants contributed μ components. Relative to the control condition, the choral and DAF conditions elicited increases in μ-alpha ERD in the right hemisphere. In the pseudostuttering condition, increases in μ-beta ERD were observed in the left hemisphere. No differences were present between the prolonged speech and control conditions. Conclusions: Differences observed in the experimental conditions are thought to reflect sensorimotor control changes. Increases in right hemisphere μ-alpha ERD likely reflect increased reliance on auditory information, including auditory feedback, during the choral and DAF conditions. In the left hemisphere, increases in μ-beta ERD during pseudostuttering may have resulted from the different movement characteristics of this task compared with the solo speaking task. Relationships to findings in stuttering are discussed. Significance: Changes in sensorimotor control related to feedforward and feedback control in fluency-enhancing speech manipulations can be measured using time-frequency decompositions of EEG μ rhythms in neurotypical speakers. This quiet, non-invasive, and temporally sensitive technique may be applied to learn more about normal sensorimotor control and fluency enhancement in PWS.
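    The ERD/ERS measure used in studies like this one can be illustrated with a minimal sketch: band power in a task epoch is compared against a baseline epoch, and a negative percent change indicates desynchronization (ERD). This sketch assumes NumPy; the sampling rate, band edges, and synthetic signals are invented for illustration, and it deliberately simplifies away the ICA-based component selection described in the abstract.

    ```python
    # Minimal sketch of event-related (de)synchronization: percent change in
    # band power of a task epoch relative to a baseline epoch. Negative = ERD.
    import numpy as np

    def band_power(x, fs, lo, hi):
        """Mean power of x within [lo, hi] Hz, estimated from the FFT."""
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        in_band = (freqs >= lo) & (freqs <= hi)
        return spectrum[in_band].mean()

    def erd_percent(task, baseline, fs, lo=8.0, hi=13.0):
        """ERD/ERS in percent, defaulting to the mu-alpha band."""
        p_task = band_power(task, fs, lo, hi)
        p_base = band_power(baseline, fs, lo, hi)
        return 100.0 * (p_task - p_base) / p_base

    # Illustration: a 10 Hz rhythm whose amplitude halves during the task
    # (power drops to 25%), giving an ERD of about -75%.
    fs = 250.0
    t = np.arange(0, 1.0, 1.0 / fs)
    baseline = np.sin(2 * np.pi * 10.0 * t)
    task = 0.5 * np.sin(2 * np.pi * 10.0 * t)
    erd = erd_percent(task, baseline, fs)
    ```

    Real pipelines compute this per trial and time window over the selected μ components rather than over raw single channels, but the baseline-normalized band-power contrast is the same.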

  17. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters, Greek curator Katarina Gregos's exhibition at the Danish Pavilion, Venice Biennale 2011.

  18. Speech-to-Speech Relay Service

    Science.gov (United States)

    Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  19. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of the Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering.

  20. Three- and four-dimensional mapping of speech and language in patients with epilepsy

    Science.gov (United States)

    Nakai, Yasuo; Jeong, Jeong-won; Brown, Erik C.; Rothermel, Robert; Kojima, Katsuaki; Kambara, Toshimune; Shah, Aashit; Mittal, Sandeep; Sood, Sandeep

    2017-01-01

    We have provided 3D and 4D mapping of speech and language function based upon the results of direct cortical stimulation and event-related modulation of electrocorticography signals. Patients estimated to have right-hemispheric language dominance were excluded. Thus, 100 patients who underwent two-stage epilepsy surgery with chronic electrocorticography recording were studied. An older group consisted of 84 patients at least 10 years of age (7367 artefact-free non-epileptic electrodes), whereas a younger group included 16 children younger than age 10 (1438 electrodes). The probability of symptoms transiently induced by electrical stimulation was delineated on a 3D average surface image. The electrocorticography amplitude changes of high-gamma (70–110 Hz) and beta (15–30 Hz) activities during an auditory-naming task were animated on the average surface image in a 4D manner. Thereby, high-gamma augmentation and beta attenuation were treated as summary measures of cortical activation. Stimulation data indicated the causal relationship between (i) superior-temporal gyrus of either hemisphere and auditory hallucination; (ii) left superior-/middle-temporal gyri and receptive aphasia; (iii) widespread temporal/frontal lobe regions of the left hemisphere and expressive aphasia; and (iv) bilateral precentral/left posterior superior-frontal regions and speech arrest. On electrocorticography analysis, high-gamma augmentation involved the bilateral superior-temporal and precentral gyri immediately following question onset; at the same time, high-gamma activity was attenuated in the left orbitofrontal gyrus. High-gamma activity was augmented in the left temporal/frontal lobe regions, as well as left inferior-parietal and cingulate regions, maximally around question offset, with high-gamma augmentation in the left pars orbitalis inferior-frontal, middle-frontal, and inferior-parietal regions preceded by high-gamma attenuation in the contralateral homotopic regions.

  1. Hemodynamic responses to speech and music in newborn infants.

    Science.gov (United States)

    Kotilahti, Kalle; Nissilä, Ilkka; Näsi, Tiina; Lipiäinen, Lauri; Noponen, Tommi; Meriläinen, Pekka; Huotilainen, Minna; Fellman, Vineta

    2010-04-01

    We used near-infrared spectroscopy (NIRS) to study responses to speech and music on the auditory cortices of 13 healthy full-term newborn infants during natural sleep. The purpose of the study was to investigate the lateralization of speech and music responses at this stage of development. NIRS data was recorded from eight positions on both hemispheres simultaneously with electroencephalography, electrooculography, electrocardiography, pulse oximetry, and inclinometry. In 11 subjects, statistically significant (P < 0.02) oxygenated (HbO2) and total hemoglobin (HbT) responses were recorded. Both stimulus types elicited significant HbO2 and HbT responses on both hemispheres in five subjects. Six of the 11 subjects had positive HbO2 and HbT responses to both stimulus types, whereas one subject had negative responses. Mixed positive and negative responses were observed in four neonates. On both hemispheres, speech and music responses were significantly correlated (r = 0.64; P = 0.018 on the left hemisphere (LH) and r = 0.60; P = 0.029 on the right hemisphere (RH)). On the group level, the average response to the speech stimuli was statistically significantly greater than zero in the LH, whereas responses on the RH or to the music stimuli did not differ significantly from zero. This suggests a more coherent response to speech on the LH. However, significant differences in lateralization of the responses or mean response amplitudes of the two stimulus types were not observed on the group level. Copyright 2009 Wiley-Liss, Inc.

  2. Left dorsal speech stream components and their contribution to phonological processing.

    Science.gov (United States)

    Murakami, Takenobu; Kell, Christian A; Restle, Julia; Ugawa, Yoshikazu; Ziemann, Ulf

    2015-01-28

    Models propose an auditory-motor mapping via a left-hemispheric dorsal speech-processing stream, yet its detailed contributions to speech perception and production are unclear. Using fMRI-navigated repetitive transcranial magnetic stimulation (rTMS), we virtually lesioned left dorsal stream components in healthy human subjects and probed the consequences on speech-related facilitation of articulatory motor cortex (M1) excitability, as indexed by increases in motor-evoked potential (MEP) amplitude of a lip muscle, and on speech processing performance in phonological tests. Speech-related MEP facilitation was disrupted by rTMS of the posterior superior temporal sulcus (pSTS), the sylvian parieto-temporal region (SPT), and by double-knock-out but not individual lesioning of pars opercularis of the inferior frontal gyrus (pIFG) and the dorsal premotor cortex (dPMC), and not by rTMS of the ventral speech-processing stream or an occipital control site. RTMS of the dorsal stream but not of the ventral stream or the occipital control site caused deficits specifically in the processing of fast transients of the acoustic speech signal. Performance of syllable and pseudoword repetition correlated with speech-related MEP facilitation, and this relation was abolished with rTMS of pSTS, SPT, and pIFG. Findings provide direct evidence that auditory-motor mapping in the left dorsal stream causes reliable and specific speech-related MEP facilitation in left articulatory M1. The left dorsal stream targets the articulatory M1 through pSTS and SPT constituting essential posterior input regions and parallel via frontal pathways through pIFG and dPMC. Finally, engagement of the left dorsal stream is necessary for processing of fast transients in the auditory signal. Copyright © 2015 the authors 0270-6474/15/351411-12$15.00/0.

  3. Channel normalization technique for speech recognition in mismatched conditions

    CSIR Research Space (South Africa)

    Kleynhans, N

    2008-11-01

    Full Text Available The performance of trainable speech-processing systems deteriorates significantly when there is a mismatch between the training and testing data. The data mismatch becomes a dominant factor when collecting speech data for resource scarce languages...

  4. Lateralization of brain activation to imagination and smell of odors using functional magnetic resonance imaging (fMRI): left hemispheric localization of pleasant and right hemispheric localization of unpleasant odors.

    Science.gov (United States)

    Henkin, R I; Levy, L M

    2001-01-01

    theophylline treatment. In the hyposmic patient studied with EPI before theophylline treatment, activation to banana and peppermint odor imagination and to amyl acetate, menthone, and pyridine smell was R > L hemisphere; after theophylline treatment restored normal smell function, activation shifted completely with banana and peppermint odor imagination and amyl acetate and menthone smell to L > R hemisphere, consistent with responses in normal subjects. However, this shift also occurred for pyridine smell, which is opposite to responses in normal control subjects. In patients with phantosmia and phantogeusia, activation to phantosmia and phantogeusia before treatment was R > L hemisphere; after treatment inhibited phantosmia and phantogeusia, activation shifted with a slight L > R hemispheric lateralization. Localization of all lateralized responses indicated that anterior frontal and temporal cortices were brain regions most involved with imagination and smell of odors and with phantosmia and phantogeusia presence. Imagination and smell of odors perceived as pleasant generally activated the dominant or L > R brain hemisphere. Smell of odors perceived as unpleasant and unpleasant phantosmia and phantogeusia generally activated the contralateral or R > L brain hemisphere. With remission of phantosmia and phantogeusia, hemispheric activation was not only inhibited, but also there was a slight shift to L > R hemispheric predominance. Predominant L > R hemispheric differences in brain activation in normal subjects occurred in the order amyl acetate > menthone > pyridine, consistent with the hypothesis that pleasant odors are more appreciated in L hemisphere and unpleasant odors more in R hemisphere. Anterior frontal and temporal cortex regions previously found activated by imagination and smell of odors and phantosmia and phantogeusia perception accounted for most hemispheric differences.

  5. Neural Entrainment to Speech Modulates Speech Intelligibility

    OpenAIRE

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and acoustic speech signal, listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has so far prevented clarifying whether speech-brain entrainme...

  6. Speech Development

    Science.gov (United States)

    ... are placed in the mouth, much like an orthodontic retainer. The two most common types are 1) the speech bulb and 2) the palatal lift. The speech bulb is designed to partially close off the space between the soft palate and the throat. The palatal lift appliance serves to lift the soft palate to a ...

  7. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
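    A concrete instance of the waveform-coding family described above is logarithmic companding as used in 8-bit PCM telephony. The sketch below assumes NumPy and uses the G.711-style constant μ = 255 with a simplified uniform quantizer rather than the standard's exact segmented encoding; it is meant only to show why companding helps: quantizer resolution is spent where speech amplitudes are dense, near zero.

    ```python
    # Sketch of mu-law companding (the waveform-coding idea behind 8-bit
    # PCM telephony): compress, quantize coarsely, expand.
    import numpy as np

    MU = 255.0  # mu-law constant used in North American/Japanese PCM

    def mu_law_encode(x):
        """Compress samples in [-1, 1] logarithmically into [-1, 1]."""
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    def mu_law_decode(y):
        """Inverse (expanding) transform."""
        return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

    def quantize(x, bits=8):
        """Simplified uniform quantizer on [-1, 1]."""
        levels = 2 ** (bits - 1) - 1
        return np.round(x * levels) / levels

    # Low-amplitude "speech" is reproduced more faithfully after companding
    # than with direct uniform quantization at the same bit budget.
    x = 0.05 * np.sin(np.linspace(0, 2 * np.pi, 200))
    companded = mu_law_decode(quantize(mu_law_encode(x)))
    uniform = quantize(x)
    err_companded = np.max(np.abs(companded - x))
    err_uniform = np.max(np.abs(uniform - x))
    ```

    Parametric coders such as the LPC-based standards go further: instead of coding the waveform samples at all, they transmit model parameters and resynthesize speech at the receiver.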

  8. Ictal speech and language dysfunction in adult epilepsy: Clinical study of 95 seizures.

    Science.gov (United States)

    Dussaule, C; Cauquil, C; Flamand-Roze, C; Gagnepain, J-P; Bouilleret, V; Denier, C; Masnou, P

    2017-04-01

    To analyze the semiological characteristics of the language and speech disorders arising during epileptic seizures, and to describe the patterns of language and speech disorders that can predict laterality of the epileptic focus. This study retrospectively analyzed 95 consecutive videos of seizures with language and/or speech disorders in 44 patients admitted for diagnostic video-EEG monitoring. Laterality of the epileptic focus was defined according to electro-clinical correlation studies and structural and functional neuroimaging findings. Language and speech disorders were analyzed by a neurologist and a speech therapist blinded to these data. Language and/or speech disorders were subdivided into eight dynamic patterns: pure anterior aphasia; anterior aphasia and vocal; anterior aphasia and "arthria"; pure posterior aphasia; posterior aphasia and vocal; pure vocal; vocal and arthria; and pure arthria. The epileptic focus was in the left hemisphere in more than 4/5 of seizures presenting with pure anterior aphasia or pure posterior aphasia patterns, while discharges originated in the right hemisphere in almost 2/3 of seizures presenting with a pure vocal pattern. No laterality value was found for the other patterns. Classification of the language and speech disorders arising during epileptic seizures into dynamic patterns may be useful for the optimal analysis of anatomo-electro-clinical correlations. In addition, our research has led to the development of standardized tests for analyses of language and speech disorders arising during seizures that can be conducted during video-EEG sessions. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  9. Descriptive study of 192 adults with speech and language disturbances

    Directory of Open Access Journals (Sweden)

    Letícia Lessa Mansur

    Full Text Available CONTEXT: Aphasia is a very disabling condition caused by neurological diseases. In Brazil, we have little data on the profile of aphasics treated in rehabilitation centers. OBJECTIVE: To present a descriptive study of 192 patients, providing a reference sample of speech and language disturbances among Brazilians. DESIGN: Retrospective study. SETTING: Speech Pathology Unit linked to the Neurology Division of the Hospital das Clínicas of the Faculdade de Medicina da Universidade de São Paulo. SAMPLE: All patients (192) referred to our Speech Pathology service from 1995 to 2000. PROCEDURES: We collected data relating to demographic variables, etiology, language evaluation (functional evaluation, Boston Diagnostic Aphasia Examination, Boston Naming and Token Test), and neuroimaging studies. MAIN MEASUREMENTS: The results obtained in language tests and the clinical and neuroimaging data were organized and classified. Seventy aphasics were chosen for constructing a profile. Fourteen subjects with left single-lobe dysfunction were analyzed in detail. Seventeen aphasics were compared with 17 normal subjects, all performing the Token Test. RESULTS: One hundred subjects (52%) were men and 92 (48%) women. Their education varied from 0 to 16 years (average: 6.5; standard deviation: 4.53). We identified the lesion sites in 104 patients: 89% in the left hemisphere and 58% due to stroke. The incidence of aphasia was 70%; dysarthria and apraxia, 6%; functional alterations in communication, 17%; and 7% were normal. Statistically significant differences appeared when comparing the subgroup to controls in the Token Test. CONCLUSIONS: We believe that this sample contributes to a better understanding of neurological patients with speech and language disturbances and may be useful as a reference for health professionals involved in the rehabilitation of such disorders.

  10. Cross-language differences in the brain network subserving intelligible speech.

    Science.gov (United States)

    Ge, Jianqiao; Peng, Gang; Lyu, Bingjiang; Wang, Yi; Zhuo, Yan; Niu, Zhendong; Tan, Li Hai; Leff, Alexander P; Gao, Jia-Hong

    2015-03-10

    How is language processed in the brain by native speakers of different languages? Is there one brain system for all languages or are different languages subserved by different brain systems? The first view emphasizes commonality, whereas the second emphasizes specificity. We investigated the cortical dynamics involved in processing two very diverse languages: a tonal language (Chinese) and a nontonal language (English). We used functional MRI and dynamic causal modeling analysis to compute and compare brain network models exhaustively with all possible connections among nodes of language regions in temporal and frontal cortex and found that the information flow from the posterior to anterior portions of the temporal cortex was commonly shared by Chinese and English speakers during speech comprehension, whereas the inferior frontal gyrus received neural signals from the left posterior portion of the temporal cortex in English speakers and from the bilateral anterior portion of the temporal cortex in Chinese speakers. Our results revealed that, although speech processing is largely carried out in the common left hemisphere classical language areas (Broca's and Wernicke's areas) and anterior temporal cortex, speech comprehension across different language groups depends on how these brain regions interact with each other. Moreover, the right anterior temporal cortex, which is crucial for tone processing, is equally important as its left homolog, the left anterior temporal cortex, in modulating the cortical dynamics in tone language comprehension. The current study pinpoints the importance of the bilateral anterior temporal cortex in language comprehension that is downplayed or even ignored by popular contemporary models of speech comprehension.

  11. Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit.

    Science.gov (United States)

    Haley, Katarina L; Cunningham, Kevin T; Eaton, Catherine Torrington; Jacks, Adam

    2018-02-15

    Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate. We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis. Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency. Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicate that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.

  12. Participation of the classical speech areas in auditory long-term memory.

    Directory of Open Access Journals (Sweden)

    Anke Ninija Karabanov

    Full Text Available Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the superior temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results.

  13. Listening to an Audio Drama Activates Two Processing Networks, One for All Sounds, Another Exclusively for Speech

    Science.gov (United States)

    Boldt, Robert; Malinen, Sanna; Seppä, Mika; Tikka, Pia; Savolainen, Petri; Hari, Riitta; Carlson, Synnöve

    2013-01-01

    Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two (covering non-overlapping areas of the auditory cortex) were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time-courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds. PMID:23734202
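    The ISC map described above rests on a simple statistic: for each voxel, the Pearson correlation of the time course between every pair of subjects, averaged over pairs. A minimal pure-Python sketch of that statistic for one voxel (illustrative only; real analyses run on preprocessed 4-D fMRI data with significance testing):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def isc(timecourses):
    """Intersubject correlation for one voxel: mean Pearson r over
    all subject pairs."""
    pairs = list(combinations(timecourses, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Three "subjects" tracking the same stimulus-driven signal with
# different offsets and scales still give perfect ISC:
sig = [0, 1, 2, 1, 0, -1, -2, -1]
subs = [sig, [v + 0.5 for v in sig], [v * 2 for v in sig]]
print(round(isc(subs), 3))
```

    Voxels whose activity is driven by the shared stimulus yield high mean pairwise r; voxels dominated by intrinsic fluctuations yield values near zero.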

  14. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Full Text Available Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two (covering non-overlapping areas of the auditory cortex) were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time-courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.

  15. Auditory Processing after Early Left Hemisphere Injury: A Case Report

    Directory of Open Access Journals (Sweden)

    Cristina Ferraz Borges Murphy

    2017-05-01

    Full Text Available Few studies have addressed the long-term outcomes of early brain injury, especially after hemorrhagic stroke. This is the first study to report a case of acquired auditory processing disorder in a 10-year-old child who had a severe left hemorrhagic cerebral infarction at 13 months of age, compromising nearly all of the left temporal lobe. This case, therefore, is an excellent and rare opportunity to investigate the neural plasticity of the central auditory system in a developing brain following severe brain damage. After assuring normal functioning of the peripheral auditory system, a series of behavioral auditory processing tests was applied in dichotic and monaural listening conditions and with verbal and non-verbal stimuli. For all verbal dichotic tasks (dichotic digits, competing words, and competing sentences tests), good performance in the left ear, especially for the dichotic digits test (100%), and zero performance in the right ear were observed. For monaural low-redundancy tests, the patient also exhibited good performance on the auditory figure-ground and time-compressed sentences tests in the left ear. In the right ear, a very poor performance was observed, but slightly better than in the dichotic tasks. Impaired performance was also observed in the LiSN test in terms of spatial advantage and, for the Pitch Pattern Sequence test, the only non-verbal test applied, the patient performed within the normal range in both ears. These results are interpreted taking into consideration the anatomical location of the stroke lesion and also the influence of hemispheric specialization for language on auditory processing performance.

  16. Speech repetition as a window on the neurobiology of auditory-motor integration for speech: A voxel-based lesion symptom mapping study.

    Science.gov (United States)

    Rogalsky, Corianne; Poppa, Tasha; Chen, Kuan-Hua; Anderson, Steven W; Damasio, Hanna; Love, Tracy; Hickok, Gregory

    2015-05-01

    For more than a century, speech repetition has been used as an assay for gauging the integrity of the auditory-motor pathway in aphasia, thought classically to involve a linkage between Wernicke's area and Broca's area via the arcuate fasciculus. During the last decade, evidence primarily from functional imaging in healthy individuals has refined this picture both computationally and anatomically, suggesting the existence of a cortical hub located at the parietal-temporal boundary (area Spt) that functions to integrate auditory and motor speech networks for both repetition and spontaneous speech production. While functional imaging research can pinpoint the regions activated in repetition/auditory-motor integration, lesion-based studies are needed to infer causal involvement. Previous lesion studies of repetition have yielded mixed results with respect to Spt's critical involvement in speech repetition. The present study used voxel-based lesion symptom mapping (VLSM) to investigate the neuroanatomy of repetition of both real words and non-words in a sample of 47 patients with focal left hemisphere brain damage. VLSMs identified a large voxel cluster spanning gray and white matter in the left temporal-parietal junction, including area Spt, where damage was significantly related to poor non-word repetition. Repetition of real words implicated a very similar dorsal network including area Spt. Cortical regions including Spt were implicated in repetition performance even when white matter damage was factored out. In addition, removing variance associated with speech perception abilities did not alter the overall lesion pattern for either task. Together with past functional imaging work, our results suggest that area Spt is integral in both word and non-word repetition, that its contribution is above and beyond that made by white matter pathways, and is not driven by perceptual processes alone. These findings are highly consistent with the claim that Spt is an area of
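    The core of a VLSM analysis like the one above is a per-voxel group comparison: patients whose lesion includes the voxel versus those whose lesion spares it, contrasted on the behavioral score (e.g., repetition accuracy). A stripped-down sketch, assuming binarized lesion masks and deliberately omitting the permutation-based multiple-comparison correction and minimum-lesion-count thresholds a real analysis requires:

```python
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic comparing two samples of scores."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / sqrt(va / na + vb / nb)

def vlsm_map(lesions, scores):
    """Per-voxel t statistics contrasting behavioral scores of
    patients whose lesion covers the voxel (1) vs. spares it (0).
    `lesions`: one binary list per patient; `scores`: one per patient."""
    tmap = []
    for v in range(len(lesions[0])):
        hit = [s for les, s in zip(lesions, scores) if les[v]]
        spared = [s for les, s in zip(lesions, scores) if not les[v]]
        if len(hit) < 2 or len(spared) < 2:
            tmap.append(None)  # too few patients to test this voxel
        else:
            tmap.append(welch_t(hit, spared))
    return tmap

# Six patients, two voxels: damage at voxel 0 tracks low scores,
# damage at voxel 1 does not.
lesions = [[1, 0], [1, 0], [1, 1], [0, 1], [0, 0], [0, 1]]
scores = [40, 45, 35, 90, 85, 95]
tmap = vlsm_map(lesions, scores)  # tmap[0] strongly negative
```

    Voxels where the lesioned group scores far below the spared group (large negative t) are the candidate lesion correlates of the deficit.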

  17. Neural entrainment to speech modulates speech intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Başkent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  18. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  19. Pragmatic Study of Directive Speech Acts in Stories in Alquran

    Science.gov (United States)

    Santosa, Rochmat Budi; Nurkamto, Joko; Baidan, Nashruddin; Sumarlam

    2016-01-01

    This study aims at describing the directive speech acts in the verses that contain the stories in the Qur'an. Specifically, the objectives of this study are to assess the sub directive speech acts contained in the verses of the stories and the dominant directive speech acts. The research target is the verses ("ayat") containing stories…

  20. Apraxia of Speech

    Science.gov (United States)

    A consumer health information page on apraxia of speech (AOS), also known as acquired apraxia of speech: what AOS is, and where to find additional information about it.

  1. Human neuromagnetic steady-state responses to amplitude-modulated tones, speech, and music.

    Science.gov (United States)

    Lamminmäki, Satu; Parkkonen, Lauri; Hari, Riitta

    2014-01-01

    Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears' inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs. MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales. The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas
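    The stimulus construction above amounts to multiplying the sound by a raised sinusoid at the modulation frequency. A sketch of one common formulation (the study does not give its exact envelope equation, so the peak-preserving normalization here is an assumption):

```python
from math import sin, pi

def amplitude_modulate(x, fm, depth, fs):
    """Sinusoidally amplitude-modulate samples `x` at modulation
    frequency `fm` (Hz) and depth `depth` (0..1), sample rate `fs`.
    Envelope (1 + depth*sin(2*pi*fm*t)) / (1 + depth) keeps the peak
    amplitude from exceeding the original signal's."""
    return [s * (1 + depth * sin(2 * pi * fm * n / fs)) / (1 + depth)
            for n, s in enumerate(x)]

# One second of a 500 Hz carrier, modulated at 41.1 Hz, 100% depth
# (the modulation frequency used in the study above).
fs = 8000
carrier = [sin(2 * pi * 500 * n / fs) for n in range(fs)]
modulated = amplitude_modulate(carrier, 41.1, 1.0, fs)
```

    At 100% depth the envelope periodically reaches zero; at 0% depth the signal passes through unchanged, which matches the intuition that lower depths degrade the stimulus less.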

  2. Altered resting-state network connectivity in stroke patients with and without apraxia of speech.

    Science.gov (United States)

    New, Anneliese B; Robin, Donald A; Parkinson, Amy L; Duffy, Joseph R; McNeil, Malcolm R; Piguet, Olivier; Hornberger, Michael; Price, Cathy J; Eickhoff, Simon B; Ballard, Kirrie J

    2015-01-01

    Motor speech disorders, including apraxia of speech (AOS), account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS), inferior frontal gyrus (IFG), and ventral premotor cortex (PM)) in a group of 32 left hemisphere stroke patients and 18 healthy, age-matched controls. Two expert clinicians rated severity of AOS, dysarthria and nonverbal oral apraxia of the patients. Fifteen individuals were categorized as AOS and 17 were AOS-absent. Comparison of connectivity in patients with and without AOS demonstrated that AOS patients had reduced connectivity between bilateral PM, and this reduction correlated with the severity of AOS impairment. In addition, AOS patients had negative connectivity between the left PM and right aINS and this effect decreased with increasing severity of non-verbal oral apraxia. These results highlight left PM involvement in AOS, begin to differentiate its neural mechanisms from those of other motor impairments following stroke, and help inform us of the neural mechanisms driving differences in speech motor planning and programming impairment following stroke.

  3. Altered resting-state network connectivity in stroke patients with and without apraxia of speech

    Directory of Open Access Journals (Sweden)

    Anneliese B. New

    2015-01-01

    Full Text Available Motor speech disorders, including apraxia of speech (AOS), account for over 50% of the communication disorders following stroke. Given its prevalence and impact, and the need to understand its neural mechanisms, we used resting state functional MRI to examine functional connectivity within a network of regions previously hypothesized as being associated with AOS (bilateral anterior insula (aINS), inferior frontal gyrus (IFG), and ventral premotor cortex (PM)) in a group of 32 left hemisphere stroke patients and 18 healthy, age-matched controls. Two expert clinicians rated severity of AOS, dysarthria and nonverbal oral apraxia of the patients. Fifteen individuals were categorized as AOS and 17 were AOS-absent. Comparison of connectivity in patients with and without AOS demonstrated that AOS patients had reduced connectivity between bilateral PM, and this reduction correlated with the severity of AOS impairment. In addition, AOS patients had negative connectivity between the left PM and right aINS and this effect decreased with increasing severity of non-verbal oral apraxia. These results highlight left PM involvement in AOS, begin to differentiate its neural mechanisms from those of other motor impairments following stroke, and help inform us of the neural mechanisms driving differences in speech motor planning and programming impairment following stroke.

  4. Right Hemisphere Dominance in Visual Statistical Learning

    Science.gov (United States)

    Roser, Matthew E.; Fiser, Jozsef; Aslin, Richard N.; Gazzaniga, Michael S.

    2011-01-01

    Several studies report a right hemisphere advantage for visuospatial integration and a left hemisphere advantage for inferring conceptual knowledge from patterns of covariation. The present study examined hemispheric asymmetry in the implicit learning of new visual feature combinations. A split-brain patient and normal control participants viewed…

  5. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red

  6. Speech dynamics are coded in the left motor cortex in fluent speakers but not in adults who stutter.

    Science.gov (United States)

    Neef, Nicole E; Hoang, T N Linh; Neef, Andreas; Paulus, Walter; Sommer, Martin

    2015-03-01

    The precise excitability regulation of neuronal circuits in the primary motor cortex is central to the successful and fluent production of speech. Our question was whether the involuntary execution of undesirable movements, e.g. stuttering, is linked to an insufficient excitability tuning of neural populations in the orofacial region of the primary motor cortex. We determined the speech-related time course of excitability modulation in the left and right primary motor tongue representation. Thirteen fluent speakers (four females, nine males; aged 23-44) and 13 adults who stutter (four females, nine males, aged 21-55) were asked to build verbs with the verbal prefix 'auf'. Single-pulse transcranial magnetic stimulation was applied over the primary motor cortex during the transition phase between a fixed labiodental articulatory configuration and immediately following articulatory configurations, at different latencies after transition onset. Bilateral electromyography was recorded from self-adhesive electrodes placed on the surface of the tongue. Off-line, we extracted the motor evoked potential amplitudes and normalized these amplitudes to the individual baseline excitability during the fixed configuration. Fluent speakers demonstrated a prominent left hemisphere increase of motor cortex excitability in the transition phase (P = 0.009). In contrast, the excitability of the right primary motor tongue representation was unchanged. Interestingly, adults afflicted with stuttering revealed a lack of left-hemisphere facilitation. Moreover, the magnitude of facilitation was negatively correlated with stuttering frequency. Although orofacial midline muscles are bilaterally innervated from corticobulbar projections of both hemispheres, our results indicate that speech motor plans are controlled primarily in the left primary speech motor cortex. This speech motor planning-related asymmetry towards the left orofacial motor cortex is missing in stuttering. Moreover, a negative

  7. Lesion localization of speech comprehension deficits in chronic aphasia.

    Science.gov (United States)

    Pillay, Sara B; Binder, Jeffrey R; Humphries, Colin; Gross, William L; Book, Diane S

    2017-03-07

    Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. © 2017 American Academy of Neurology.

  8. Mapping the brain's orchestration during speech comprehension: task-specific facilitation of regional synchrony in neural networks

    Directory of Open Access Journals (Sweden)

    Keil Andreas

    2004-10-01

    Full Text Available Abstract Background How does the brain convert sounds and phonemes into comprehensible speech? In the present magnetoencephalographic study we examined the hypothesis that the coherence of electromagnetic oscillatory activity within and across brain areas indicates neurophysiological processes linked to speech comprehension. Results Amplitude-modulated (sinusoidal, 41.5 Hz) auditory verbal and nonverbal stimuli served to drive steady-state oscillations in neural networks involved in speech comprehension. Stimuli were presented to 12 subjects in the following conditions: (a) an incomprehensible string of words, (b) the same string of words after being introduced as a comprehensible sentence by proper articulation, and (c) nonverbal stimulations that included a 600-Hz tone, a scale, and a melody. Coherence, defined as correlated activation of magnetic steady state fields across brain areas and measured as simultaneous activation of current dipoles in source space (Minimum-Norm-Estimates), increased within left temporal-posterior areas when the sound string was perceived as a comprehensible sentence. Intra-hemispheric coherence was larger within the left than the right hemisphere for the sentence condition (b) relative to all other conditions, and tended to be larger within the right than the left hemisphere for the nonverbal stimuli of condition (c) (tone and melody) relative to the other conditions, leading to a more pronounced hemispheric asymmetry for nonverbal than verbal material. Conclusions We conclude that coherent neuronal network activity may index encoding of verbal information on the sentence level and can be used as a tool to investigate auditory speech comprehension.
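    Frequency-tagged designs like this one can estimate coherence at the known modulation frequency alone. The sketch below is a generic single-frequency, across-epochs coherence estimator (not the authors' Minimum-Norm source-space pipeline); the toy example uses 40 Hz rather than 41.5 Hz so the epochs contain an integer number of cycles.

```python
from math import sin, cos, pi

def dft_bin(x, f, fs):
    """Complex Fourier coefficient of samples `x` at frequency f (Hz)."""
    re = sum(s * cos(2 * pi * f * n / fs) for n, s in enumerate(x))
    im = -sum(s * sin(2 * pi * f * n / fs) for n, s in enumerate(x))
    return complex(re, im)

def coherence_at(f, epochs_a, epochs_b, fs):
    """Magnitude-squared coherence between two channels at a single
    frequency, estimated across stimulus-locked epochs: a stable
    phase relation gives ~1, a shifting relation gives ~0."""
    cross, saa, sbb = 0j, 0.0, 0.0
    for a, b in zip(epochs_a, epochs_b):
        fa, fb = dft_bin(a, f, fs), dft_bin(b, f, fs)
        cross += fa * fb.conjugate()
        saa += abs(fa) ** 2
        sbb += abs(fb) ** 2
    return abs(cross) ** 2 / (saa * sbb)

# Four 1-s epochs at fs = 1000 Hz; channel B either keeps a fixed
# phase relation to A or rotates by 90 degrees every epoch.
fs, f = 1000, 40.0
a = [[sin(2 * pi * f * n / fs) for n in range(fs)] for _ in range(4)]
locked = [[sin(2 * pi * f * n / fs + 0.7) for n in range(fs)]
          for _ in range(4)]
drifting = [[sin(2 * pi * f * n / fs + k * pi / 2) for n in range(fs)]
            for k in range(4)]
```

    A consistent phase lag across epochs (the `locked` channel) yields coherence near 1 even though the two channels are not in phase; only phase *stability*, not phase equality, matters.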

  9. Speech Enhancement

    DEFF Research Database (Denmark)

    Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    and their performance bounded and assessed in terms of noise reduction and speech distortion. The book shows how various filter designs can be obtained in this framework, including the maximum SNR, Wiener, LCMV, and MVDR filters, and how these can be applied in various contexts, like in single-channel and multichannel...
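    A minimal single-channel illustration of the Wiener filter named above: the per-bin gain is SNR/(1 + SNR), with the a priori SNR estimated here by simple power subtraction (a crude estimator chosen for brevity, not one of the book's designs).

```python
def wiener_gain(snr):
    """Wiener gain for one spectral bin, given the a priori SNR on a
    linear scale: 0 for noise-only bins, approaching 1 for clean bins."""
    if snr == float("inf"):
        return 1.0
    return snr / (1.0 + snr)

def enhance_frame(noisy_power, noise_power):
    """Apply per-bin Wiener gains to one frame's power spectrum.
    SNR per bin is estimated by power subtraction, floored at 0."""
    out = []
    for py, pn in zip(noisy_power, noise_power):
        snr = max(py - pn, 0.0) / pn if pn > 0 else float("inf")
        g = wiener_gain(snr)
        out.append(g * g * py)  # gain acts on amplitude; square for power
    return out

# Bin 0: strong speech over unit noise (kept nearly intact);
# bin 1: noise only (suppressed to zero).
print(enhance_frame([100.0, 1.0], [1.0, 1.0]))
```

    This illustrates the trade-off the book analyzes: high-SNR bins pass almost unchanged (little speech distortion), while low-SNR bins are attenuated toward zero (noise reduction).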

  10. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important across research, engineering, and diagnostics as a way to quantify very different phenomena: the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these.

  11. 78 FR 49717 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  12. Speech-induced striatal dopamine release is left lateralized and coupled to functional striatal circuits in healthy humans: A combined PET, fMRI and DTI study

    Science.gov (United States)

    Simonyan, Kristina; Herscovitch, Peter; Horwitz, Barry

    2013-01-01

    Considerable progress has recently been made in understanding the brain mechanisms underlying speech and language control. However, the neurochemical underpinnings of normal speech production remain largely unknown. We investigated the extent of striatal endogenous dopamine release and its influences on the organization of functional striatal speech networks during production of meaningful English sentences using a combination of positron emission tomography (PET) with the dopamine D2/D3 receptor radioligand [11C]raclopride and functional MRI (fMRI). In addition, we used diffusion tensor tractography (DTI) to examine the extent of dopaminergic modulatory influences on striatal structural network organization. We found that, during sentence production, endogenous dopamine was released in the ventromedial portion of the dorsal striatum, in both its associative and sensorimotor functional divisions. In the associative striatum, speech-induced dopamine release established a significant relationship with neural activity and influenced the left-hemispheric lateralization of striatal functional networks. In contrast, there were no significant effects of endogenous dopamine release on the lateralization of striatal structural networks. Our data provide the first evidence for endogenous dopamine release in the dorsal striatum during normal speaking and point to the possible mechanisms behind the modulatory influences of dopamine on the organization of functional brain circuits controlling normal human speech. PMID:23277111

  13. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children.

    Directory of Open Access Journals (Sweden)

    Nora Maria Raschle

    Full Text Available Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound specific processing of spoken object words including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to adults by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain which may further our understanding of the neuronal mechanism of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.

  14. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children.

    Science.gov (United States)

    Raschle, Nora Maria; Smith, Sara Ashley; Zuk, Jennifer; Dauvermann, Maria Regina; Figuccio, Michael Joseph; Gaab, Nadine

    2014-01-01

    Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound specific processing of spoken object words including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to adults by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain which may further our understanding of the neuronal mechanism of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.

  15. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    International Nuclear Information System (INIS)

    Yoo, Ic Ryung; Ahn, Kook Jin; Lee, Jae Mun; Kim, Tae

    1999-01-01

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. Series of 120 consecutive images per section were acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by means of both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere, and compared the results of the visual method and the auditory method. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted when comparing the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate
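    The lateralization index referred to above is conventionally computed from activated-pixel (or voxel) counts as LI = (L - R) / (L + R). A small sketch of that convention; the +/-0.2 dominance cutoff is a common rule of thumb assumed here, not a value taken from the paper:

```python
def lateralization_index(left_pixels, right_pixels):
    """LI = (L - R) / (L + R): +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = symmetric."""
    total = left_pixels + right_pixels
    return (left_pixels - right_pixels) / total if total else 0.0

def dominance(li, threshold=0.2):
    """Classify dominance from LI; the threshold is an assumed
    convention (published studies use various cutoffs)."""
    if li >= threshold:
        return "left"
    if li <= -threshold:
        return "right"
    return "bilateral"

# 420 activated pixels in the left inferior frontal gyrus vs. 180
# in the right gives LI = 0.4, i.e. left-hemisphere dominance.
print(dominance(lateralization_index(420, 180)))  # left
```

    Comparing the LI obtained with auditory versus visual syllable delivery, as the study does, then reduces to computing this index once per stimulation method and per region.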

  16. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Ic Ryung; Ahn, Kook Jin; Lee, Jae Mun [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Tae [The Catholic Magnetic Resonance Research Center, Seoul (Korea, Republic of)

    1999-12-01

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. A series of 120 consecutive images per section was acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere and compared the results of the visual and auditory methods. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than with the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted between the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate.
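The abstract does not spell out the index formula, but the standard fMRI convention computes a lateralization index from supra-threshold voxel (pixel) counts in homologous left and right regions as LI = (L - R) / (L + R). A minimal sketch under that assumption; the voxel counts below are hypothetical:

```python
def lateralization_index(left_voxels: int, right_voxels: int) -> float:
    """Standard fMRI lateralization index: +1 = fully left-dominant,
    -1 = fully right-dominant, 0 = perfectly bilateral."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no activated voxels in either hemisphere")
    return (left_voxels - right_voxels) / total

# Hypothetical example: 420 supra-threshold voxels in the left inferior
# frontal gyrus, 180 in the right homologue.
li = lateralization_index(420, 180)
print(f"LI = {li:.2f}")  # positive => left-hemisphere dominant
```

In practice the threshold used to count "activated" voxels strongly influences LI, which is one reason the auditory and visual paradigms above can yield different indices per subject.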

  17. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... amends telecommunications relay services (TRS) mandatory minimum standards applicable to Speech- to...

  18. Perturbation of the left inferior frontal gyrus triggers adaptive plasticity in the right homologous area during speech production

    DEFF Research Database (Denmark)

    Hartwigsen, Gesa; Saur, Dorothee; Price, Cathy J

    2013-01-01

    The role of the right hemisphere in aphasia recovery after left hemisphere damage remains unclear. Increased activation of the right hemisphere has been observed after left hemisphere damage. This may simply reflect a release from transcallosal inhibition that does not contribute to language...... hemisphere lesion. Our findings lend further support to the notion that increased activation of homologous right hemisphere areas supports aphasia recovery after left hemisphere damage....

  19. Coupled neural systems underlie the production and comprehension of naturalistic narrative speech.

    Science.gov (United States)

    Silbert, Lauren J; Honey, Christopher J; Simony, Erez; Poeppel, David; Hasson, Uri

    2014-10-28

    Neuroimaging studies of language have typically focused on either production or comprehension of single speech utterances such as syllables, words, or sentences. In this study we used a new approach to functional MRI acquisition and analysis to characterize the neural responses during production and comprehension of complex real-life speech. First, using a time-warp based intrasubject correlation method, we identified all areas that are reliably activated in the brains of speakers telling a 15-min-long narrative. Next, we identified areas that are reliably activated in the brains of listeners as they comprehended that same narrative. This allowed us to identify networks of brain regions specific to production and comprehension, as well as those that are shared between the two processes. The results indicate that production of a real-life narrative is not localized to the left hemisphere but recruits an extensive bilateral network, which overlaps extensively with the comprehension system. Moreover, by directly comparing the neural activity time courses during production and comprehension of the same narrative we were able to identify not only the spatial overlap of activity but also areas in which the neural activity is coupled across the speaker's and listener's brains during production and comprehension of the same narrative. We demonstrate widespread bilateral coupling between production- and comprehension-related processing within both linguistic and nonlinguistic areas, exposing the surprising extent of shared processes across the two systems.

  20. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    Full Text Available This paper presents a method of speech recognition based on pattern recognition techniques. Learning consists of determining the unique characteristics of a word (its cepstral coefficients) by eliminating those characteristics that differ from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, application of a Hamming window, transformation to the frequency domain via the Fourier transform, calculation of the magnitude spectrum, filtering of the data, and determination of the cepstral coefficients.
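The pipeline described above can be sketched as a simplified real-cepstrum computation on a single pre-cut frame. This is not the paper's exact implementation: the noise-removal and filterbank stages are omitted, and the frame length and coefficient count are illustrative choices.

```python
import numpy as np

def cepstral_coefficients(frame: np.ndarray, n_coeffs: int = 13) -> np.ndarray:
    """One analysis frame -> real cepstrum:
    Hamming window, FFT, log-magnitude spectrum, inverse FFT."""
    windowed = frame * np.hamming(len(frame))    # taper frame edges
    spectrum = np.abs(np.fft.rfft(windowed))     # magnitude spectrum
    log_spec = np.log(spectrum + 1e-10)          # avoid log(0)
    cepstrum = np.fft.irfft(log_spec)            # back to quefrency domain
    return cepstrum[:n_coeffs]                   # keep low-order coefficients

# 25 ms frame of a synthetic 200 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(int(0.025 * sr)) / sr
frame = np.sin(2 * np.pi * 200 * t)
coeffs = cepstral_coefficients(frame)
print(coeffs.shape)  # (13,)
```

Matching during recognition would then compare such coefficient vectors against the stored dictionary entries, e.g. by Euclidean distance.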

  1. Dichotic listening as an index of lateralization of speech perception in familial risk children with and without dyslexia.

    Science.gov (United States)

    Hakvoort, Britt; van der Leij, Aryan; van Setten, Ellie; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia

    2016-11-01

    Atypical language lateralization has been marked as one of the factors that may contribute to the development of dyslexia. Indeed, atypical lateralization of linguistic functions such as speech processing in dyslexia has been demonstrated in neuroimaging studies, but also with the behavioral dichotic listening (DL) method. However, so far, DL results have been mixed. The current study assesses lateralization of speech processing by using DL in a sample of children at familial risk (FR) for dyslexia. In order to determine whether atypical lateralization of speech processing relates to reading ability, or is a correlate of being at familial risk, the current study compares the laterality index of FR children who did and did not become dyslexic, and a control group of readers without dyslexia. DL was tested in 3rd grade and in 5th/6th grade. Results indicate that at both time points, all three groups have a right-ear advantage, indicative of more pronounced left-hemispheric processing. However, the FR-dyslexic children were poorer at reporting from the left ear than controls and FR-nondyslexic children. This impediment relates to reading fluency. Copyright © 2016 Elsevier Inc. All rights reserved.
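The right-ear advantage in dichotic listening is conventionally quantified as a percent laterality index over correct reports per ear. The formula and trial counts below are illustrative conventions, not values taken from this study:

```python
def ear_advantage_index(right_correct: int, left_correct: int) -> float:
    """Dichotic-listening laterality index in percent: positive values
    indicate a right-ear advantage (left-hemisphere speech processing)."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical child: 34 correct right-ear reports, 22 correct left-ear reports
print(round(ear_advantage_index(34, 22), 1))  # 21.4 -> right-ear advantage
```

On this scale, the FR-dyslexic pattern described above (fewer correct left-ear reports at comparable right-ear performance) would show up as a larger positive index, not an absent one.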

  2. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad

    2015-12-01

    Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigate hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign-naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left-lateralized in the phonological fluency task but there was no consistent pattern of lateralization for non-sign repetition in these hearing non-signers. The current data demonstrate stronger left-hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Does the individual adaptation of standardized speech paradigms for clinical functional magnetic resonance imaging (fMRI) affect the localization of the language-dominant hemisphere and of Broca's and Wernicke's areas?

    International Nuclear Information System (INIS)

    Konrad, F.; Nennig, E.; Kress, B.; Sartor, K.; Stippich, C.; Ochmann, H.

    2005-01-01

    Purpose: Functional magnetic resonance imaging (fMRI) localizes Broca's area (B) and Wernicke's area (W) and the hemisphere dominant for language. In clinical fMRI, adapting the stimulation paradigms to each patient's individual cognitive capacity is crucial for diagnostic success. To interpret clinical fMRI findings correctly, we studied the effect of varying frequency and number of stimuli on functional localization, determination of language dominance and BOLD signals. Materials and Methods: Ten volunteers (VP) were investigated at 1.5 Tesla during visually triggered sentence generation using a standardized block design. In four different measurements, the stimuli were presented to each VP with frequencies of (1/1) s, (1/2) s, (1/3) s and (1/6) s. Results: The functional localizations and the correlations of the measured BOLD signals to the applied hemodynamic reference function (r) were almost independent of the frequency and number of stimuli in both hemispheres, whereas the relative BOLD signal changes (ΔS) in B and W increased with the stimulation rate, which also changed the lateralization indices. The strongest BOLD activations were achieved with the highest stimulation rate or with the maximum language production task, respectively. Conclusion: The adaptation of language paradigms necessary in clinical fMRI does not alter the functional localizations but changes the BOLD signals and language lateralization, which should not be attributed to the underlying brain pathology. (orig.)

  4. [Does the individual adaptation of standardized speech paradigms for clinical functional magnetic resonance imaging (fMRI) affect the localization of the language-dominant hemisphere and of Broca's and Wernicke's areas?].

    Science.gov (United States)

    Konrad, F; Nennig, E; Ochmann, H; Kress, B; Sartor, K; Stippich, C

    2005-03-01

    Functional magnetic resonance imaging (fMRI) localizes Broca's area (B) and Wernicke's area (W) and the hemisphere dominant for language. In clinical fMRI, adapting the stimulation paradigms to each patient's individual cognitive capacity is crucial for diagnostic success. To interpret clinical fMRI findings correctly, we studied the effect of varying frequency and number of stimuli on functional localization, determination of language dominance and BOLD signals. Ten volunteers (VP) were investigated at 1.5 Tesla during visually triggered sentence generation using a standardized block design. In four different measurements, the stimuli were presented to each VP with frequencies of (1/1) s, (1/2) s, (1/3) s and (1/6) s. The functional localizations and the correlations of the measured BOLD signals to the applied hemodynamic reference function (r) were almost independent of the frequency and number of stimuli in both hemispheres, whereas the relative BOLD signal changes (ΔS) in B and W increased with the stimulation rate, which also changed the lateralization indices. The strongest BOLD activations were achieved with the highest stimulation rate or with the maximum language production task, respectively. The adaptation of language paradigms necessary in clinical fMRI does not alter the functional localizations but changes the BOLD signals and language lateralization, which should not be attributed to the underlying brain pathology.
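The correlation r between a voxel's BOLD time series and the hemodynamic reference function can be sketched as follows. This assumes a simplified, unconvolved boxcar reference matching a 120-image, three-cycle block design like the one above; an actual analysis would convolve the boxcar with a hemodynamic response function.

```python
import numpy as np

def activation_r(voxel_ts: np.ndarray, n_cycles: int = 3) -> float:
    """Pearson correlation of a voxel time series with a boxcar
    task/rest reference function (simplified hemodynamic reference)."""
    n = len(voxel_ts)
    block = n // (2 * n_cycles)                         # samples per half-cycle
    reference = np.tile(np.r_[np.ones(block), np.zeros(block)], n_cycles)
    reference = np.resize(reference, n)                 # pad/trim to series length
    return float(np.corrcoef(voxel_ts, reference)[0, 1])

# 120 scans, 3 activation/rest cycles
rng = np.random.default_rng(0)
signal = np.tile(np.r_[np.ones(20), np.zeros(20)], 3)  # idealized task response
noisy = signal + 0.5 * rng.standard_normal(120)        # add measurement noise
print(round(activation_r(noisy), 2))
```

Voxels whose r exceeds a chosen threshold are the "activated pixels" that enter lateralization-index calculations.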

  5. Domination, Eternal Domination, and Clique Covering

    Directory of Open Access Journals (Sweden)

    Klostermeyer William F.

    2015-05-01

    Full Text Available Eternal and m-eternal domination are concerned with using mobile guards to protect a graph against infinite sequences of attacks at vertices. Eternal domination allows one guard to move per attack, whereas more than one guard may move per attack in the m-eternal domination model. Inequality chains consisting of the domination, eternal domination, m-eternal domination, independence, and clique covering numbers of a graph are explored in this paper.
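The domination number at the head of these inequality chains can be computed by brute force for small graphs: try every vertex subset in increasing size until one dominates the whole graph. A sketch (exponential time, so only suitable for tiny instances):

```python
from itertools import combinations

def domination_number(adj: dict) -> int:
    """Smallest dominating set size by brute force.
    adj maps each vertex to the set of its neighbours."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            covered = set(cand)                 # a set dominates itself...
            for v in cand:
                covered |= adj[v]               # ...plus all its neighbours
            if covered == set(vertices):
                return k
    return len(vertices)

# 5-cycle C5: the domination number is 2
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(domination_number(c5))  # 2
```

The eternal and m-eternal variants are lower-bounded by this value, since any configuration of guards that survives all attack sequences must in particular dominate the graph.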

  6. Further fMRI Validation of the Visual Half Field Technique as an Indicator of Language Laterality: A Large-Group Analysis

    Science.gov (United States)

    Van der Haegen, Lise; Cai, Qing; Seurinck, Ruth; Brysbaert, Marc

    2011-01-01

    The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to…

  7. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    Directory of Open Access Journals (Sweden)

    Dhana Wolf

    2017-11-01

    Full Text Available Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.

  8. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    Science.gov (United States)

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
PMID:29249945

  9. Internal modeling of upcoming speech: A causal role of the right posterior cerebellum in non-motor aspects of language production.

    Science.gov (United States)

    Runnqvist, Elin; Bonnard, Mireille; Gauvin, Hanna S; Attarian, Shahram; Trébuchon, Agnès; Hartsuiker, Robert J; Alario, F-Xavier

    2016-08-01

    Some language processing theories propose that, just as for other somatic actions, self-monitoring of language production is achieved through internal modeling. The cerebellum is the proposed center of such internal modeling in motor control, and the right cerebellum has been linked to an increasing number of language functions, including predictive processing during comprehension. Relating these findings, we tested whether the right posterior cerebellum has a causal role in self-monitoring of speech errors. Participants received 1 Hz repetitive transcranial magnetic stimulation for 15 min over lobules Crus I and II in the right hemisphere and, in counterbalanced order, over the contralateral area in the left cerebellar hemisphere (control), in order to induce a temporary inactivation of one of these zones. Immediately afterwards, they engaged in a speech production task priming the production of speech errors. Language production was impaired after right compared to left hemisphere stimulation, a finding that provides evidence for a causal role of the cerebellum during language production. We interpreted this role in terms of internal modeling of upcoming speech through a verbal working memory process used to prevent errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Standardization of Speech Corpus

    Directory of Open Access Journals (Sweden)

    Ai-jun Li

    2007-12-01

    Full Text Available Speech corpora are the basis for analyzing the characteristics of speech signals and developing speech synthesis and recognition systems. In China, almost all speech research and development affiliations are developing their own speech corpora. We have so many different kinds of Chinese speech corpora that it is important to be able to conveniently share them, to avoid wasting time and money and to make research work more efficient. The primary goal of this research is to find a standard scheme which allows a corpus to be established more efficiently and used or shared more easily. A huge speech corpus of 10 regional accents of Chinese, RASC863 (a Regional Accent Speech Corpus funded by the National 863 Project), will be exemplified to illuminate the standardization of speech corpus production.

  11. How to engage the right brain hemisphere in aphasics without even singing: evidence for two paths of speech recovery.

    Science.gov (United States)

    Stahl, Benjamin; Henseler, Ilona; Turner, Robert; Geyer, Stefan; Kotz, Sonja A

    2013-01-01

    There is an ongoing debate as to whether singing helps left-hemispheric stroke patients recover from non-fluent aphasia through stimulation of the right hemisphere. According to recent work, it may not be singing itself that aids speech production in non-fluent aphasic patients, but rhythm and lyric type. However, the long-term effects of melody and rhythm on speech recovery are largely unknown. In the current experiment, we tested 15 patients with chronic non-fluent aphasia who underwent either singing therapy, rhythmic therapy, or standard speech therapy. The experiment controlled for phonatory quality, vocal frequency variability, pitch accuracy, syllable duration, phonetic complexity and other influences, such as the acoustic setting and learning effects induced by the testing itself. The results provide the first evidence that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia. This finding may challenge the view that singing causes a transfer of language function from the left to the right hemisphere. Instead, both singing and rhythmic therapy patients made good progress in the production of common, formulaic phrases, known to be supported by right corticostriatal brain areas. This progress occurred at an early stage of both therapies and was stable over time. Conversely, patients receiving standard therapy made less progress in the production of formulaic phrases. They did, however, improve their production of non-formulaic speech, in contrast to singing and rhythmic therapy patients, who did not. In light of these results, it may be worth considering the combined use of standard therapy and the training of formulaic phrases, whether sung or rhythmically spoken. Standard therapy may engage, in particular, left perilesional brain regions, while training of formulaic phrases may open new ways of tapping into right-hemisphere language resources, even without singing.

  12. How to engage the right brain hemisphere in aphasics without even singing: evidence for two paths of speech recovery

    Directory of Open Access Journals (Sweden)

    Benjamin eStahl

    2013-02-01

    Full Text Available There is an ongoing debate as to whether singing helps left-hemispheric stroke patients recover from non-fluent aphasia through stimulation of the right hemisphere. According to recent work, it may not be singing itself that aids speech production in non-fluent aphasic patients, but rhythm and lyric type. However, the long-term effects of melody and rhythm on speech recovery are largely unknown. In the current experiment, we tested 15 patients with chronic non-fluent aphasia who underwent either singing therapy, rhythmic therapy, or standard speech therapy. The experiment controlled for phonatory quality, vocal frequency variability, pitch accuracy, syllable duration, phonetic complexity and other influences, such as the acoustic setting and learning effects induced by the testing itself. The results provide the first evidence that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia. This finding may challenge the view that singing causes a transfer of language function from the left to the right hemisphere. Instead, both singing and rhythmic therapy patients made good progress in the production of common, formulaic phrases—known to be supported by right corticostriatal brain areas. This progress occurred at an early stage of both therapies and was stable over time. Conversely, patients receiving standard therapy made less progress in the production of formulaic phrases. They did, however, improve their production of non-formulaic speech, in contrast to singing and rhythmic therapy patients, who did not. In light of these results, it may be worth considering the combined use of standard therapy and the training of formulaic phrases, whether sung or rhythmically spoken. Standard therapy may engage, in particular, left perilesional brain regions, while training of formulaic phrases may open new ways of tapping into right-hemisphere language resources—even without singing.

  13. Comparison between visual half-field performance and cerebral blood flow changes as indicators of language dominance.

    Science.gov (United States)

    Krach, S; Chen, L M; Hartje, W

    2006-03-01

    The determination of hemispheric language dominance (HLD) can be accomplished in two ways. One approach relies on hemispheric differences in cerebral blood flow velocity (CBFV) changes during language activity, while the other makes use of performance differences between the left and right visual field when verbal stimuli are presented in a tachistoscopic visual field paradigm. Since both methodologically different approaches claim to assess functional HLD, it seems plausible to expect that the respective laterality indices (LI) would correspond. To test this expectation we measured language lateralisation in 58 healthy right-handed, left-handed, and ambidextrous subjects with both approaches. CBFV changes were recorded with functional transcranial Doppler sonography (fTCD). We applied a lexical decision task with bilateral visual field presentation of abstract nouns and, in addition, a task of mental word generation. In the lexical decision task, a highly significant right visual field advantage was observed for number of correct responses and reaction times, while at the same time, and contrary to expectation, the increase of CBFV was significantly higher in the right than in the left hemisphere. During mental word generation, the acceleration of CBFV was significantly higher in the left hemisphere. A comparison between individual LI derived from CBFV measurement during mental word generation and from visual field performance in the lexical decision task showed a moderate correspondence in classifying the subjects' HLD. However, the correlation between the corresponding individual LI was surprisingly low and not significant. The results are discussed with regard to the limited reliability of behavioural LI on the one hand, and the possibility of a fundamental difference between the behavioural and the physiological indicators of laterality on the other.

  14. Sex-linked dominant

    Science.gov (United States)

    Inheritance - sex-linked dominant; Genetics - sex-linked dominant; X-linked dominant; Y-linked dominant ... can be either an autosomal chromosome or a sex chromosome. It also depends on whether the trait ...

  15. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 Hz and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
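The slow intensity modulations analyzed here can be estimated in a crude form by taking the spectrum of the rectified, downsampled amplitude envelope. This sketch is not the paper's filterbank-based method; the 64 Hz envelope rate and the amplitude-modulated test signal are illustrative assumptions:

```python
import numpy as np

def modulation_spectrum(audio: np.ndarray, sr: int):
    """Spectrum of the sound-intensity envelope: rectify the waveform,
    average it down to a 64 Hz envelope, remove DC, then FFT."""
    envelope = np.abs(audio)                    # full-wave rectification
    hop = sr // 64                              # envelope sampled at 64 Hz
    env = envelope[: len(envelope) // hop * hop].reshape(-1, hop).mean(axis=1)
    env = env - env.mean()                      # remove DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1 / 64)
    return freqs, spec

# 2 s of a 1 kHz carrier amplitude-modulated at 5 Hz (a speech-like rate)
sr = 16000
t = np.arange(2 * sr) / sr
audio = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, spec = modulation_spectrum(audio, sr)
print(freqs[np.argmax(spec)])  # peak near 5 Hz, the modulation rate
```

Applied to real recordings, the peak of this spectrum would sit near 5 Hz for speech and near 2 Hz for music, per the abstract's findings.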

  16. Dominant preference and school readiness among grade 1 learners ...

    African Journals Online (AJOL)

    was a significant difference in the total reading score and reading comprehension scores between uni-dominant and cross- dominant children, with the former scoring higher. Associations have been found between handedness and developmental disorders of speech, language and reading, yet these associations are weak ...

  17. Speech and Language Developmental Milestones

    Science.gov (United States)

    Speech and Language Developmental Milestones On this page: How do speech ... and language developmental milestones? How do speech and language develop? The first 3 years of life, when ...

  18. Delayed Speech or Language Development

    Science.gov (United States)

    Delayed Speech or Language Development ... their child is right on schedule. How Are Speech and Language Different? Speech is the verbal expression ...

  19. Domination versus disjunctive domination in graphs | Henning ...

    African Journals Online (AJOL)

    A dominating set in a graph G is a set S of vertices of G such that every vertex not in S is adjacent to a vertex of S. The domination number of G is the minimum cardinality of a dominating set of G. For a positive integer b, a set S of vertices in a graph G is a b-disjunctive dominating set in G if every vertex v not in S is adjacent ...

  20. Mu suppression as an index of sensorimotor contributions to speech processing: evidence from continuous EEG signals.

    Science.gov (United States)

    Cuellar, Megan; Bowers, Andrew; Harkrider, Ashley W; Wilson, Matthew; Saltuklaroglu, Tim

    2012-08-01

    Mu rhythm suppression is an index of sensorimotor activity during the processing of sensory stimuli. The two studies presented here investigate the extent to which this measure is sensitive to differences in acoustic processing. In both studies, participants were required to listen to 90-second acoustic stimulus clips with their eyes closed and identify predetermined targets. Experimental conditions were designed to vary the acoustic processing demands. Mu suppression was measured continuously across central electrodes (C3, Cz, and C4). Ten adult females participated in the first study, in which the target was a pseudoword presented in three conditions (identification, discrimination, discrimination in noise). Mu suppression was strongest and reached significance relative to baseline only in the discrimination-in-noise task at C3 (indicative of left-hemisphere sensorimotor activity) when measured in a 10-12 Hz bandwidth. Thirteen adult females participated in the second study, which measured mu suppression to acoustic stimuli with 'segmentation' (i.e., separating a parsed stimulus into individual components) versus non-segmentation requirements in both speech and tone discrimination conditions. Significantly greater overall suppression to speech relative to tone tasks was found in the 10-12 Hz bandwidth. Further, suppression relative to baseline was significant only at C3 during the speech discrimination with segmentation task. Taken together, findings indicate that mu rhythm suppression in acoustic processing is sensitive to dorsal stream processing. More specifically, it is sensitive to (1) increases in overall processing demands and (2) processing linguistic versus non-linguistic information. Copyright © 2012 Elsevier B.V. All rights reserved.
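Suppression relative to baseline of this kind is commonly quantified as a log ratio of task to baseline power in the mu band. A sketch under that assumption; the 10-12 Hz band follows the abstract, while the synthetic signals and sampling rate are illustrative:

```python
import numpy as np

def mu_suppression(task: np.ndarray, baseline: np.ndarray, sr: int,
                   band=(10.0, 12.0)) -> float:
    """Log ratio of task to baseline power in the mu band;
    negative values indicate suppression (sensorimotor engagement)."""
    def band_power(x: np.ndarray) -> float:
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1 / sr)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(spec[mask].sum())
    return float(np.log(band_power(task) / band_power(baseline)))

# Synthetic check: an 11 Hz rhythm whose amplitude halves during the task,
# so band power drops to a quarter of baseline
sr = 250
t = np.arange(4 * sr) / sr
baseline = np.sin(2 * np.pi * 11 * t)
task = 0.5 * baseline
print(mu_suppression(task, baseline, sr))  # log(0.25) ≈ -1.39
```

Computing this index per electrode (C3, Cz, C4) and per condition is what allows the left-lateralized C3 effect described above to be isolated.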

  1. Use of Deixis in Donald Trump's Campaign Speech

    OpenAIRE

    Hanim, Saidatul

    2017-01-01

    The aims of this study are (1) to find out the types of deixis in Donald Trump's campaign speech, (2) to find out the reasons for the use of the dominant type of deixis in Donald Trump's campaign speech and (3) to find out whether or not the deixis is used appropriately in Donald Trump's campaign speech. This research is conducted by using qualitative content analysis. The data of the study are the utterances from the script of Donald Trump's campaign speech. The data are analyzed by using Levinson ...

  2. Individual differences in speech imitation/pronunciation aptitude in late bilinguals: functional neuro-imaging and brain morphology

    Directory of Open Access Journals (Sweden)

    Susanne Maria Reiterer

    2011-10-01

    Full Text Available An unanswered question in adult language learning or late bi- and multilingualism is why individuals show marked differences in their ability to imitate foreign accents. While recent research acknowledges that more adults than previously assumed can still acquire a native foreign accent, very little is known about the neuro-cognitive correlates of this special ability. We investigated 140 German-speaking individuals displaying varying degrees of mimicking capacity, based on natural language text, sentence and word imitations either in their second language English or in Hindi and Tamil, languages they had never been exposed to. The large subject pool was extensively controlled for previous language experience prior to magnetic resonance imaging (MRI). The late-onset (around 10 years) bilinguals showed significant individual differences as to how they employed their left-hemisphere speech areas: higher hemodynamic activation in a distinct fronto-parietal network accompanied low ability, while high ability paralleled enhanced gray matter volume in these areas concomitant with decreased hemodynamic responses. Finally and unexpectedly, males were found to be more talented foreign speech mimics.

  3. Individual differences in audio-vocal speech imitation aptitude in late bilinguals: functional neuro-imaging and brain morphology.

    Science.gov (United States)

    Reiterer, Susanne Maria; Hu, Xiaochen; Erb, Michael; Rota, Giuseppina; Nardo, Davide; Grodd, Wolfgang; Winkler, Susanne; Ackermann, Hermann

    2011-01-01

    An unanswered question in adult language learning or late bi- and multilingualism is why individuals show marked differences in their ability to imitate foreign accents. While recent research acknowledges that more adults than previously assumed can still acquire a "native" foreign accent, very little is known about the neuro-cognitive correlates of this special ability. We investigated 140 German-speaking individuals displaying varying degrees of "mimicking" capacity, based on natural language text, sentence, and word imitations either in their second language English or in Hindi and Tamil, languages they had never been exposed to. The large subject pool was strictly controlled for previous language experience prior to magnetic resonance imaging. The late-onset (around 10 years) bilinguals showed significant individual differences as to how they employed their left-hemisphere speech areas: higher hemodynamic activation in a distinct fronto-parietal network accompanied low ability, while high ability paralleled enhanced gray matter volume in these areas concomitant with decreased hemodynamic responses. Finally and unexpectedly, males were found to be more talented foreign speech mimics.

  4. Speech and Communication Disorders

    Science.gov (United States)

    ... Speech problems like stuttering Developmental disabilities Learning disorders Autism spectrum disorder Brain injury Stroke Some speech and communication problems may be genetic. Often, no one knows the causes. By first grade, about 5 percent of children ...

  5. Speech disorders - children

    Science.gov (United States)

    ... after age 4 (I want...I want my doll. I...I see you.) Putting in (interjecting) extra ... may outgrow milder forms of speech disorders. Speech therapy may help with more severe symptoms or any ...

  6. Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar – a functional magnetic resonance imaging (fMRI) study

    Science.gov (United States)

    2013-01-01

    Background Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate. Results Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv). Conclusions Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory. PMID:23879896

  7. Functional connectivity in the dorsal stream and between bilateral auditory-related cortical areas differentially contribute to speech decoding depending on spectro-temporal signal integrity and performance.

    Science.gov (United States)

    Elmer, Stefan; Kühnis, Jürg; Rauch, Piyush; Abolfazl Valizadeh, Seyed; Jäncke, Lutz

    2017-11-01

    Speech processing relies on the interdependence between auditory perception, sensorimotor integration, and verbal memory functions. Functional and structural connectivity between bilateral auditory-related cortical areas (ARCAs) facilitates spectro-temporal analyses, whereas the dynamic interplay between ARCAs and Broca's area (i.e., dorsal pathway) contributes to verbal memory functions, articulation, and sound-to-motor mapping. However, it remains unclear whether these two neural circuits are preferentially driven by spectral or temporal acoustic information, and whether their recruitment is predictive of speech perception performance and learning. Therefore, we evaluated EEG-based intracranial (eLORETA) functional connectivity (lagged coherence) in both pathways (i.e., between bilateral ARCAs and in the dorsal stream) while good- (GPs, N = 12) and poor performers (PPs, N = 13) learned to decode natural pseudowords (CLEAN) or comparable items (speech-noise chimeras) manipulated in the envelope (ENV) or in the fine-structure (FS). Learning to decode degraded speech was generally associated with increased functional connectivity in the theta, alpha, and beta frequency range in both circuits. Furthermore, GPs exhibited increased connectivity in the left dorsal stream compared to PPs, but only during the FS condition and in the theta frequency band. These results suggest that both pathways contribute to the decoding of spectro-temporal degraded speech by increasing the communication between brain regions involved in perceptual analyses and verbal memory functions. Otherwise, the left-hemispheric recruitment of the dorsal stream in GPs during the FS condition points to a contribution of this pathway to articulatory-based memory processes that are dependent on the temporal integrity of the speech signal. These results enable to better comprehend the neural circuits underlying word-learning as a function of temporal and spectral signal integrity and performance
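The study above quantifies functional connectivity as lagged coherence (an eLORETA source-space measure that discounts zero-lag, volume-conducted coupling), which cannot be reproduced in a few lines. The general idea of band-limited coherence between two signals can, however, be sketched with ordinary magnitude-squared coherence via `scipy.signal.coherence`; this simpler sensor-level measure, the band definitions, and the function name are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, bands):
    """Mean magnitude-squared coherence of two signals within each named band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=int(2 * fs))
    return {name: cxy[(f >= lo) & (f <= hi)].mean() for name, (lo, hi) in bands.items()}

# Synthetic example: two channels sharing a 6 Hz (theta) drive plus independent noise
# should show higher theta-band coherence than beta-band coherence.
fs = 250.0
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 6 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
out = band_coherence(x, y, fs, bands)
print(out["theta"] > out["beta"])  # shared theta drive → higher theta coherence
```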

  8. Dominant and opponent relations in cortical function: An EEG study of exam performance and stress

    Directory of Open Access Journals (Sweden)

    Lucia P. Pavlova

    2017-12-01

    Full Text Available This paper analyzes the opponent dynamics of human motivational and affective processes, as conceptualized by RS Solomon, from the position of AA Ukhtomsky's neurophysiological principle of the dominant and its applications in the field of human electroencephalographic analysis. As an experimental model, we investigate the dynamics of cortical activity in students taking university final-course oral examinations in naturalistic settings, and show that successful performance in these settings depends on the presence of specific types of cortical activation patterns, involving high indices of left-hemispheric and frontal cortical dominance, whereas the lack thereof predicts poor performance on the task and seems to be associated with difficulties in the executive regulation of cognitive (intellectual) and motivational processes in these highly demanding and stressful conditions. Based on such knowledge, improved educational and therapeutic interventions can be suggested which take into account individual variability in the neurocognitive mechanisms underlying adaptation to motivationally and intellectually challenging, stressful tasks, such as oral university exams. Some implications of this research for opponent-process theory and its closer integration into current neuroscience research on acquired motivations are discussed.

  9. Surgical speech disorders.

    Science.gov (United States)

    Shen, Tianjie; Sie, Kathleen C Y

    2014-11-01

    Most speech disorders of childhood are treated with speech therapy. However, two conditions, ankyloglossia and velopharyngeal dysfunction, may be amenable to surgical intervention. It is important for surgeons to work with experienced speech language pathologists to diagnose the speech disorder. Children with articulation disorders related to ankyloglossia may benefit from frenuloplasty. Children with velopharyngeal dysfunction should have standardized clinical evaluation and instrumental assessment of velopharyngeal function. Surgeons should develop a treatment protocol to optimize speech outcomes while minimizing morbidity. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Machine Translation from Speech

    Science.gov (United States)

    Schwartz, Richard; Olive, Joseph; McCary, John; Christianson, Caitlin

    This chapter describes approaches for translation from speech. Translation from speech presents two new issues. First, of course, we must recognize the speech in the source language. Although speech recognition has improved considerably over the last three decades, it is still far from being a solved problem. In the best of conditions, when the speech is of high quality and carefully enunciated, on common topics (such as speech read by a trained news broadcaster), the word error rate is typically on the order of 5%. Humans can typically transcribe speech like this with less than 1% disagreement between annotators, so even this best number is still far worse than human performance. However, the task gets much harder when anything changes from this ideal condition. Some of the conditions that cause a higher error rate are: if the topic is somewhat unusual, if the speakers are not reading so that their speech is more spontaneous, if the speakers have an accent or are speaking a dialect, or if there is any acoustic degradation, such as noise or reverberation. In these cases, the word error rate can increase significantly to 20%, 30%, or higher. Accordingly, most of this chapter discusses techniques for improving speech recognition accuracy, while one section discusses techniques for integrating speech recognition with translation.
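The word error rates quoted above are conventionally computed as the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis transcript into the reference, divided by the reference length. A minimal dynamic-programming sketch (the function name is illustrative, not taken from any particular toolkit):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference word count,
    via standard Levenshtein edit distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why spontaneous, accented, or noisy speech can push recognizers well past the 20-30% figures mentioned.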

  11. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  12. Managing the reaction effects of speech disorders on speech ...

    African Journals Online (AJOL)

    ... persons having speech disorders. Speech disorders must be treated so that speech defectives will be helped out of their speech problems and be prevented from becoming obsessed by frustrations resulting from their speech disorders. African Journal of Cross-Cultural Psychology and Sport Facilitation Vol. 6 2004: 91-95 ...

  13. Isolate domination in graphs

    Directory of Open Access Journals (Sweden)

    I. Sahul Hamid

    2016-07-01

    Full Text Available A set D of vertices of a graph G is called a dominating set of G if every vertex in V(G)−D is adjacent to a vertex in D. A dominating set S such that the subgraph 〈S〉 induced by S has at least one isolated vertex is called an isolate dominating set. An isolate dominating set none of whose proper subsets is an isolate dominating set is a minimal isolate dominating set. The minimum and maximum cardinalities of a minimal isolate dominating set are called the isolate domination number γ0 and the upper isolate domination number Γ0, respectively. In this paper we initiate a study of these parameters.
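The definitions above can be made concrete with a small brute-force checker; this sketch (the names are mine, and exhaustive search is practical only for tiny graphs) verifies the dominating and isolate-dominating properties and computes γ0 by trying sets in increasing size.

```python
from itertools import combinations

def is_dominating(adj, D):
    """Every vertex outside D has a neighbour in D (adj: dict vertex -> set of neighbours)."""
    return all(adj[v] & D for v in adj if v not in D)

def is_isolate_dominating(adj, D):
    """Dominating set whose induced subgraph <D> contains at least one isolated vertex."""
    return is_dominating(adj, D) and any(not (adj[v] & D) for v in D)

def isolate_domination_number(adj):
    """gamma_0(G): minimum cardinality of an isolate dominating set, by brute force."""
    for k in range(1, len(adj) + 1):
        for D in combinations(adj, k):
            if is_isolate_dominating(adj, set(D)):
                return k

# Path P4 on vertices 0-1-2-3: {0, 2} dominates and both its vertices are
# isolated in the induced subgraph, so gamma_0(P4) = 2.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(isolate_domination_number(P4))  # → 2
```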

  14. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  15. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Language therapy has shifted from a medical focus to a preventive focus. However, difficulties are evident in carrying out this latter task, because more space is devoted to the correction of language disorders. Because speech disorders are the most frequently occurring dysfunction, the preventive work undertaken to avoid their appearance acquires special importance. Speech education from an early age makes it easier to prevent the appearance of speech disorders in children. The objective of the present work is to offer different activities for the prevention of speech disorders.

  16. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  17. Topics on domination

    CERN Document Server

    Hedetniemi, ST

    1991-01-01

    The contributions in this volume are divided into three sections: theoretical, new models and algorithmic. The first section focuses on properties of the standard domination number γ(G), the second section is concerned with new variations on the domination theme, and the third is primarily concerned with finding classes of graphs for which the domination number (and several other domination-related parameters) can be computed in polynomial time.

  18. Total well dominated trees

    DEFF Research Database (Denmark)

    Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.

    cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....

  19. Dominating Sets and Domination Polynomials of Paths

    Directory of Open Access Journals (Sweden)

    Saeid Alikhani

    2009-01-01

    Full Text Available Let G=(V,E) be a simple graph. A set S⊆V is a dominating set of G if every vertex in V\S is adjacent to at least one vertex in S. Let 𝒫ni be the family of all dominating sets of a path Pn with cardinality i, and let d(Pn,i)=|𝒫ni|. In this paper, we construct 𝒫ni and obtain a recursive formula for d(Pn,i). Using this recursive formula, we consider the polynomial D(Pn,x)=∑i=⌈n/3⌉n d(Pn,i)xi, which we call the domination polynomial of paths, and obtain some properties of this polynomial.
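The paper derives the coefficients d(Pn,i) by a recursive construction; as an independent illustration (this brute-force sketch is mine, not the paper's recursion), the same coefficients can be obtained for small n by directly enumerating the dominating sets of the path. Note how the smallest index with a nonzero coefficient is ⌈n/3⌉, matching the lower limit of the sum in the abstract.

```python
from itertools import combinations

def domination_polynomial_path(n):
    """Coefficients {i: d(P_n, i)} of the domination polynomial of the path P_n,
    by brute-force enumeration of all dominating sets (feasible for small n)."""
    # Path on vertices 1..n with edges between consecutive integers.
    adj = {v: {u for u in (v - 1, v + 1) if 1 <= u <= n} for v in range(1, n + 1)}
    coeffs = {}
    for i in range(1, n + 1):
        count = sum(
            1
            for S in combinations(range(1, n + 1), i)
            if all(v in S or adj[v] & set(S) for v in adj)
        )
        if count:
            coeffs[i] = count
    return coeffs

print(domination_polynomial_path(3))  # {1: 1, 2: 3, 3: 1} → D(P3, x) = x^3 + 3x^2 + x
```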

  20. Advertising and Free Speech.

    Science.gov (United States)

    Hyman, Allen, Ed.; Johnson, M. Bruce, Ed.

    The articles collected in this book originated at a conference at which legal and economic scholars discussed the issue of First Amendment protection for commercial speech. The first article, in arguing for freedom for commercial speech, finds inconsistent and untenable the arguments of those who advocate freedom from regulation for political…

  1. Physics and Speech Therapy.

    Science.gov (United States)

    Duckworth, M.; Lowe, T. L.

    1986-01-01

    Describes development and content of a speech science course taught to speech therapists for two years, modified by feedback from those two classes. Presents basic topics and concepts covered. Evaluates a team teaching approach as well as the efficacy of teaching physics relevant to vocational interests. (JM)

  2. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  3. Speech Quality Measurement

    Science.gov (United States)

    1978-05-01

    [2.27] The Sound Pattern of English, N. Chomsky and M. Halle, Harper & Row, New York, 1968. [2.28] "Speech Synthesis by Rule," J. N. Holmes, I. G...L. H. Nakatani, B. J. McDermott, "Effect of Pitch and Formant Manipulations on Speech Quality," Bell Telephone Laboratories, Technical Memorandum, 72

  4. Speech and Language Impairments

    Science.gov (United States)

    ... grade and has recently been diagnosed with childhood apraxia of speech—or CAS. CAS is a speech disorder marked ... 800.242.5338 | http://www.cleftline.org Childhood Apraxia of Speech Association of North America | CASANA http://www.apraxia- ...

  5. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schoor Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tome Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds…

  6. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  7. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001)...

  8. A case of crossed aphasia with apraxia of speech

    Directory of Open Access Journals (Sweden)

    Yogesh Patidar

    2013-01-01

    Full Text Available Apraxia of speech (AOS is a rare, but well-defined motor speech disorder. It is characterized by irregular articulatory errors, attempts of self-correction and persistent prosodic abnormalities. Similar to aphasia, AOS is also localized to the dominant cerebral hemisphere. We report a case of Crossed Aphasia with AOS in a 48-year-old right-handed man due to an ischemic infarct in right cerebral hemisphere.

  9. On the Relationship between Right- brain and Left- brain Dominance and Reading Comprehension Test Performance of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Hassan Soleimani

    2012-05-01

    Full Text Available A tremendous amount of work has been conducted by psycholinguists to identify hemisphere processing during second/foreign language learning, or in other words to investigate the role of brain hemisphere dominance in the language performance of learners. Most of these studies have focused on single words and word pairs (e.g., Anaki et al., 1998; Arzouan et al., 2007; Faust & Mahal, 2007) or simple sentences (Rapp et al., 2007; Kacinik & Chiarello, 2007), and it has been discovered that there is a right-hemisphere advantage for metaphors and a left-hemisphere advantage for literal text. The present research was designed to study Iranian EFL learners' performance on different reading tasks, so there could be differences between the findings of the earlier research and the results of the present study due to the context. Here left-brain and right-brain dominance was investigated in 60 individuals (20 right-handed and 10 left-handed males, 20 right-handed and 10 left-handed females) via the Edinburgh Handedness Questionnaire (EHQ). The results suggested that the right-handed learners, who are presumed to be left-brain dominant, outperformed the left-handed ones; regarding participants' gender, male learners outperformed female learners on the reading comprehension test tasks.

  10. An optimal speech processor for efficient human speech ...

    Indian Academy of Sciences (India)

    Our experimental findings suggest that the auditory filterbank in human ear functions as a near-optimal speech processor for achieving efficient speech communication between humans. Keywords. Human speech communication; articulatory gestures; auditory filterbank; mutual information. 1. Introduction. Speech is one of ...

  11. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  12. Cholinergic Potentiation and Audiovisual Repetition-Imitation Therapy Improve Speech Production and Communication Deficits in a Person with Crossed Aphasia by Inducing Structural Plasticity in White Matter Tracts.

    Science.gov (United States)

    Berthier, Marcelo L; De-Torres, Irene; Paredes-Pacheco, José; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María J; Alfaro, Francisco; Moreno-Torres, Ignacio; López-Barroso, Diana; Dávila, Guadalupe

    2017-01-01

    Donepezil (DP), a cognitive-enhancing drug targeting the cholinergic system, combined with massed sentence repetition training augmented and speeded up recovery of speech production deficits in patients with chronic conduction aphasia and extensive left hemisphere infarctions (Berthier et al., 2014). Nevertheless, a still unsettled question is whether such improvements correlate with restorative structural changes in gray matter and white matter pathways mediating speech production. In the present study, we used pharmacological magnetic resonance imaging to study treatment-induced brain changes in gray matter and white matter tracts in a right-handed male with chronic conduction aphasia and a right subcortical lesion (crossed aphasia). A single-patient, open-label multiple-baseline design incorporating two different treatments and two post-treatment evaluations was used. The patient received an initial dose of DP (5 mg/day) which was maintained during 4 weeks and then titrated up to 10 mg/day and administered alone (without aphasia therapy) during 8 weeks (Endpoint 1). Thereafter, the drug was combined with an audiovisual repetition-imitation therapy (Look-Listen-Repeat, LLR) during 3 months (Endpoint 2). Language evaluations, diffusion weighted imaging (DWI), and voxel-based morphometry (VBM) were performed at baseline and at both endpoints in JAM and once in 21 healthy control males. Treatment with DP alone and combined with LLR therapy induced marked improvement in aphasia and communication deficits as well as in selected measures of connected speech production, and phrase repetition. The obtained gains in speech production remained well-above baseline scores even 4 months after ending combined therapy. Longitudinal DWI showed structural plasticity in the right frontal aslant tract and direct segment of the arcuate fasciculus with both interventions. VBM revealed no structural changes in other white matter tracts nor in cortical areas linked by these tracts. 

  13. Speech Communication and Signal Processing

    Indian Academy of Sciences (India)

    on 'Auditory-like filter bank: An optimal speech processor for efficient human speech communication', Ghosh et al. argue that the auditory filter bank in the human ear is a near-optimal speech processor for efficient speech communication between human beings. They use a mutual information criterion to design the optimal filter ...

  14. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  15. Speech processing in mobile environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book focuses on speech processing in the presence of low-bit rate coding and varying background environments. The methods presented in the book exploit the speech events which are robust in noisy environments. Accurate estimation of these crucial events will be useful for carrying out various speech tasks such as speech recognition, speaker recognition and speech rate modification in mobile environments. The authors provide insights into designing and developing robust methods to process the speech in mobile environments. Covering temporal and spectral enhancement methods to minimize the effect of noise and examining methods and models on speech and speaker recognition applications in mobile environments.

  16. A New Fuzzy Cognitive Map Learning Algorithm for Speech Emotion Recognition

    OpenAIRE

    Zhang, Wei; Zhang, Xueying; Sun, Ying

    2017-01-01

    Selecting an appropriate recognition method is crucial in speech emotion recognition applications. However, the current methods do not consider the relationship between emotions. Thus, in this study, a speech emotion recognition system based on the fuzzy cognitive map (FCM) approach is constructed. Moreover, a new FCM learning algorithm for speech emotion recognition is proposed. This algorithm includes the use of the pleasure-arousal-dominance emotion scale to calculate the weights between e...

  17. Atypical cerebral language dominance in a right-handed patient: An anatomoclinical study.

    Science.gov (United States)

    De Witte, Elke; Van Hecke, Wim; Dua, Guido; De Surgeloose, Didier; Moens, Maarten; Mariën, Peter

    2014-02-01

    Approximately 97% of right-handers have left-hemisphere language dominance. Within the language-dominant hemisphere, Broca's area is of crucial importance for a variety of linguistic functions. As a result, tumour resection in and around Broca's area is controversial. However, studies showed that by means of Direct Electrical Stimulation (DES) tumour resection in this region can be safely performed. We report unexpected anatomoclinical findings in a right-handed patient who underwent tumour resection in the left prefrontal lobe. Language functions in this right-handed patient were extensively examined in the pre-, intra-, and postoperative phase by means of a standardised battery of neurolinguistic and neurocognitive tests. Results obtained in the pre- and postoperative phase are compared. In addition, intraoperative DES findings and postoperative functional Magnetic Resonance Imaging (fMRI) and Diffusion Tensor Imaging (DTI) results are reported. Tumour resection near Broca's area was safely performed since no positive language sites were found during intraoperative DES. Since no linguistic deficits occurred in the pre-, intra-, or postoperative phase, atypical language dominance was suspected. Neuropsychological investigations, however, disclosed permanent executive dysfunction. Postoperative fMRI and DTI confirmed right cerebral language dominance as well as a crossed cerebro-cerebellar functional link with the left cerebellar hemisphere. Atypical right-hemisphere language dominance in this right-handed patient is reflected by: (1) the total absence of language problems in the pre-, intra- and postoperative phase, (2) absence of positive stimulation sites during DES, (3) a clearly more pronounced arcuate fasciculus in the right cerebral hemisphere (DTI), (4) a crossed functional connection between the right cerebrum and the left cerebellum (fMRI). Two hypothetical explanations for the pattern of crossed cerebral language dominance are put forward: (1

  18. Speech and Swallowing

    Science.gov (United States)


  19. Anxiety and ritualized speech

    Science.gov (United States)

    Lalljee, Mansur; Cook, Mark

    1975-01-01

    The experiment examines the effects of anxiety on the use of a number of words that seem irrelevant to semantic communication. The Units of Ritualized Speech (URSs) considered are: 'I mean', 'in fact', 'really', 'sort of', 'well' and 'you know'. (Editor)

  20. Speech impairment (adult)

    Science.gov (United States)

    ... Elsevier; 2016:chap 13. Kirshner HS. Dysarthria and apraxia of speech. In: Daroff RB, Jankovic J, Mazziotta JC, Pomeroy SL, eds. Bradley's Neurology in Clinical Practice . 7th ed. Philadelphia, PA: Elsevier; 2016: ...

  1. Trainable Videorealistic Speech Animation

    National Research Council Canada - National Science Library

    Ezzat, Tony F

    2002-01-01

    .... After processing the corpus automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not...

  2. Speech perception as categorization.

    Science.gov (United States)

    Holt, Lori L; Lotto, Andrew J

    2010-07-01

    Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words. This is an example of categorization, in that potentially discriminable speech sounds are assigned to functionally equivalent classes. In this tutorial, we present some of the main challenges to our understanding of the categorization of speech sounds and the conceptualization of SP that has resulted from these challenges. We focus here on issues and experiments that define open research questions relevant to phoneme categorization, arguing that SP is best understood as perceptual categorization, a position that places SP in direct contact with research from other areas of perception and cognition.

  3. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    to business speeches. Consistent with the public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences...... between Steve Jobs and Mark Zuckerberg and the investor- and customer-related sections of their speeches support the modern understanding of charisma as a gradual, multiparametric, and context-sensitive concept....

  4. Speech spectrum envelope modeling

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Vondra, Martin

    Vol. 4775, - (2007), s. 129-137 ISSN 0302-9743. [COST Action 2102 International Workshop. Vietri sul Mare, 29.03.2007-31.03.2007] R&D Projects: GA AV ČR(CZ) 1ET301710509 Institutional research plan: CEZ:AV0Z20670512 Keywords : speech * speech processing * cepstral analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.302, year: 2005

  5. The Disfluent Speech of Bilingual Spanish-English Children: Considerations for Differential Diagnosis of Stuttering

    Science.gov (United States)

    Byrd, Courtney T.; Bedore, Lisa M.; Ramos, Daniel

    2015-01-01

    Purpose: The primary purpose of this study was to describe the frequency and types of speech disfluencies that are produced by bilingual Spanish-English (SE) speaking children who do not stutter. The secondary purpose was to determine whether their disfluent speech is mediated by language dominance and/or language produced. Method: Spanish and…

  6. Memory for speech and speech for memory.

    Science.gov (United States)

    Locke, J L; Kutz, K J

    1975-03-01

    Thirty kindergarteners, 15 who substituted /w/ for /r/ and 15 with correct articulation, received two perception tests and a memory test that included /w/ and /r/ in minimally contrastive syllables. Although both groups had nearly perfect perception of the experimenter's productions of /w/ and /r/, misarticulating subjects perceived their own tape-recorded w/r productions as /w/. In the memory task these same misarticulating subjects committed significantly more /w/-/r/ confusions in unspoken recall. The discussion considers why people subvocally rehearse; a developmental period in which children do not rehearse; ways subvocalization may aid recall, including motor and acoustic encoding; an echoic store that provides additional recall support if subjects rehearse vocally; and perception of self- and other-produced phonemes by misarticulating children, including its relevance to a motor theory of perception. Evidence is presented that speech for memory can be sufficiently impaired to cause memory disorder. Conceptions that restrict speech disorder to an impairment of communication are challenged.

  7. Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning).

    Science.gov (United States)

    Campbell, R; MacSweeney, M; Surguladze, S; Calvert, G; McGuire, P; Suckling, J; Brammer, M J; David, A S

    2001-10-01

    Can the cortical substrates for the perception of face actions be distinguished when the superficial visual qualities of these actions are very similar? Two fMRI experiments are reported. Compared with watching the face at rest, observing silent speech was associated with bilateral activation in a number of temporal cortical regions, including the superior temporal sulcus (STS). Watching face movements of similar extent and duration, but which could not be construed as speech (gurning; Experiment 1b) was not associated with activation of superior temporal cortex to the same extent, especially in the left hemisphere. Instead, the peak focus of the largest cluster of activation was in the posterior part of the inferior temporal gyrus (right, BA 37). Observing silent speech, but not gurning faces, was also associated with bilateral activation of inferior frontal cortex (BA 44 and 45). In a second study, speechreading and observing gurning faces were compared within a single experiment, using stimuli which comprised the speaker's face and torso (and hence a much smaller image of the speaker's face and facial actions). There was again differential engagement of superior temporal cortex which followed the pattern of Experiment 1. These findings suggest that superior temporal gyrus and neighbouring regions are activated bilaterally when subjects view face actions--at different scales--that can be interpreted as speech. This circuitry is not accessed to the same extent by visually similar, but linguistically meaningless actions. However, some temporal regions, such as the posterior part of the right superior temporal sulcus, appear to be common processing sites for processing both seen speech and gurns.

  8. Resourcing speech-language pathologists to work with multilingual children.

    Science.gov (United States)

    McLeod, Sharynne

    2014-06-01

    Speech-language pathologists play important roles in supporting people to be competent communicators in the languages of their communities. However, with over 7000 languages spoken throughout the world and the majority of the global population being multilingual, there is often a mismatch between the languages spoken by children and families and their speech-language pathologists. This paper provides insights into service provision for multilingual children within an English-dominant country by viewing Australia's multilingual population as a microcosm of ethnolinguistic minorities. Recent population studies of Australian pre-school children show that their most common languages other than English are: Arabic, Cantonese, Vietnamese, Italian, Mandarin, Spanish, and Greek. Although 20.2% of services by Speech Pathology Australia members are offered in languages other than English, there is a mismatch between the language of the services and the languages of children within similar geographical communities. Australian speech-language pathologists typically use informal or English-based assessments and intervention tools with multilingual children. Thus, there is a need for accessible culturally and linguistically appropriate resources for working with multilingual children. Recent international collaborations have resulted in practical strategies to support speech-language pathologists during assessment, intervention, and collaboration with families, communities, and other professionals. The International Expert Panel on Multilingual Children's Speech was assembled to prepare a position paper to address issues faced by speech-language pathologists when working with multilingual populations. The Multilingual Children's Speech website ( http://www.csu.edu.au/research/multilingual-speech ) addresses one of the aims of the position paper by providing free resources and information for speech-language pathologists about more than 45 languages. These international

  9. Computer-based speech therapy for childhood speech sound disorders.

    Science.gov (United States)

    Furlong, Lisa; Erickson, Shane; Morris, Meg E

    2017-07-01

    With the current worldwide workforce shortage of Speech-Language Pathologists, new and innovative ways of delivering therapy to children with speech sound disorders are needed. Computer-based speech therapy may be an effective and viable means of addressing service access issues for children with speech sound disorders. To evaluate the efficacy of computer-based speech therapy programs for children with speech sound disorders. Studies reporting the efficacy of computer-based speech therapy programs were identified via a systematic, computerised database search. Key study characteristics, results, main findings and details of computer-based speech therapy programs were extracted. The methodological quality was evaluated using a structured critical appraisal tool. 14 studies were identified and a total of 11 computer-based speech therapy programs were evaluated. The results showed that computer-based speech therapy is associated with positive clinical changes for some children with speech sound disorders. There is a need for collaborative research between computer engineers and clinicians, particularly during the design and development of computer-based speech therapy programs. Evaluation using rigorous experimental designs is required to understand the benefits of computer-based speech therapy. The reader will be able to 1) discuss how computer-based speech therapy has the potential to improve service access for children with speech sound disorders, 2) explain the ways in which computer-based speech therapy programs may enhance traditional tabletop therapy, and 3) compare the features of computer-based speech therapy programs designed for different client populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Speech Act Classification of German Advertising Texts

    Directory of Open Access Journals (Sweden)

    Артур Нарманович Мамедов

    2015-12-01

    Full Text Available This paper uses the theory of speech acts and the underlying concept of pragmalinguistics to determine the types of speech acts and their classification in German printed advertising texts. We ascertain that the advertising of cars and accessories, household appliances and computer equipment, watches, fancy goods, food, pharmaceuticals, and financial, insurance, and legal services, and also airline advertising, is dominated by a pragmatic principle based on demonstrating information about the benefits of a product/service. This influences the frequent usage of certain speech acts. The dominant form of exposure is to inform the recipient-user about the characteristics of the advertised product. This information is foregrounded by means of stylistic and syntactic constructions specific to advertising (participial constructions, appositional constructions), which help to emphasize certain notional components within the framework of the advertising text. Stylistic and syntactic devices of reduction (parceling constructions) convey the author's idea. Other means, like repetitions and enumerations, are used by the advertiser to strengthen his selling power. The advertiser focuses the attention of the consumer on the characteristics of the product, seeking to convince him of its utility and to influence his/her buying behavior.

  11. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  12. Allograph errors and impaired access to graphic motor codes in a case of unilateral agraphia of the dominant left hand.

    Science.gov (United States)

    Hanley, J R; Peters, S

    2001-06-01

    This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.

  13. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently, on the International Space Station (ISS) and other space vehicles, Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high-stress, high-workload environment when responding to the alert. Furthermore, crew receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth, where ground operators can assist as needed. On long-duration missions, however, crews will need to work off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09), funded by the Human Research Program, included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts with and without a speech component - either at the beginning or at the end of the tone - can be identified. Even though crew were familiar with the tone alert from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA). 
In the laboratory study, the alerts were presented by software and participants were

  14. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  15. Automated speech understanding: the next generation

    Science.gov (United States)

    Picone, J.; Ebel, W. J.; Deshmukh, N.

    1995-04-01

    Modern speech understanding systems merge interdisciplinary technologies from Signal Processing, Pattern Recognition, Natural Language, and Linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear algebra-based mathematics, DSP now relies on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet, and providing completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.

  16. Why Go to Speech Therapy?

    Science.gov (United States)


  17. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  18. Maria Montessori on Speech Education

    Science.gov (United States)

    Stern, David A.

    1973-01-01

    Montessori's theory of education, as related to speech communication skills learning, is explored for insights into speech and language acquisition, pedagogical procedure for teaching spoken vocabulary, and the educational environment which encourages children's free interaction and confidence in communication. (CH)

  19. Speech spectrogram expert

    Energy Technology Data Exchange (ETDEWEB)

    Johannsen, J.; Macallister, J.; Michalek, T.; Ross, S.

    1983-01-01

    Various authors have pointed out that humans can become quite adept at deriving phonetic transcriptions from speech spectrograms (as good as 90 percent accuracy at the phoneme level). The authors describe an expert system which attempts to simulate this performance. The speech spectrogram expert (spex) is actually a society made up of three experts: a 2-dimensional vision expert, an acoustic-phonetic expert, and a phonetics expert. The visual reasoning expert finds important visual features of the spectrogram. The acoustic-phonetic expert reasons about how visual features relate to phonemes, and about how phonemes change visually in different contexts. The phonetics expert reasons about allowable phoneme sequences and transformations, and deduces an English spelling for phoneme strings. The speech spectrogram expert is highly interactive, allowing users to investigate hypotheses and edit rules. 10 references.
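    The "society of experts" architecture described above is essentially a staged pipeline: visual features feed the acoustic-phonetic stage, whose phoneme hypotheses feed the phonetics stage. A minimal sketch of that control flow, with all names, features, and mappings invented for illustration (none are from the 1983 system):

```python
# Illustrative pipeline mirroring spex's three cooperating experts.
# Every feature, phoneme label, and mapping below is a made-up stand-in.

def vision_expert(spectrogram):
    """Find coarse visual features (e.g., formant bands, bursts)."""
    return [{"kind": "formant", "hz": 700}, {"kind": "burst", "ms": 120}]

def acoustic_phonetic_expert(features):
    """Map visual features to candidate phonemes, context-sensitively."""
    has_burst = any(f["kind"] == "burst" for f in features)
    return ["ae", "t"] if has_burst else ["ae"]

def phonetics_expert(phonemes):
    """Check the phoneme sequence and deduce an English spelling."""
    spelling = {"ae": "a", "t": "t"}
    return "".join(spelling.get(p, "?") for p in phonemes)

def spex(spectrogram):
    """Chain the three experts, as in the society-of-experts design."""
    return phonetics_expert(acoustic_phonetic_expert(vision_expert(spectrogram)))

print(spex(object()))  # "at"
```

    The point of the sketch is the staged hand-off between experts, not the (trivial) feature extraction; the real system reasoned interactively and allowed users to edit rules at each stage.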

  20. RECOGNISING SPEECH ACTS

    Directory of Open Access Journals (Sweden)

    Phyllis Kaburise

    2012-09-01

    Full Text Available Speech Act Theory (SAT, a theory in pragmatics, is an attempt to describe what happens during linguistic interactions. Inherent within SAT is the idea that language forms and intentions are relatively formulaic and that there is a direct correspondence between sentence forms (for example, in terms of structure and lexicon and the function or meaning of an utterance. The contention offered in this paper is that when such a correspondence does not exist, as in indirect speech utterances, this creates challenges for English second language speakers and may result in miscommunication. This arises because indirect speech acts allow speakers to employ various pragmatic devices such as inference, implicature, presuppositions and context clues to transmit their messages. Such devices, operating within the non-literal level of language competence, may pose challenges for ESL learners.

  1. EVOLUTION OF SPEECH: A NEW HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    Shishir

    2016-03-01

    Full Text Available BACKGROUND The first and foremost characteristic of speech is that it is human. Speech is one characteristic feature that has evolved in humans and is by far the most powerful form of communication in the Kingdom Animalia. Today, humans have established themselves as an alpha species, and the evolution of speech and language has made this possible. But how is speech possible? What anatomical changes have made it possible for us to speak? A sincere effort has been put into this paper to establish a possible anatomical answer to the riddle. METHODS The prototypes of the cranial skeletons of all the major classes of phylum Vertebrata were studied. The materials were studied in museums in Wayanad and Karwar and at the Museum of Natural History, Imphal. The skeleton of a mammal was studied in the Department of Anatomy, K. S. Hegde Medical Academy, Mangalore. RESULTS The curve formed in the base of the skull due to flexion of the splanchnocranium with the neurocranium holds the key to the answer of how humans were able to speak. CONCLUSION Of course, this may not be the only factor that participated in the evolution of speech; the brain also had to evolve, and as a matter of fact the occipital lobes are more prominent in humans than in lower mammals. Although not the only criterion, it is one of the most important things that happened in the course of evolution and enabled us to speak. This small space at the base of the skull is the difference that made us the dominant alpha species.

  2. Tool-Use and the Left Hemisphere: What Is Lost in Ideomotor Apraxia?

    Science.gov (United States)

    Sunderland, Alan; Wilkins, Leigh; Dineen, Rob; Dawson, Sophie E.

    2013-01-01

    Impaired tool related action in ideomotor apraxia is normally ascribed to loss of sensorimotor memories for habitual actions (engrams), but this account has not been tested against a hypothesis of a general deficit in representation of hand-object spatial relationships. Rapid reaching for familiar tools was compared with reaching for abstract…

  3. Psychological Correlates of Handedness and Corpus Callosum Asymmetry in Autism: The Left Hemisphere Dysfunction Theory Revisited

    Science.gov (United States)

    Floris, Dorothea L.; Chura, Lindsay R.; Holt, Rosemary J.; Suckling, John; Bullmore, Edward T.; Baron-Cohen, Simon; Spencer, Michael D.

    2013-01-01

    Rightward cerebral lateralization has been suggested to be involved in the neuropathology of autism spectrum conditions. We investigated functional and neuroanatomical asymmetry, in terms of handedness and corpus callosum measurements in male adolescents with autism, their unaffected siblings and controls, and their associations with executive…

  4. Neuroplasticity of language in left-hemisphere stroke: Evidence linking subsecond electrophysiology and structural connections

    NARCIS (Netherlands)

    Piai, V.; Meyer, L.; Dronkers, N.F.; Knight, R.T.

    2017-01-01

    The understanding of neuroplasticity following stroke is predominantly based on neuroimaging measures that cannot address the subsecond neurodynamics of impaired language processing. We combined behavioral and electrophysiological measures and structural-connectivity estimates to characterize

  5. Selective attention to phonology dynamically modulates initial encoding of auditory words within the left hemisphere.

    Science.gov (United States)

    Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D

    2014-08-15

    Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. 
Furthermore, these findings motivate future research on the role that neural mechanisms of attention may play in phonological awareness impairments thought to underlie developmental reading disabilities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?

    Science.gov (United States)

    McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc

    2010-01-01

    To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…

  7. A supervised framework for lesion segmentation and automated VLSM analyses in left hemispheric stroke

    Directory of Open Access Journals (Sweden)

    Dorian Pustina

    2015-05-01

    Full Text Available INTRODUCTION: Voxel-based lesion-symptom mapping (VLSM) is conventionally performed using the skill and knowledge of experts to manually delineate brain lesions. This process requires time, and is likely to have substantial inter-rater variability. Here, we propose a supervised machine learning framework for lesion segmentation capable of learning from a single modality and existing manual segmentations in order to delineate lesions in new patients. METHODS: Data from 60 patients with chronic stroke aphasia were utilized in the study (age: 59.7±11.5 yrs, post-stroke interval: 5±2.9 yrs, male/female ratio: 34/26). Using a single T1 image of each subject, additional features were created that provided complementary information, such as difference from template, tissue segmentation, brain asymmetries, gradient magnitude, and deviances of these images from 80 age- and gender-matched controls. These features were fed into the MRV-NRF (multi-resolution voxel-wise neighborhood random forest; Tustison et al., 2014) prediction algorithm implemented in ANTsR (Avants, 2015). The algorithm incorporates information from each voxel and its surrounding neighbors from all of the above features, in a hierarchy of random forest predictions from low to high resolution. The validity of the framework was tested with a 6-fold cross validation (i.e., train on 50 subjects, predict 10). The process was repeated ten times, producing ten segmentations for each subject, from which the average solution was binarized. Predicted lesions were compared to manually defined lesions, and VLSM models were built on 4 language measures: repetition and comprehension subscores from the WAB (Kertesz, 1982), WAB-AQ, and PNT naming accuracy (Roach, Schwartz, Martin, Grewal, & Brecher, 1996). RESULTS: Manual and predicted lesion size showed high correlation (r=0.96). 
Compared to manual lesions, the predicted lesions had a dice overlap of 0.72 (±0.14 SD), a case-wise maximum distance (Hausdorff) of 21 mm (±16.4), and an area under the ROC curve of 0.86 (±0.09). Lesion size correlated with overlap (r=0.5, p<0.001), but not with maximum displacement (r=-0.15, p=0.27). VLSM thresholded t-maps (p<0.05, FDR corrected) showed a continuous dice overlap of 0.75 for AQ, 0.81 for repetition, 0.57 for comprehension, and 0.58 for naming (Figure 1). To investigate whether the mismatch between manual VLSM and automated VLSM involved critical areas related to cognitive performance, we created behavioral predictions from the VLSM models. Briefly, a prediction value was obtained from each voxel and the weighted average of all voxels was computed (i.e., voxels with a high t-value contributed more to the prediction than voxels with a low t-value). Manual VLSM showed slightly higher correlation of predicted performance with actual performance compared to automated VLSM (respectively, AQ: 0.65 and 0.60, repetition: 0.62 and 0.57, comprehension: 0.53 and 0.48, naming: 0.46 and 0.41). The difference between the two, however, was not significant (lowest p=0.07). CONCLUSIONS: These findings show that automated lesion segmentation is a viable alternative to manual delineation, producing similar lesion-symptom maps and similar predictions compared with standard manual segmentations. Given the ability to learn from existing manual delineations, the tool can be implemented in ongoing projects either to fully automatize lesion segmentation, or to provide a preliminary delineation to be rectified by the expert.
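    Two of the quantities reported in this record are simple to state concretely: the Dice overlap between predicted and manual lesion masks, and the t-value-weighted average used to turn a VLSM map into a behavioral prediction. A minimal sketch with NumPy, using toy 1-D arrays in place of 3-D lesion volumes; the function names are ours, not from the paper or ANTsR:

```python
import numpy as np

def dice_overlap(pred, manual):
    """Dice coefficient between two binary lesion masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    manual = manual.astype(bool)
    denom = pred.sum() + manual.sum()
    if denom == 0:
        return 1.0  # two empty masks agree trivially
    return 2.0 * np.logical_and(pred, manual).sum() / denom

def weighted_behavioral_prediction(tmap, voxel_predictions):
    """Average per-voxel predictions, weighting each voxel by its VLSM t-value
    so that high-t voxels contribute more (negative t-values are ignored)."""
    w = np.clip(tmap, 0.0, None)
    return float((w * voxel_predictions).sum() / w.sum())

# Toy 1-D "masks" standing in for 3-D lesion volumes
manual = np.array([0, 1, 1, 1, 0])
predicted = np.array([0, 1, 1, 0, 0])
print(dice_overlap(predicted, manual))  # 0.8

# Two voxels with t-values 1 and 3: the second dominates the weighted average
print(weighted_behavioral_prediction(np.array([1.0, 3.0]),
                                     np.array([2.0, 4.0])))  # 3.5
```

    With this definition, the reported 0.72 mean Dice corresponds to roughly three-quarters of predicted lesion volume coinciding with the manual tracing, averaged over subjects.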

  8. Improved Spatial Ability Correlated with Left Hemisphere Dysfunction in Turner's Syndrome. Implications for Mechanism.

    Science.gov (United States)

    Rovet, Joanne F.

    This study contrasts the performance of a 17-year-old female subject with Turner's syndrome before and after developing left temporal lobe seizures, as a means of identifying the mechanism responsible for the Turner's syndrome spatial impairment. The results revealed a deficit in spatial processing before onset of the seizure disorder. Results…

  9. Speech Disorders in Neurofibromatosis Type 1: A Sample Survey

    Science.gov (United States)

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Background: Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10 000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. Aims: This study serves as a pilot to identify key…

  10. Speech identity conversion

    Czech Academy of Sciences Publication Activity Database

    Vondra, Martin; Vích, Robert

    Vol. 3445, - (2005), s. 421-426 ISSN 0302-9743. [International Summer School on Neural Nets "E. R. Caianiello". Course: Nonlinear Speech Modeling and Applications /9./. Vietri sul Mare, 13.09.2004-18.09.2004] R&D Projects: GA ČR(CZ) GA102/04/1097; GA ČR(CZ) GA102/02/0124; GA MŠk(CZ) OC 277.001 Institutional research plan: CEZ:AV0Z2067918 Keywords : speech synthesis * computer science Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.402, year: 2005

  11. The chairman's speech

    International Nuclear Information System (INIS)

    Allen, A.M.

    1986-01-01

    The paper contains a transcript of a speech by the chairman of the UKAEA, to mark the publication of the 1985/6 annual report. The topics discussed in the speech include: the Chernobyl accident and its effect on public attitudes to nuclear power, management and disposal of radioactive waste, the operation of UKAEA as a trading fund, and the UKAEA development programmes. The development programmes include work on the following: fast reactor technology, thermal reactors, reactor safety, health and safety aspects of water cooled reactors, the Joint European Torus, and under-lying research. (U.K.)

  12. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    is investigated on three candidates for so-called ‘simplified registers’: speech to children (also called motherese or baby talk), speech to foreigners (also called foreigner talk) and speech to robots. The volume integrates research from various disciplines, such as psychology, sociolinguistics...

  13. Speech Communication and Signal Processing

    Indian Academy of Sciences (India)

    Communicating with a machine in a natural mode such as speech brings out not only several technological challenges, but also limitations in our understanding of how people communicate so effortlessly. The key is to understand the distinction between speech processing (as is done in human communication) and speech ...

  14. Searching for world domination

    CERN Multimedia

    Quillen, E

    2004-01-01

    "Optimists might believe Microsoft suffered a setback last week that will impede its progress toward world domination, but I suspect the company has already found a way to prevail. At issue before the European Union was Microsoft's bundling of its Windows Media Player with its operating system" (1 page)

  15. Iron dominated magnets

    International Nuclear Information System (INIS)

    Fischer, G.E.

    1985-07-01

These two lectures on iron dominated magnets are meant for the student of accelerator science and contain general treatments of the subjects of design and construction. The material is arranged in the categories: General Concepts and Cost Considerations, Profile Configuration and Harmonics, Magnetic Measurements, a few examples of ''special magnets'', and Materials and Practices. Extensive literature is provided

  16. Autosomal dominant polycystisk nyresygdom

    DEFF Research Database (Denmark)

    Naver, Signe Vinsand; Ørskov, Bjarne; Jensen, Anja Møller

    2017-01-01

Autosomal dominant polycystic kidney disease (ADPKD) is the most common genetic disorder causing end-stage renal disease. In Denmark, an estimated 5,000 patients are living with the disease. Most of the patients are in regular contact with physicians due to the progression of kidney failure...

  17. Seeing voices: High-density electrical mapping and source-analysis of the multisensory mismatch negativity evoked during the McGurk illusion.

    Science.gov (United States)

    Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J

    2007-02-01

    Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.

  18. Hearing speech in music

    Directory of Open Access Journals (Sweden)

    Seth-Reino Ekström

    2011-01-01

    Full Text Available The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  19. Media Criticism Group Speech

    Science.gov (United States)

    Ramsey, E. Michele

    2004-01-01

    Objective: To integrate speaking practice with rhetorical theory. Type of speech: Persuasive. Point value: 100 points (i.e., 30 points based on peer evaluations, 30 points based on individual performance, 40 points based on the group presentation), which is 25% of course grade. Requirements: (a) References: 7-10; (b) Length: 20-30 minutes; (c)…

  20. Expectations and speech intelligibility.

    Science.gov (United States)

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.

  1. fMRI activation in the middle frontal gyrus as an indicator of hemispheric dominance for language in brain tumor patients: a comparison with Broca's area.

    Science.gov (United States)

    Dong, Jian W; Brennan, Nicole M Petrovich; Izzo, Giana; Peck, Kyung K; Holodny, Andrei I

    2016-05-01

    Functional MRI (fMRI) can assess language lateralization in brain tumor patients; however, this can be limited if the primary language area-Broca's area (BA)-is affected by the tumor. We hypothesized that the middle frontal gyrus (MFG) can be used as a clinical indicator of hemispheric dominance for language during presurgical workup. Fifty-two right-handed subjects with solitary left-hemispheric primary brain tumors were retrospectively studied. Subjects performed a verbal fluency task during fMRI. The MFG was compared to BA for fMRI voxel activation, language laterality index (LI), and the effect of tumor grade on the LI. Language fMRI (verbal fluency) activated more voxels in MFG than in BA (MFG = 315, BA = 216, p < 0.001). Subjects with high-grade tumors showed weaker language lateralization than those with low-grade tumors in both BA and MFG (p = 0.02, p = 0.02, respectively). MFG is comparable to BA in its ability to indicate hemispheric dominance for language using a measure of verbal fluency and may be an adjunct measure in the clinical determination of language laterality for presurgical planning.
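The laterality index mentioned above is conventionally computed from suprathreshold voxel counts in homologous left- and right-hemisphere regions as LI = (L - R)/(L + R); a small illustrative sketch (the voxel counts are made up, not taken from the study):

```python
def laterality_index(left_voxels: int, right_voxels: int) -> float:
    """Conventional fMRI laterality index:
    +1 = fully left-lateralized, -1 = fully right-lateralized, 0 = bilateral."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no suprathreshold voxels in either ROI")
    return (left_voxels - right_voxels) / total

# Hypothetical left-dominant subject: 315 left-MFG vs 105 right-MFG voxels.
print(laterality_index(315, 105))  # 0.5
```

An LI above a positive threshold (often around +0.2) is typically read as left-hemisphere dominance, which is why weaker (lower) LI values in high-grade tumor patients are clinically meaningful.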

  2. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, through investigation of the tower of Babel, the archetypal phonemes, and a study of the reasons for the use of language, is undertaken in order to create an artistic work investigating the nature of speech. A system working on vowels and consonants, which converts the speech energy into visual particles that form complex visual structures, provides us with a means to present the expressiveness of speech in a visual mode. This system is presented in an artwork whose scenario is inspired by the reasons of language...

  3. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    OpenAIRE

    Byeongwook Lee; Kwang-Hyun Cho

    2016-01-01

    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintai...
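The speech envelope used as a temporal reference above can be approximated in a few lines; a dependency-light numpy sketch using full-wave rectification plus a moving-average smoother (a Hilbert transform with low-pass filtering is the more usual method; the signal parameters here are invented for illustration):

```python
import numpy as np

def amplitude_envelope(signal: np.ndarray, fs: int, win_ms: float = 25.0) -> np.ndarray:
    """Crude amplitude envelope: rectify the waveform, then smooth it with
    a moving-average window of roughly win_ms milliseconds."""
    win = max(1, int(fs * win_ms / 1000.0))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

# Synthetic "speech-like" signal: a 100 Hz carrier amplitude-modulated at
# 4 Hz, roughly the syllable rate of running speech.
fs = 8000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 100 * t)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
env = amplitude_envelope(carrier * modulation, fs)

# Minima of the envelope mark candidate segment boundaries, in place of
# the fixed frame size used by conventional segmentation.
print(env.shape)  # (8000,)
```

Segmenting at envelope minima rather than at fixed frame offsets is one way to respect the quasi-regular structure the abstract describes.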

  4. Representation of speech variability.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael F

    2017-07-01

    Speech signals provide both linguistic information (e.g., words and sentences) as well as information about the speaker who produced the message (i.e., social-indexical information). Listeners store highly detailed representations of these speech signals, which are simultaneously indexed with linguistic and social category membership. A variety of methodologies-forced-choice categorization, rating, and free classification-have shed light on listeners' cognitive-perceptual representations of the social-indexical information present in the speech signal. Specifically, listeners can accurately identify some talker characteristics, including native language status, approximate age, sex, and gender. Additionally, listeners have sensitivity to other speaker characteristics-such as sexual orientation, regional dialect, native language for non-native speakers, race, and ethnicity-but listeners tend to be less accurate or more variable at categorizing or rating speakers based on these constructs. However, studies have not necessarily incorporated more recent conceptions of these constructs (e.g., separating listeners' perceptions of race vs ethnicity) or speakers who do not fit squarely into specific categories (e.g., for sex perception, intersex individuals; for gender perception, genderqueer speakers; for race perception, multiracial speakers). Additional research on how the intersections of social-indexical categories influence speech perception is also needed. As the field moves forward, scholars from a variety of disciplines should be incorporated into investigations of how listeners' extract and represent facets of personal identity from speech. Further, the impact of these representations on our interactions with one another in contexts outside of the laboratory should continue to be explored. WIREs Cogn Sci 2017, 8:e1434. doi: 10.1002/wcs.1434 This article is categorized under: Linguistics > Language Acquisition Linguistics > Language in Mind and Brain Psychology

  5. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300-some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  6. [Dominant Thalamus and Aphasia].

    Science.gov (United States)

    Nakano, Akiko; Shimomura, Tatsuo

    2015-12-01

    Many studies have shown that lesions of the dominant thalamus precipitate language disorders in a similar manner to transcortical aphasias, in a phenomenon known as "thalamic aphasia." In some cases, however, aphasia may not occur or may appear only transiently following thalamic lesions. Furthermore, dominant thalamic lesions can produce changes in character, as observed in patients with amnesic disorder. Previous work has explored the utility of thalamic aphasia as a discriminative feature for the classification of aphasia. Although the thalamus may be involved in the function of the brainstem reticular activating system and play a role in the attentional network and in the memory functions of the Papez and Yakovlev circuits, the mechanism by which a thalamic lesion leads to the emergence of aphasic disorders is unclear. In this review, we survey historical and recent literature on thalamic aphasia in an attempt to understand the neural processes affected by thalamic lesions.

  7. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  8. Conversation, speech acts, and memory.

    Science.gov (United States)

    Holtgraves, Thomas

    2008-03-01

    Speakers frequently have specific intentions that they want others to recognize (Grice, 1957). These specific intentions can be viewed as speech acts (Searle, 1969), and I argue that they play a role in long-term memory for conversation utterances. Five experiments were conducted to examine this idea. Participants in all experiments read scenarios ending with either a target utterance that performed a specific speech act (brag, beg, etc.) or a carefully matched control. Participants were more likely to falsely recall and recognize speech act verbs after having read the speech act version than after having read the control version, and the speech act verbs served as better recall cues for the speech act utterances than for the controls. Experiment 5 documented individual differences in the encoding of speech act verbs. The results suggest that people recognize and retain the actions that people perform with their utterances and that this is one of the organizing principles of conversation memory.

  9. Autosomal dominant cramping disease.

    Science.gov (United States)

    Ricker, K; Moxley, R T

    1990-07-01

    A family was studied in which four generations (16 of 41 members) suffered from painful recurrent muscle cramping. A clear pattern of autosomal dominant inheritance was noted. The cramping first developed during adolescence or early adulthood. Electromyographic analysis indicated a neurogenic origin. The cramps seemed to be due to dysfunction of the motor neurons. The mechanisms underlying this alteration are unclear and require further investigation.

  10. Dominant optic atrophy

    DEFF Research Database (Denmark)

    Lenaers, Guy; Hamel, Christian; Delettre, Cécile

    2012-01-01

    DEFINITION OF THE DISEASE: Dominant Optic Atrophy (DOA) is a neuro-ophthalmic condition characterized by a bilateral degeneration of the optic nerves, causing insidious visual loss, typically starting during the first decade of life. The disease primarily affects the retinal ganglion cells (RGC) and their axons forming the optic nerve, which transfer the visual information from the photoreceptors to the lateral geniculate nucleus in the brain.

  11. Public owners will dominate

    International Nuclear Information System (INIS)

    Bakken, Stein Arne

    2003-01-01

    In ten years, public ownership will still dominate the energy supply sector in Norway. Statkraft will be the big actor. Norway will by then be integrated into a European power market through more cables, and the power price will be lower and more stable. The market will be important, but within frames set by the politicians. This article quotes the views of two central figures in the energy sector on the energy supply industry in 2014

  12. Relationship between speech motor control and speech intelligibility in children with speech sound disorders.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Pukonen, Margit; Goshulak, Debra; Yu, Vickie Y; Kadis, Darren S; Kroll, Robert; Pang, Elizabeth W; De Nil, Luc F

    2013-01-01

    The current study was undertaken to investigate the impact of speech motor issues on the speech intelligibility of children with moderate to severe speech sound disorders (SSD) within the context of the PROMPT intervention approach. The word-level Children's Speech Intelligibility Measure (CSIM), the sentence-level Beginner's Intelligibility Test (BIT) and tests of speech motor control and articulation proficiency were administered to 12 children (3;11 to 6;7 years) before and after PROMPT therapy. PROMPT treatment was provided for 45 min twice a week for 8 weeks. Twenty-four naïve adult listeners aged 22-46 years judged the intelligibility of the words and sentences. For the CSIM, each time a recorded word was played, listeners were asked to look at a list of 12 words (multiple-choice format) and circle the word they heard; for BIT sentences, listeners were asked to write down everything they heard. Words correctly circled (CSIM) or transcribed (BIT) were averaged across three naïve judges to calculate percentage speech intelligibility. Speech intelligibility at both the word and sentence level was significantly correlated with speech motor control, but not with articulatory proficiency. Further, the severity of speech motor planning and sequencing issues may potentially be a limiting factor in connected speech intelligibility, highlighting the need to target these issues early and directly in treatment. The reader will be able to: (1) outline the advantages and disadvantages of using word- and sentence-level speech intelligibility tests; (2) describe the impact of speech motor control and articulatory proficiency on speech intelligibility; and (3) describe how speech motor control and speech intelligibility data may provide critical information to aid treatment planning. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Acoustical conditions for speech communication in active elementary school classrooms

    Science.gov (United States)

    Sato, Hiroshi; Bradley, John

    2005-04-01

    Detailed acoustical measurements were made in 34 active elementary school classrooms with a typical rectangular room shape in schools near Ottawa, Canada. There was an average of 21 students per classroom. The measurements were made to obtain accurate indications of the acoustical quality of conditions for speech communication during actual teaching activities. Mean speech and noise levels were determined from the distribution of recorded sound levels, and the average speech-to-noise ratio was 11 dBA. Measured mid-frequency reverberation times (RT) during the same occupied conditions varied from 0.3 to 0.6 s, and were a little less than for the unoccupied rooms. RT values were not related to noise levels. Octave-band speech and noise levels, useful-to-detrimental ratios, and Speech Transmission Index values were also determined. Key results included: (1) the average vocal effort of teachers corresponded to louder than Pearsons' raised voice level; (2) teachers increase their voice level to overcome ambient noise; (3) effective speech levels can be enhanced by up to 5 dB by early reflection energy; and (4) student activity is seen to be the dominant noise source, increasing average noise levels by up to 10 dBA during teaching activities. [Work supported by CLLRnet.]
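The early-reflection enhancement in result (3) follows from energetic (power) addition of sound levels; a small sketch of combining levels in dB (the particular level values below are hypothetical, not measurements from the study):

```python
import math

def combine_levels(*levels_db: float) -> float:
    """Energetic sum of incoherent sound levels in dB:
    convert each level to power, add, convert back."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Hypothetical case: direct speech arriving at 50 dBA plus early-reflection
# energy at 53.3 dBA. The energetic sum is about 55 dBA, i.e., roughly a
# 5 dB boost over the direct sound alone.
direct = 50.0
early = 53.3
effective = combine_levels(direct, early)
print(round(effective, 1))  # 55.0
print(round(effective - direct, 1))  # 5.0
```

The same dB arithmetic underlies useful-to-detrimental ratios, which compare the "useful" energy (direct sound plus early reflections) against the "detrimental" energy (late reverberation plus noise).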

  14. Autosomal Dominant Polycystic Kidney Disease

    Science.gov (United States)

    NIH Fact Sheet. YESTERDAY: Autosomal Dominant Polycystic Kidney Disease (ADPKD) resulted ...

  15. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  16. Dominating biological networks.

    Directory of Open Access Journals (Sweden)

    Tijana Milenković

    Full Text Available Proteins are essential macromolecules of life that carry out most cellular processes. Since proteins aggregate to perform function, and since protein-protein interaction (PPI) networks model these aggregations, one would expect to uncover new biology from PPI network topology. Hence, using PPI networks to predict protein function and the role of protein pathways in disease has received attention. A debate remains open about whether network properties of "biologically central (BC)" genes (i.e., their protein products), such as those involved in aging, cancer, infectious diseases, or signaling and drug-targeted pathways, exhibit some topological centrality compared to the rest of the proteins in the human PPI network. To help resolve this debate, we design new network-based approaches and apply them to get new insight into biological function and disease. We hypothesize that BC genes have a topologically central (TC) role in the human PPI network. We propose two different concepts of topological centrality. We design a new centrality measure to capture complex wirings of proteins in the network that identifies as TC those proteins that reside in dense extended network neighborhoods. Also, we use the notion of domination and find dominating sets (DSs) in the PPI network, i.e., sets of proteins such that every protein is either in the DS or is a neighbor of the DS. Clearly, a DS has a TC role, as it enables efficient communication between different network parts. We find statistically significant enrichment in BC genes of TC nodes and outperform the existing methods, indicating that genes involved in key biological processes occupy topologically complex and dense regions of the network and correspond to its "spine" that connects all other network parts and can thus pass cellular signals efficiently throughout the network. To our knowledge, this is the first study that explores domination in the context of PPI networks.
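Finding a minimum dominating set is NP-hard, so a greedy heuristic is a common practical choice; a small sketch on a toy graph (the abstract does not state which DS algorithm the authors used, and the network below is invented):

```python
def greedy_dominating_set(adj: dict[str, set[str]]) -> set[str]:
    """Greedy approximation of a minimum dominating set: repeatedly pick
    the node that newly dominates the most not-yet-dominated nodes, until
    every node is either in the set or adjacent to it."""
    undominated = set(adj)
    ds: set[str] = set()
    while undominated:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        ds.add(best)
        undominated -= {best} | adj[best]
    return ds

# Toy "interaction network": hub protein H touching A-D, plus an isolated
# pair X-Y that the hub cannot dominate.
net = {
    "H": {"A", "B", "C", "D"},
    "A": {"H"}, "B": {"H"}, "C": {"H"}, "D": {"H"},
    "X": {"Y"}, "Y": {"X"},
}
ds = greedy_dominating_set(net)
# Every protein is in ds or adjacent to a member of ds.
print(sorted(ds))
```

In a PPI setting, the proteins selected into such a set are exactly the "spine" candidates the abstract describes: a small subset from which every other protein is one interaction away.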

  17. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  18. Sensorimotor Interactions in Speech Learning

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    2011-10-01

    Full Text Available Auditory input is essential for normal speech development and plays a key role in speech production throughout the life span. In traditional models, auditory input plays two critical roles: (1) establishing the acoustic correlates of speech sounds that serve, in part, as the targets of speech production, and (2) as a source of feedback about a talker's own speech outcomes. This talk will focus on both of these roles, describing a series of studies that examine the capacity of children and adults to adapt to real-time manipulations of auditory feedback during speech production. In one study, we examined sensory and motor adaptation to a manipulation of auditory feedback during production of the fricative “s”. In contrast to prior accounts, adaptive changes were observed not only in speech motor output but also in subjects' perception of the sound. In a second study, speech adaptation was examined following a period of auditory-perceptual training targeting the perception of vowels. The perceptual training was found to systematically improve subjects' motor adaptation response to altered auditory feedback during speech production. The results of both studies support the idea that perceptual and motor processes are tightly coupled in speech production learning, and that the degree and nature of this coupling may change with development.

  19. Speech is Golden

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter

    2014-01-01

    Most of the Danish municipalities are ready to begin to adopt automatic speech recognition, but at the same time remain nervous following a long series of bad business cases in the recent past. Complaints are voiced over costly licences and low service levels, typical effects of a de facto monopoly on the supply side. The present article reports on a new public action strategy which has taken shape in the course of 2013-14. While Denmark is a small language area, our public sector is well organised and has considerable purchasing power. Across this past year, Danish local authorities have organised around the speech technology challenge, they have formulated a number of joint questions and new requirements to be met by suppliers and have deliberately worked towards formulating tendering material which will allow fair competition. Public researchers have contributed to this work, including the author of the present article, in the role of economically neutral advisers. The aim of the initiative is to pave the way for the first profitable contract in the field - which we hope to see in 2014 - an event which would precisely break the present deadlock and open up a billion EUR market for speech technology...


  1. The logic of indirect speech

    Science.gov (United States)

    Pinker, Steven; Nowak, Martin A.; Lee, James J.

    2008-01-01

    When people speak, they often insinuate their intent indirectly rather than stating it as a bald proposition. Examples include sexual come-ons, veiled threats, polite requests, and concealed bribes. We propose a three-part theory of indirect speech, based on the idea that human communication involves a mixture of cooperation and conflict. First, indirect requests allow for plausible deniability, in which a cooperative listener can accept the request, but an uncooperative one cannot react adversarially to it. This intuition is supported by a game-theoretic model that predicts the costs and benefits to a speaker of direct and indirect requests. Second, language has two functions: to convey information and to negotiate the type of relationship holding between speaker and hearer (in particular, dominance, communality, or reciprocity). The emotional costs of a mismatch in the assumed relationship type can create a need for plausible deniability and, thereby, select for indirectness even when there are no tangible costs. Third, people perceive language as a digital medium, which allows a sentence to generate common knowledge, to propagate a message with high fidelity, and to serve as a reference point in coordination games. This feature makes an indirect request qualitatively different from a direct one even when the speaker and listener can infer each other's intentions with high confidence. PMID:18199841

  2. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  3. Neurophysiology of Speech Differences in Childhood Apraxia of Speech

    Science.gov (United States)

    Preston, Jonathan L.; Molfese, Peter J.; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes. PMID:25090016

  4. IBM MASTOR SYSTEM: Multilingual Automatic Speech-to-speech Translator

    National Research Council Canada - National Science Library

    Gao, Yuqing; Gu, Liang; Zhou, Bowen; Sarikaya, Ruhi; Afify, Mohamed; Kuo, Hong-Kwang; Zhu, Wei-zhong; Deng, Yonggang; Prosser, Charles; Zhang, Wei

    2006-01-01

    .... Challenges include speech recognition and machine translation in adverse environments, lack of training data and linguistic resources for under-studied languages, and the need to rapidly develop...

  5. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  6. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  7. On dominator colorings in graphs

    Indian Academy of Sciences (India)

    A dominator coloring of a graph G is a proper coloring of G in which every vertex dominates every vertex of at least one color class. The minimum number of colors required for a dominator coloring of G is called the dominator chromatic number of G and is denoted by χd(G). In this paper we present several results on ...
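The definition in this record can be illustrated with a small brute-force sketch (function and variable names are illustrative, not from the paper, and exhaustive search is only feasible for very small graphs):

```python
from itertools import product

def is_dominator_coloring(adj, coloring):
    """Proper coloring in which every vertex dominates all of some color class."""
    n = len(adj)
    # proper: adjacent vertices must receive different colors
    if any(coloring[u] == coloring[v] for u in range(n) for v in adj[u]):
        return False
    classes = {}
    for v, c in enumerate(coloring):
        classes.setdefault(c, set()).add(v)
    # every vertex's closed neighbourhood must contain some entire color class
    return all(
        any(cls <= set(adj[v]) | {v} for cls in classes.values())
        for v in range(n)
    )

def dominator_chromatic_number(adj):
    """Brute force over all colorings; only feasible for very small graphs."""
    n = len(adj)
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if len(set(coloring)) == k and is_dominator_coloring(adj, coloring):
                return k
    return n
```

For the four-vertex path 0-1-2-3 (adjacency list [[1], [0, 2], [1, 3], [2]]) this returns 3: a proper 2-coloring of the path exists, but no vertex at an end then dominates a full color class.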

  8. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden-Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
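The feature-based detection problem described above is usually contrasted with plain energy-based endpoint detection, which marks any high-energy frame as speech and is therefore fooled by breaths, clicks, and other NLSS. A minimal sketch of that baseline (frame length and threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

def detect_speech_frames(signal, frame_len=400, threshold_db=-40.0):
    """Naive energy-based endpoint detection: flag frames whose energy,
    relative to the loudest frame, exceeds a fixed dB threshold.
    Any energetic non-speech sound (breath, click) is mislabeled as speech."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    # energy in dB relative to the peak frame (small epsilon avoids log(0))
    energy_db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    return energy_db > threshold_db
```

On a synthetic signal of silence, a loud burst, and silence again, only the burst frames are flagged; the paper's point is that a breath in the "silence" would be flagged too, which is why language-agnostic NLSS classification is needed on top of this.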

  9. Rehabilitation of Oronasal Speech Disorders

    Directory of Open Access Journals (Sweden)

    Hashem Shemshadi

    2006-09-01

    Full Text Available The oronasal region, an important organ of taste and smell, is also respected for its impact on resonance, which is crucial for normal speech production. Different congenital, acquired, and/or developmental defects may have an impact not only on the quality of respiration and phonation but also on resonance and the process of normal speech. This article will enable readers to focus on disorders of these important neuroanatomical speech zones and their respective rehabilitation methods in different derangements. Among all other defects, oronasal malfunction definitely has an influence on oronasal sound resonance and further impairs normal speech production. A rehabilitative approach by a speech-language pathologist is highly recommended to alleviate most oronasal speech disorders.

  10. The relationship between working memory and apraxia of speech A interrelação entre memória operacional e apraxia de fala

    Directory of Open Access Journals (Sweden)

    Fernanda Chapchap Martins

    2009-09-01

    Full Text Available The present study aimed to verify the relationship between working memory (WM) and apraxia of speech and to explore which WM components are involved in the motor planning of speech. A total of 22 apraxic patients and 22 healthy adults were studied. The patients were selected according to the following inclusion criteria: a single brain lesion in the left hemisphere, presence of apraxia of speech, and sufficient oral comprehension. All participants underwent an assessment of apraxia of speech and an evaluation of working memory capacity: forward and backward digit span, repetition of long and short words, and the Rey Auditory Verbal Learning Test, which probes the episodic buffer in addition to the articulatory loop. The performance of the apraxic patients on all memory tests was significantly poorer than that of the controls. The study concluded that participants with apraxia of speech present a working memory deficit, probably related to the articulatory process of the phonoarticulatory loop, and that all apraxic patients showed compromised working memory.

  11. Increased activation of the hippocampus during a Chinese character subvocalization task in adults with cleft lip and palate palatoplasty and speech therapy.

    Science.gov (United States)

    Zhang, Wenjing; Li, Chunlin; Chen, Long; Xing, Xiyue; Li, Xiangyang; Yang, Zhi; Zhang, Haiyan; Chen, Renji

    2017-08-16

    This study aimed to explore brain activation in patients with cleft lip and palate (CLP) using a Chinese character subvocalization task, in which the stimuli were selected from a clinical articulation evaluation test. CLP is a congenital disability. Individuals with CLP usually have articulation disorder caused by abnormal lip and palate structure. Previous studies showed that primary somatosensory and motor areas had a significant difference in activation in patients with CLP. However, whether brain activation is restored to a normal level after palatoplasty and speech rehabilitation is not clear. Two groups, adults after palatoplasty with speech training and age-matched and sex-matched controls, participated in this study. Brain activation during the Chinese character subvocalization task and behavioral data were recorded using functional MRI. Patients with CLP responded to the target significantly more slowly compared with the controls, whereas no significant difference in accuracy was found between the groups. Brain activation had similar patterns between groups. Broca's area, Wernicke's area, motor areas, somatosensory areas, and insula in both hemispheres, and the dorsolateral prefrontal cortex and the ventrolateral prefrontal cortex in the right hemisphere were activated in both groups, with no statistically significant difference. Furthermore, the two-sample t-test showed that the hippocampus in the left hemisphere was activated significantly in patients with CLP compared with the controls. The results suggested that the hippocampus might be involved in the language-related neural circuit in patients with CLP, playing a role in pronunciation retrieval that helps patients with CLP complete pronunciation effectively.

  12. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  13. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within command and control, text entry and search are presented with an emphasis on mobile text entry.

  14. Psychotic speech: a neurolinguistic perspective.

    Science.gov (United States)

    Anand, A; Wales, R J

    1994-06-01

    The existence of an aphasia-like language disorder in psychotic speech has been the subject of much debate. This paper argues that a discrete language disorder could be an important cause of the disturbance seen in psychotic speech. A review is presented of classical clinical descriptions and experimental studies that have explored the similarities between psychotic language impairment and aphasic speech. The paper proposes neurolinguistic tasks which may be used in future studies to elicit subtle language impairments in psychotic speech. The usefulness of a neurolinguistic model for further research in the aetiology and treatment of psychosis is discussed.

  15. Phonetic Consequences of Speech Disfluency

    National Research Council Canada - National Science Library

    Shriberg, Elizabeth E

    1999-01-01

    .... Analyses of American English show that disfluency affects a variety of phonetic aspects of speech, including segment durations, intonation, voice quality, vowel quality, and coarticulation patterns...

  16. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    2016-08-26

    ; speech-to-speech translation; language identification. ... interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers.

  17. Speech Recognition: How Do We Teach It?

    Science.gov (United States)

    Barksdale, Karl

    2002-01-01

    States that growing use of speech recognition software has made voice writing an essential computer skill. Describes how to present the topic, develop basic speech recognition skills, and teach speech recognition outlining, writing, proofreading, and editing. (Contains 14 references.) (SK)

  18. Speech and Language Problems in Children

    Science.gov (United States)

    Children vary in their development of speech and language skills. Health care professionals have lists of milestones ... it may be due to a speech or language disorder. Children who have speech disorders may have ...

  19. An optimal speech processor for efficient human speech ...

    Indian Academy of Sciences (India)

    above, the speech signal is recorded at 21739 Hz for English subjects and 20000 Hz for Cantonese and Georgian subjects. We downsampled the speech signals to 16 kHz for our analysis. Using these parallel acoustic and articulatory data from Cantonese and Georgian, we will be able to examine our communication ...

  20. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Directory of Open Access Journals (Sweden)

    Sid-Ahmed Selouani

    2009-01-01

    Full Text Available Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  1. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  2. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  3. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Measurement of speech parameters in casual speech of dementia patients. Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). (1) CLCG, University of Groningen (NL); (2) HU University of Applied Sciences Utrecht (NL); (3) St. Marienhospital - Vechta, Geriatric Clinic Vechta

  4. Decreased language laterality in tuberous sclerosis complex: a relationship between language dominance and tuber location as well as history of epilepsy.

    Science.gov (United States)

    Gallagher, Anne; Tanaka, Naoaki; Suzuki, Nao; Liu, Hesheng; Thiele, Elizabeth A; Stufflebeam, Steven M

    2012-09-01

    Nearly 90% of patients with tuberous sclerosis complex (TSC) have epilepsy. Epilepsy surgery can be considered, which often requires a presurgical assessment of language lateralization. This is the first study to investigate language lateralization in TSC patients using magnetoencephalography. Fifteen patients performed a language task during magnetoencephalography recording. Cerebral generators of language-evoked fields (EF) were identified in each patient. Laterality indices (LI) were computed using magnetoencephalography data extracted from the inferior frontal as well as middle and superior temporal gyri from both hemispheres between 250 and 550 ms. Source analysis demonstrated a fusiform gyrus activation, followed by an activation located in the basal temporal language area and middle and superior temporal gyri responses, ending with an inferior frontal activation. Eleven patients (73.3%) had left-hemisphere language dominance, whereas four patients (26.7%) showed a bilateral language pattern, which was associated with a history of epilepsy and presence of tubers in language-related areas. Copyright © 2012 Elsevier Inc. All rights reserved.
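The laterality index mentioned in this record is conventionally computed as LI = (L - R)/(L + R) over activation measures from homologous regions of the two hemispheres; the abstract does not spell out its formula or its bilateral cutoff, so the cutoff below is an illustrative assumption (studies commonly use values around 0.1 to 0.2):

```python
def laterality_index(left, right):
    """Standard laterality index: LI = (L - R) / (L + R).
    +1 means fully left-lateralised activity, -1 fully right-lateralised."""
    total = left + right
    if total == 0:
        raise ValueError("no measurable activity in either hemisphere")
    return (left - right) / total

def classify_dominance(li, cutoff=0.1):
    """Cutoff of 0.1 is an illustrative choice, not the study's criterion."""
    if li > cutoff:
        return "left-dominant"
    if li < -cutoff:
        return "right-dominant"
    return "bilateral"
```

With activation summed over the inferior frontal and middle/superior temporal regions of each hemisphere, a patient with values (80, 20) would be classified as left-dominant and one with (50, 50) as bilateral.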

  5. MRI language dominance assessment in epilepsy patients at 1.0 T: region of interest analysis and comparison with intracarotid amytal testing

    Energy Technology Data Exchange (ETDEWEB)

    Deblaere, K.; Vandemaele, P.; Tieleman, A.; Achten, E. [Department of Neuroradiology, Ghent University Hospital, De Pintelaan 185, 9000, Ghent (Belgium); Boon, P.A.; Vonck, K. [Reference Center for Refractory Epilepsy of the Department of Neurology, Ghent University Hospital, Ghent (Belgium); Vingerhoets, G. [Laboratory for Neuropsychology, Neurology Section of the Department of Internal Medicine, Ghent University, Ghent (Belgium); Backes, W. [Department of Neuroradiology, University Hospital Maastricht, Maastricht (Netherlands); Defreyne, L. [Department of Interventional Radiology, Ghent University Hospital, Ghent (Belgium)

    2004-06-01

    The primary goal of this study was to test the reliability of presurgical language lateralization in epilepsy patients with functional magnetic resonance imaging (fMRI) with a 1.0-T MR scanner using a simple word generation paradigm and conventional equipment. In addition, hemispherical fMRI language lateralization analysis and region of interest (ROI) analysis in the frontal and temporo-parietal regions were compared with the intracarotid amytal test (IAT). Twenty epilepsy patients under presurgical evaluation were prospectively examined by both fMRI and IAT. The fMRI experiment consisted of a word chain task (WCT) using the conventional headphone set and a sparse sequence. In 17 of the 20 patients, data were available for comparison between the two procedures. Fifteen of these 17 patients were categorized as left hemispheric dominant, and 2 patients demonstrated bilateral language representation by both fMRI and IAT. The highest reliability for lateralization was obtained using frontal ROI analysis. Hemispherical analysis was less powerful and reliable in all cases but one, while temporo-parietal ROI analysis was unreliable as a stand-alone analysis when compared with IAT. The effect of statistical threshold on language lateralization prompted the use of t-value-dependent lateralization index plots. This study illustrates that fMRI-determined language lateralization can be performed reliably in a clinical MR setting operating at a low field strength of 1 T without expensive stimulus presentation systems. (orig.)

  6. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ...) has used its existing technology in phonetic speech recognition, audio signal processing, and multilingual language translation to design and demonstrate an advanced audio interface for speech...

  7. Teaching Speech Acts

    Directory of Open Access Journals (Sweden)

    Teaching Speech Acts

    2007-01-01

    Full Text Available In this paper I argue that pragmatic ability must become part of what we teach in the classroom if we are to realize the goals of communicative competence for our students. I review the research on pragmatics, especially those articles that point to the effectiveness of teaching pragmatics in an explicit manner, and those that posit methods for teaching. I also note two areas of scholarship that address classroom needs—the use of authentic data and appropriate assessment tools. The essay concludes with a summary of my own experience teaching speech acts in an advanced-level Portuguese class.

  8. Dissociated Crossed Speech Areas in a Tumour Patient

    Directory of Open Access Journals (Sweden)

    Jörg Mauler

    2017-05-01

    Full Text Available In the past, the eloquent areas could be deliberately localised only by the invasive Wada test, and the very rare cases of dissociated crossed speech areas were found accidentally, based on the clinical symptomatology. Today, functional magnetic resonance imaging (fMRI)-based imaging can be employed to non-invasively localise the eloquent areas in brain tumour patients for therapy planning. A 41-year-old, left-handed man with a low-grade glioma in the left frontal operculum extending to the insular cortex, tension headaches, and anomic aphasia over 5 months underwent a pre-operative speech area localisation fMRI measurement, which revealed evidence of a transhemispheric disposition: the dominant Wernicke speech area is located in the left hemisphere, while Broca's area is strongly lateralised to the right hemisphere. The outcome of the Wada test and the intraoperative cortico-subcortical stimulation mapping were congruent with this finding. After tumour removal, language area function was fully preserved. In brain tumours with a risk of impaired speech function, the rare dissociated crossed speech areas disposition may gain clinically relevant meaning by allowing more extended tumour removal. Hence, for its identification, diagnostics which take into account both brain hemispheres, such as fMRI, are recommended.

  9. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... Some of the history of gradual infusion of the modulation spectrum concept into automatic recognition of speech (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next ...

  10. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  11. Methods of Teaching Speech Recognition

    Science.gov (United States)

    Rader, Martha H.; Bailey, Glenn A.

    2010-01-01

    Objective: This article introduces the history and development of speech recognition, addresses its role in the business curriculum, outlines related national and state standards, describes instructional strategies, and discusses the assessment of student achievement in speech recognition classes. Methods: Research methods included a synthesis of…

  12. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Some of the history of gradual infusion of the modulation spectrum concept into automatic recognition of speech (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next, the frequency ...

  13. Indirect speech acts in English

    OpenAIRE

    Василина, В. Н.

    2013-01-01

    The article deals with indirect speech acts in English-speaking discourse. Different approaches to their analysis and the reasons for their use are discussed. It is argued that the choice of the form of speech acts depends on the parameters of the communicative partners.

  14. Speech Prosody in Cerebellar Ataxia

    Science.gov (United States)

    Casper, Maureen A.; Raphael, Lawrence J.; Harris, Katherine S.; Geibel, Jennifer M.

    2007-01-01

    Persons with cerebellar ataxia exhibit changes in physical coordination and speech and voice production. Previously, these alterations of speech and voice production were described primarily via perceptual coordinates. In this study, the spatial-temporal properties of syllable production were examined in 12 speakers, six of whom were healthy…

  15. Perceptual Learning of Interrupted Speech

    NARCIS (Netherlands)

    Benard, Michel Ruben; Başkent, Deniz

    2013-01-01

    The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps intelligibility of degraded speech in daily life. Two hypotheses were investigated

  16. An Analysis of Maxims in Susilo Bambang Yudhoyono’s Political Speeches

    OpenAIRE

    Pasaribu, Mestika

    2015-01-01

    The thesis entitled "An Analysis of Maxims in Susilo Bambang Yudhoyono's Political Speeches" analyzes the application of maxims in Yudhoyono's speeches. The theoretical basis is the theory of Geoffrey Leech (1983), which describes levels of politeness through the use of maxims. The research uses a descriptive qualitative method. The study aims to determine the types of maxims used by Susilo Bambang Yudhoyono and the most dominant maxim used by Susilo Bambang Yudhoyono in his polit...

  17. Training changes processing of speech cues in older adults with hearing loss

    OpenAIRE

    Anderson, Samira; White-Schwoch, Travis; Choi, Hee Jae; Kraus, Nina

    2013-01-01

    Aging results in a loss of sensory function, and the effects of hearing impairment can be especially devastating due to reduced communication ability. Older adults with hearing loss report that speech, especially in noisy backgrounds, is uncomfortably loud yet unclear. Hearing loss results in an unbalanced neural representation of speech: the slowly-varying envelope is enhanced, dominating representation in the auditory pathway and perceptual salience at the cost of the rapidly-varying fine s...

  18. On dominator colorings in graphs

    Indian Academy of Sciences (India)

    A dominator coloring of a graph G is a proper coloring of G in which every vertex dominates every vertex of at least one color class. The minimum number of colors required for a dominator coloring of G is called the dominator chromatic number of G and is denoted by χd(G). In this paper we present several results on graphs ...
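The definitions in this record are concrete enough to check by brute force. Below is a minimal Python sketch (not from the paper) that computes the dominator chromatic number χd(G) of a small graph, assuming the usual closed-neighborhood notion of domination; the example graphs are arbitrary choices:

```python
from itertools import product

def is_dominator_coloring(adj, coloring):
    """Proper coloring in which every vertex dominates some entire color class."""
    n = len(adj)
    # proper coloring: adjacent vertices must receive different colors
    for u in range(n):
        for v in adj[u]:
            if coloring[u] == coloring[v]:
                return False
    classes = {}
    for v, c in enumerate(coloring):
        classes.setdefault(c, set()).add(v)
    # each vertex's closed neighborhood must contain at least one full color class
    for v in range(n):
        closed = set(adj[v]) | {v}
        if not any(cls <= closed for cls in classes.values()):
            return False
    return True

def dominator_chromatic_number(adj):
    """Smallest k admitting a dominator coloring (exhaustive search)."""
    n = len(adj)
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if is_dominator_coloring(adj, coloring):
                return k
    return n

# adjacency lists for the path P4 (0-1-2-3) and the cycle C4
p4 = [[1], [0, 2], [1, 3], [2]]
c4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
# dominator_chromatic_number(p4) == 3, dominator_chromatic_number(c4) == 2
```

The exhaustive search is exponential and only suitable for tiny graphs, but it makes the interplay between properness and domination in the definition explicit.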

  19. The Disfluent Speech of Bilingual Spanish–English Children: Considerations for Differential Diagnosis of Stuttering

    Science.gov (United States)

    Bedore, Lisa M.; Ramos, Daniel

    2015-01-01

    Purpose The primary purpose of this study was to describe the frequency and types of speech disfluencies that are produced by bilingual Spanish–English (SE) speaking children who do not stutter. The secondary purpose was to determine whether their disfluent speech is mediated by language dominance and/or language produced. Method Spanish and English narratives (a retell and a tell in each language) were elicited and analyzed relative to the frequency and types of speech disfluencies produced. These data were compared with the monolingual English-speaking guidelines for differential diagnosis of stuttering. Results The mean frequency of stuttering-like speech behaviors in the bilingual SE participants ranged from 3% to 22%, exceeding the monolingual English standard of 3 per 100 words. There was no significant frequency difference in stuttering-like or non-stuttering-like speech disfluency produced relative to the child's language dominance. There was a significant difference relative to the language the child was speaking; all children produced significantly more stuttering-like speech disfluencies in Spanish than in English. Conclusion Results demonstrate that the disfluent speech of bilingual SE children should be carefully considered relative to the complex nature of bilingualism. PMID:25215876

  20. Cholinergic Potentiation and Audiovisual Repetition-Imitation Therapy Improve Speech Production and Communication Deficits in a Person with Crossed Aphasia by Inducing Structural Plasticity in White Matter Tracts

    Directory of Open Access Journals (Sweden)

    Marcelo L. Berthier

    2017-06-01

    Full Text Available Donepezil (DP), a cognitive-enhancing drug targeting the cholinergic system, combined with massed sentence repetition training, augmented and speeded up recovery of speech production deficits in patients with chronic conduction aphasia and extensive left hemisphere infarctions (Berthier et al., 2014). Nevertheless, a still unsettled question is whether such improvements correlate with restorative structural changes in gray matter and white matter pathways mediating speech production. In the present study, we used pharmacological magnetic resonance imaging to study treatment-induced brain changes in gray matter and white matter tracts in a right-handed male with chronic conduction aphasia and a right subcortical lesion (crossed aphasia). A single-patient, open-label multiple-baseline design incorporating two different treatments and two post-treatment evaluations was used. The patient received an initial dose of DP (5 mg/day), which was maintained during 4 weeks and then titrated up to 10 mg/day and administered alone (without aphasia therapy) during 8 weeks (Endpoint 1). Thereafter, the drug was combined with an audiovisual repetition-imitation therapy (Look-Listen-Repeat, LLR) during 3 months (Endpoint 2). Language evaluations, diffusion weighted imaging (DWI), and voxel-based morphometry (VBM) were performed at baseline and at both endpoints in JAM and once in 21 healthy control males. Treatment with DP alone and combined with LLR therapy induced marked improvement in aphasia and communication deficits as well as in selected measures of connected speech production and phrase repetition. The obtained gains in speech production remained well above baseline scores even 4 months after ending combined therapy. Longitudinal DWI showed structural plasticity in the right frontal aslant tract and direct segment of the arcuate fasciculus with both interventions.
    VBM revealed no structural changes in other white matter tracts or in cortical areas linked by these

  1. Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech.

    Science.gov (United States)

    Broderick, Michael P; Anderson, Andrew J; Di Liberto, Giovanni M; Crosse, Michael J; Lalor, Edmund C

    2018-03-05

    People routinely hear and understand speech at rates of 120-200 words per minute [1, 2]. Thus, speech comprehension must involve rapid, online neural mechanisms that process words' meanings in an approximately time-locked fashion. However, electrophysiological evidence for such time-locked processing has been lacking for continuous speech. Although valuable insights into semantic processing have been provided by the "N400 component" of the event-related potential [3-6], this literature has been dominated by paradigms using incongruous words within specially constructed sentences, with less emphasis on natural, narrative speech comprehension. Building on the discovery that cortical activity "tracks" the dynamics of running speech [7-9] and psycholinguistic work demonstrating [10-12] and modeling [13-15] how context impacts on word processing, we describe a new approach for deriving an electrophysiological correlate of natural speech comprehension. We used a computational model [16] to quantify the meaning carried by words based on how semantically dissimilar they were to their preceding context and then regressed this measure against electroencephalographic (EEG) data recorded from subjects as they listened to narrative speech. This produced a prominent negativity at a time lag of 200-600 ms on centro-parietal EEG channels, characteristics common to the N400. Applying this approach to EEG datasets involving time-reversed speech, cocktail party attention, and audiovisual speech-in-noise demonstrated that this response was very sensitive to whether or not subjects understood the speech they heard. These findings demonstrate that, when successfully comprehending natural speech, the human brain responds to the contextual semantic content of each word in a relatively time-locked fashion. Copyright © 2018 Elsevier Ltd. All rights reserved.
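The regression approach described in this record, an impulse train of word-level semantic dissimilarity regressed against EEG at multiple time lags, resembles a temporal response function analysis. The following numpy sketch illustrates the idea on simulated data; the sampling rate, kernel shape, and ridge parameter are arbitrary assumptions for the sketch, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 64                       # assumed EEG sampling rate (Hz)
n = fs * 60                   # one minute of simulated data
n_words = 150

# stimulus: impulse train carrying semantic dissimilarity at each word onset
stim = np.zeros(n)
onsets = rng.choice(n, n_words, replace=False)
stim[onsets] = rng.random(n_words)

# simulate EEG as the stimulus convolved with an N400-like kernel plus noise
lags = np.arange(fs)                                  # lags 0 .. ~1000 ms
kernel = -np.exp(-((lags / fs - 0.4) ** 2) / 0.01)    # negativity near 400 ms
eeg = np.convolve(stim, kernel)[:n] + 0.1 * rng.standard_normal(n)

# lagged design matrix and ridge-regularized estimate of the response function
X = np.column_stack([np.roll(stim, lag) for lag in lags])
w = np.linalg.solve(X.T @ X + 1.0 * np.eye(len(lags)), X.T @ eeg)
# the recovered w shows a negative deflection around the 400 ms lag
```

Regressing a word-feature impulse train against the continuous EEG in this lagged fashion is what lets the method operate on natural, running speech rather than isolated incongruous sentences.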

  2. Perfect secure domination in graphs

    Directory of Open Access Journals (Sweden)

    S.V. Divya Rashmi

    2017-07-01

    Full Text Available Let $G=(V,E)$ be a graph. A subset $S$ of $V$ is a dominating set of $G$ if every vertex in $V\setminus S$ is adjacent to a vertex in $S$. A dominating set $S$ is called a secure dominating set if for each $v\in V\setminus S$ there exists $u\in S$ such that $v$ is adjacent to $u$ and $S_1=(S\setminus\{u\})\cup\{v\}$ is a dominating set. If further the vertex $u\in S$ is unique, then $S$ is called a perfect secure dominating set. The minimum cardinality of a perfect secure dominating set of $G$ is called the perfect secure domination number of $G$ and is denoted by $\gamma_{ps}(G)$. In this paper we initiate a study of this parameter and present several basic results.
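As an illustration of these definitions (the code is not from the paper), a brute-force search can compute the perfect secure domination number for small graphs; the helper names and example graphs below are arbitrary:

```python
from itertools import combinations

def is_dominating(adj, S):
    """Every vertex is in S or has a neighbor in S."""
    return all(v in S or any(u in S for u in adj[v]) for v in range(len(adj)))

def defenders(adj, S, v):
    # guards u in S adjacent to v for which the swap (S \ {u}) ∪ {v} still dominates
    return [u for u in adj[v]
            if u in S and is_dominating(adj, (S - {u}) | {v})]

def is_perfect_secure_dominating(adj, S):
    # dominating, and every outside vertex has exactly one valid defender
    return is_dominating(adj, S) and all(
        len(defenders(adj, S, v)) == 1 for v in range(len(adj)) if v not in S)

def gamma_ps(adj):
    """Brute-force perfect secure domination number."""
    n = len(adj)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if is_perfect_secure_dominating(adj, set(S)):
                return k

# paths P2 (one edge) and P3 (0-1-2) as adjacency lists
p2 = [[1], [0]]
p3 = [[1], [0, 2], [1]]
# gamma_ps(p2) == 1, gamma_ps(p3) == 2
```

For P3, {0, 2} is secure but not perfect secure (the middle vertex has two valid defenders), while {0, 1} is perfect secure, which is why uniqueness of the defender matters in the definition.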

  3. Sensory preference in speech production revealed by simultaneous alteration of auditory and somatosensory feedback

    Science.gov (United States)

    Lametti, Daniel R.; Nasir, Sazzad M.; Ostry, David J.

    2012-01-01

    The idea that humans learn and maintain accurate speech by carefully monitoring auditory feedback is widely held. But this view neglects the fact that auditory feedback is highly correlated with somatosensory feedback during speech production. Somatosensory feedback from speech movements could be a primary means by which cortical speech areas monitor the accuracy of produced speech. We tested this idea by placing the somatosensory and auditory systems in competition during speech motor learning. To do this, we combined two speech learning paradigms to simultaneously alter somatosensory and auditory feedback in real-time as subjects spoke. Somatosensory feedback was manipulated by using a robotic device that altered the motion path of the jaw. Auditory feedback was manipulated by changing the frequency of the first formant of the vowel sound and playing back the modified utterance to the subject through headphones. The amount of compensation for each perturbation was used as a measure of sensory reliance. All subjects were observed to correct for at least one of the perturbations, but auditory feedback was not dominant. Indeed, some subjects showed a stable preference for either somatosensory or auditory feedback during speech. PMID:22764242

  4. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  5. Total Domination Versus Paired-Domination in Regular Graphs

    Directory of Open Access Journals (Sweden)

    Cyman Joanna

    2018-05-01

    Full Text Available A subset S of vertices of a graph G is a dominating set of G if every vertex not in S has a neighbor in S, while S is a total dominating set of G if every vertex has a neighbor in S. If S is a dominating set with the additional property that the subgraph induced by S contains a perfect matching, then S is a paired-dominating set. The domination number, denoted γ(G), is the minimum cardinality of a dominating set of G, while the minimum cardinalities of a total dominating set and paired-dominating set are the total domination number, γt(G), and the paired-domination number, γpr(G), respectively. For k ≥ 2, let G be a connected k-regular graph. It is known [Schaudt, Total domination versus paired domination, Discuss. Math. Graph Theory 32 (2012) 435–447] that γpr(G)/γt(G) ≤ 2k/(k+1). In the special case when k = 2, we observe that γpr(G)/γt(G) ≤ 4/3, with equality if and only if G ≅ C5. When k = 3, we show that γpr(G)/γt(G) ≤ 3/2, with equality if and only if G is the Petersen graph. More generally for k ≥ 2, if G has girth at least 5 and satisfies γpr(G)/γt(G) = 2k/(k+1), then we show that G is a diameter-2 Moore graph. As a consequence of this result, we prove that for k ≥ 2 and k ≠ 57, if G has girth at least 5, then γpr(G)/γt(G) ≤ 2k/(k+1), with equality if and only if k = 2 and G ≅ C5 or k = 3 and G is the Petersen graph.
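The k = 2 case in this record is small enough to verify directly: for the cycle C5, brute force gives γt(C5) = 3 and γpr(C5) = 4, so γpr/γt = 4/3. A short illustrative sketch (not from the paper):

```python
from itertools import combinations

def has_perfect_matching(adj, S):
    """Backtracking perfect matching in the subgraph induced by S."""
    def match(rem):
        if not rem:
            return True
        v = rem[0]
        return any(u in adj[v] and match([w for w in rem if w not in (v, u)])
                   for u in rem[1:])
    S = list(S)
    return len(S) % 2 == 0 and match(S)

def gamma_t(adj):
    # total domination: every vertex (inside or outside S) has a neighbor in S
    n = len(adj)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if all(set(S) & set(adj[v]) for v in range(n)):
                return k

def gamma_pr(adj):
    # paired domination: a dominating set inducing a perfect matching
    n = len(adj)
    for k in range(2, n + 1, 2):
        for S in combinations(range(n), k):
            Sset = set(S)
            if (all(v in Sset or Sset & set(adj[v]) for v in range(n))
                    and has_perfect_matching(adj, Sset)):
                return k

# the 5-cycle C5 as an adjacency dict
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
# gamma_t(c5) == 3, gamma_pr(c5) == 4, giving the extremal ratio 4/3
```

Any two vertices of C5 leave some vertex without a neighbor in S (or induce no edge), so γpr must jump to 4, which is exactly the equality case the abstract singles out.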

  6. Coevolution of Human Speech and Trade

    NARCIS (Netherlands)

    Horan, R.D.; Bulte, E.H.; Shogren, J.F.

    2008-01-01

    We propose a paleoeconomic coevolutionary explanation for the origin of speech in modern humans. The coevolutionary process, in which trade facilitates speech and speech facilitates trade, gives rise to multiple stable trajectories. While a `trade-speech¿ equilibrium is not an inevitable outcome for

  7. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  8. The "Checkers" Speech and Televised Political Communication.

    Science.gov (United States)

    Flaningam, Carl

    Richard Nixon's 1952 "Checkers" speech was an innovative use of television for political communication. Like television news itself, the campaign fund crisis behind the speech can be thought of in the same terms as other television melodrama, with the speech serving as its climactic episode. The speech adapted well to television because…

  9. Predicting masking release of lateralized speech

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; MacDonald, Ewen; Dau, Torsten

    2016-01-01

    Locsei et al. (2015) [Speech in Noise Workshop, Copenhagen, 46] measured speech reception thresholds (SRTs) in anechoic conditions where the target speech and the maskers were lateralized using interaural time delays. The maskers were speech-shaped noise (SSN) and reversed babble with 2, 4, or 8...

  10. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    ... coughing, gagging, and refusing foods. Specialists in Speech-Language Therapy: speech-language pathologists (SLPs), often informally known as ...

  11. Neural and Behavioral Mechanisms of Clear Speech

    Science.gov (United States)

    Luque, Jenna Silver

    2017-01-01

    Clear speech is a speaking style that has been shown to improve intelligibility in adverse listening conditions, for various listener and talker populations. Clear-speech phonetic enhancements include a slowed speech rate, expanded vowel space, and expanded pitch range. Although clear-speech phonetic enhancements have been demonstrated across a…

  12. Does the individual adaptation of standardized speech paradigms for clinical functional magnetic resonance imaging (fMRI) affect the localization of the language-dominant hemisphere and of Broca's and Wernicke's areas? Beeinflusst die individuelle Anpassung standardisierter Sprachparadigmen fuer die klinische funktionelle Magnetresonanztomographie (fMRT) die Lokalisation der sprachdominanten Hemisphaere, des Broca- und des Wernicke-Sprachzentrums?

    Energy Technology Data Exchange (ETDEWEB)

    Konrad, F.; Nennig, E.; Kress, B.; Sartor, K.; Stippich, C. [Abteilung Neuroradiologie, Neurologische Klinik, Universitaetsklinikum Heidelberg (Germany); Ochmann, H. [Neurochirurgische Klinik, Universitaetsklinikum Heidelberg (Germany)

    2005-03-01

    Purpose: Functional magnetic resonance imaging (fMRI) localizes Broca's area (B) and Wernicke's area (W) and the hemisphere dominant for language. In clinical fMRI, adapting the stimulation paradigms to each patient's individual cognitive capacity is crucial for diagnostic success. To interpret clinical fMRI findings correctly, we studied the effect of varying frequency and number of stimuli on functional localization, determination of language dominance, and BOLD signals. Materials and Methods: Ten volunteers (VP) were investigated at 1.5 Tesla during visually triggered sentence generation using a standardized block design. In four different measurements, the stimuli were presented to each VP with frequencies of (1/1) s, (1/2) s, (1/3) s, and (1/6) s. Results: The functional localizations and the correlations of the measured BOLD signals to the applied hemodynamic reference function (r) were almost independent of the frequency and number of stimuli in both hemispheres, whereas the relative BOLD signal changes (ΔS) in B and W increased with the stimulation rate, which also changed the lateralization indices. The strongest BOLD activations were achieved with the highest stimulation rate or with the maximum language production task, respectively. Conclusion: The adaptation of language paradigms necessary in clinical fMRI does not alter the functional localizations but changes the BOLD signals and language lateralization, which should not be attributed to the underlying brain pathology. (orig.)

  13. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2000-10-19

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  14. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2004-04-20

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  15. Theater, Speech, Light

    Directory of Open Access Journals (Sweden)

    Primož Vitez

    2011-07-01

    Full Text Available This paper considers a medium as a substantial translator: an intermediary between the producers and receivers of a communicational act. A medium is a material support to the spiritual potential of human sources. If the medium is a support to meaning, then the relations between different media can be interpreted as a space for making sense of these meanings, a generator of sense: it means that the interaction of substances creates an intermedial space that conceives of a contextualization of specific meaningful elements in order to combine them into the sense of a communicational intervention. The theater itself is multimedia. A theatrical event is a communicational act based on a combination of several autonomous structures: text, scenography, light design, sound, directing, literary interpretation, speech, and, of course, the one that contains all of these: the actor in a human body. The actor is a physical and symbolic, anatomic, and emblematic figure in the synesthetic theatrical act because he reunites in his body all the essential principles and components of theater itself. The actor is an audio-visual being, made of kinetic energy, speech, and human spirit. The actor’s body, as a source, instrument, and goal of the theater, becomes an intersection of sound and light. However, theater as intermedial art is no intermediate practice; it must be seen as interposing bodies between conceivers and receivers, between authors and auditors. The body is not self-evident; the body in contemporary art forms is being redefined as a privilege. The art needs bodily dimensions to explore the medial qualities of substances: because it is alive, it returns to studying biology. The fact that theater is an archaic art form is also the purest promise of its future.

  16. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  17. Computational neuroanatomy of speech production.

    Science.gov (United States)

    Hickok, Gregory

    2012-01-05

    Speech production has been studied predominantly from within two traditions, psycholinguistics and motor control. These traditions have rarely interacted, and the resulting chasm between these approaches seems to reflect a level of analysis difference: whereas motor control is concerned with lower-level articulatory control, psycholinguistics focuses on higher-level linguistic processing. However, closer examination of both approaches reveals a substantial convergence of ideas. The goal of this article is to integrate psycholinguistic and motor control approaches to speech production. The result of this synthesis is a neuroanatomically grounded, hierarchical state feedback control model of speech production.

  18. Visual speech influences speech perception immediately but not automatically.

    Science.gov (United States)

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  19. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper provides an interface between the machine translation and speech synthesis systems for converting English speech to Tamil text in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation, and text-to-speech synthesis. Many procedures for the integration of speech recognition and machine translation have been proposed, but the speech synthesis component has not yet received the same attention. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation to investigate the impact of the speech synthesis and machine translation components and of their integration. Here we implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative, syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this system investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
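The three-module pipeline described in this record (ASR → MT → TTS) can be sketched as function composition. The components below are toy stand-ins, e.g., dictionary lookup in place of hybrid MT and a token list in place of syllable concatenation with AANN prosody; none of it is the authors' implementation:

```python
def recognize(audio):
    # stand-in ASR: a real system decodes audio with acoustic + language models
    return audio["transcript"]

def translate(text, lexicon):
    # stand-in for the hybrid (rule-based + statistical) MT module:
    # word-for-word lookup with pass-through for unknown words
    return " ".join(lexicon.get(w, w) for w in text.lower().split())

def synthesize(text):
    # stand-in TTS: a real system concatenates syllable units and predicts
    # prosody; here we just return the tokens that would be rendered
    return {"text": text, "units": text.split()}

def speech_to_speech(audio, lexicon):
    # ASR -> MT -> TTS composition: the integration points studied in the paper
    return synthesize(translate(recognize(audio), lexicon))

# toy English -> Tamil (transliterated) lexicon, purely illustrative
lexicon = {"hello": "vanakkam", "thanks": "nandri"}
out = speech_to_speech({"transcript": "Hello"}, lexicon)
# out["text"] == "vanakkam"
```

The paper's finding, that synthesis quality is bounded by the fluency of the translated text, falls directly out of this composition: whatever `translate` emits is all that `synthesize` ever sees.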

  20. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Spatial localization of speech segments

    DEFF Research Database (Denmark)

    Karlsen, Brian Lykkegaard

    1999-01-01

    Much is known about human localization of simple stimuli like sinusoids, clicks, broadband noise, and narrowband noise in quiet. Less is known about human localization in noise. Even less is known about localization of speech, and very few previous studies have reported data from localization... of speech in noise. This study attempts to answer the question: "Are there certain features of speech which have an impact on the human ability to determine the spatial location of a speaker in the horizontal plane under adverse noise conditions?". The study consists of an extensive literature survey... the task of the experiment. The psychoacoustical experiment used naturally spoken Danish consonant-vowel combinations as targets presented in diffuse speech-shaped noise at a peak SNR of -10 dB. The subjects were normal-hearing persons. The experiment took place in an anechoic chamber where eight

  2. Comparison of speech and language therapy techniques for speech problems in Parkinson's disease

    OpenAIRE

    Herd, CP; Tomlinson, CL; Deane, KHO; Brady, MC; Smith, CH; Sackley, CM; Clarke, CE

    2012-01-01

    Patients with Parkinson's disease commonly suffer from speech and voice difficulties such as impaired articulation and reduced loudness. Speech and language therapy (SLT) aims to improve the intelligibility of speech with behavioural treatment techniques or instrumental aids.

  3. Censored: Whistleblowers and impossible speech

    OpenAIRE

    Kenny, Kate

    2017-01-01

    What happens to a person who speaks out about corruption in their organization, and finds themselves excluded from their profession? In this article, I argue that whistleblowers experience exclusions because they have engaged in ‘impossible speech’, that is, a speech act considered to be unacceptable or illegitimate. Drawing on Butler’s theories of recognition and censorship, I show how norms of acceptable speech working through recruitment practices, alongside the actions of colleagues, can ...

  4. Identifying Deceptive Speech Across Cultures

    Science.gov (United States)

    2016-06-25

    collection of deceptive and non-deceptive speech recorded from interviews between native speakers of Mandarin and of English instructed to answer... deceptive and non-deceptive speech recorded from interviews between native speakers of Mandarin and of English, and are currently completing the use of this data to

  5. Dominance Hierarchies in Young Children

    Science.gov (United States)

    Edelman, Murray S.; Omark, Donald R.

    1973-01-01

    This study uses the ethological approach of seeking species characteristics and phylogenetic continuities in an investigation of human behavior. Among primates a striking consistency is the presence of some form of dominance hierarchy in many species. The present study examines peer group dominance hierarchies as they are perceived by children in…

  6. Dominant Leadership Style in Schools

    Science.gov (United States)

    Rajbhandari, Mani Man Singh

    2006-01-01

    The dominant leadership style is defined by the situation and the kind of organizational environment and climate. This, however, does not sufficiently define the leadership qualities in school organizations. There are other factors which also determine the dominant leadership style, which are the traits and style, teachers commitments, pass out…

    Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 

  8. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based on this model...

  9. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech Recognition describes approaches to improve automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first-ranked Arabic dialect in terms of number of speakers, and a high-quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...

  10. Neural bases of accented speech perception

    Directory of Open Access Journals (Sweden)

    Patti eAdank

    2015-10-01

    Full Text Available The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Adank, Evans, Stuart-Smith, & Scott, 2009; Floccia, Goslin, Girard, & Konopczynski, 2006). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regards to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioural aspects of accent processing.

  11. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). The simplified form of the STI, the rapid speech transmission index (RASTI), was also considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, data were needed on the possible differences between these methods resulting from the calculation scheme and from the measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
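The shared calculation scheme described in this abstract (a per-band apparent speech-to-noise ratio, clipped to a finite range and combined with a frequency weighting) can be sketched as a toy index. The band weights and the ±15 dB clipping range are illustrative assumptions in the style of the SII, not the values prescribed by the STI or SII standards:

```python
def band_snr_index(speech_db, noise_db, weights):
    """Toy STI/SII-style index: apparent per-band SNR, clipped to
    [-15, +15] dB, mapped to [0, 1], then combined with
    band-importance weights (illustrative values, not the standards')."""
    total = 0.0
    for s, n, w in zip(speech_db, noise_db, weights):
        snr = max(-15.0, min(15.0, s - n))  # apparent SNR, clipped
        total += w * (snr + 15.0) / 30.0    # normalise the band to [0, 1]
    return total / sum(weights)

# Speech level equal to noise level in every band -> 0 dB SNR -> index 0.5
print(band_snr_index([60, 60, 60], [60, 60, 60], [1, 1, 1]))  # 0.5
```

Raising the speech level 15 dB or more above the noise in every band drives the index to 1.0, and dropping it 15 dB or more below drives it to 0.0, mirroring how the real indices saturate.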

  12. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  13. HATE BENEATH THE COUNTER SPEECH? A QUALITATIVE CONTENT ANALYSIS OF USER COMMENTS ON YOUTUBE RELATED TO COUNTER SPEECH VIDEOS

    Directory of Open Access Journals (Sweden)

    Julian Ernst

    2017-03-01

    Full Text Available The odds of stumbling over extremist material in the internet are high. Counter speech videos, such as those of the German campaign Begriffswelten Islam (Concepts of Islam; Bundeszentrale für politische Bildung, 2015a) published on YouTube, offer alternative perspectives and democratic ideas to counteract extremist content. YouTube users may discuss these videos in the comment sections below the video. Yet, it remains open which topics these users bring up in their comments. Moreover, it is unknown how far user comments in this context may promote hate speech—the very opposite of what counter speech intends to evoke. By applying a qualitative content analysis to a randomly selected sample of user comments which appeared beneath the counter speech videos of Concepts of Islam, we found that comments dominated which dealt with devaluating prejudices and stereotypes towards Muslims and/or Islam. However, we also discovered that users to a large extent discussed the content of the videos. Moreover, we identified user comments which hint at hateful speech, either in the comments themselves or in the discourse the comments are embedded in. Based on these results, we discuss implications for researchers, practitioners and security agencies.

  14. A Note on Isolate Domination

    OpenAIRE

    Sahul Hamid, Ismail; Balamurugan, S; Navaneethakrishnan, A

    2016-01-01

    A set $S$ of vertices of a graph $G$ such that $\left\langle S\right\rangle$ has an isolated vertex is called an \emph{isolate set} of $G$. The minimum and maximum cardinality of a maximal isolate set are called the \emph{isolate number} $i_0(G)$ and the \emph{upper isolate number} $I_0(G)$ respectively. An isolate set that is also a dominating set (an irredundant set) is an \emph{isolate dominating set} (an \emph{isolate irredundant set}). The \emph{isolate domination number} $\gamma_0(G...

  15. Domination criticality in product graphs

    Directory of Open Access Journals (Sweden)

    M.R. Chithra

    2015-07-01

    Full Text Available A connected dominating set is an important notion and has many applications in routing and management of networks. Graph products have turned out to be a good model of interconnection networks. This motivated us to study the Cartesian product of graphs G with connected domination number γc(G)=2,3 and characterize such graphs. Also, we characterize the k−γ-vertex (edge) critical graphs and k−γc-vertex (edge) critical graphs for k=2,3, where γ denotes the domination number of G. We also discuss the vertex criticality in grids.
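The objects studied in this record can be reproduced on toy instances. A brute-force sketch (my own encoding, with exponential search, so small graphs only) builds a Cartesian product and computes its connected domination number γc:

```python
from itertools import combinations, product

def cartesian_product_graph(nodes1, edges1, nodes2, edges2):
    """Cartesian product G1 x G2: (u1, u2) ~ (v1, v2) iff one coordinate
    is equal and the other pair is adjacent in its factor graph."""
    nodes = list(product(nodes1, nodes2))
    e1 = set(map(frozenset, edges1))
    e2 = set(map(frozenset, edges2))
    edges = set()
    for (u1, u2), (v1, v2) in combinations(nodes, 2):
        if (u1 == v1 and frozenset((u2, v2)) in e2) or \
           (u2 == v2 and frozenset((u1, v1)) in e1):
            edges.add(frozenset(((u1, u2), (v1, v2))))
    return nodes, edges

def connected_domination_number(nodes, edges):
    """gamma_c(G) by brute force: smallest dominating set whose induced
    subgraph is connected (exponential; small graphs only)."""
    adj = {v: set() for v in nodes}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    def dominates(S):
        return all(v in S or adj[v] & S for v in nodes)
    def connected(S):
        S = set(S)
        seen = {next(iter(S))}
        stack = list(seen)
        while stack:  # BFS/DFS restricted to S
            for w in adj[stack.pop()] & S - seen:
                seen.add(w)
                stack.append(w)
        return seen == S
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            if dominates(set(S)) and connected(S):
                return k

# gamma_c of P2 x P3 (the 2x3 grid graph) is 2, e.g. {(0, 1), (1, 1)}
n, e = cartesian_product_graph([0, 1], [(0, 1)], [0, 1, 2], [(0, 1), (1, 2)])
print(connected_domination_number(n, e))  # 2
```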

  16. Dominantly inherited cystoid macular edema.

    Science.gov (United States)

    Fishman, G A; Goldberg, M F; Trautmann, J C

    1979-01-01

    Four patients of Greek ancestry had dominantly inherited cystoid macular edema. Characteristics of this syndrome include the following: an early onset and prolonged course of cystoid changes in the macula, followed by atrophy of the macula in later stages. Some patients also show leakage of fluorescein from the optic disc capillaries, subnormal EOG Lp/Dt ratios, elevated rod dark adaptation thresholds, red-green and blue-yellow color deficiencies, normal ERG findings, hyperopia, peripheral pigmentary retinopathy, and vitreous opacities. Dominantly inherited cystoid macular edema is a distinct genetic trait among the dominantly inherited macular dystrophies.

  17. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  18. Contextual variability during speech-in-speech recognition.

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R

    2014-07-01

    This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task in either "pure" background conditions in which all trials had either English or Dutch background babble or in mixed background conditions in which the background language varied across trials (i.e., a mix of English and Dutch or one of these background languages mixed with quiet trials). This design allowed the authors to compare performance on identical trials across pure and mixed conditions. The data reveal that speech-in-speech recognition is sensitive to contextual variation in terms of the target-background language (mis)match depending on the relative ease/difficulty of the test trials in relation to the surrounding trials.

  19. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has presented an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...
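The pipeline sketched in this abstract (extract a frame-level feature from the noisy signal, then decide speech/non-speech by a fixed rule) can be illustrated with a minimal energy-based detector. The frame length, sample rate, and the -30 dB relative threshold are arbitrary illustrative choices, not one of the chapter's methods:

```python
import numpy as np

def energy_vad(signal, frame_len=160, threshold_db=-30.0):
    """Toy voice activity detector: frame-wise log-energy compared
    against a fixed threshold relative to the loudest frame."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return energy > energy.max() + threshold_db  # True = speech frame

rng = np.random.default_rng(0)
noise = 0.001 * rng.standard_normal(1600)  # 10 quiet frames
tone = 0.5 * np.sin(2 * np.pi * 440 / 8000 * np.arange(1600))  # 10 loud frames
decisions = energy_vad(np.concatenate([noise, tone]))
print(decisions)  # first 10 frames False, last 10 frames True
```

Real VADs replace the single energy feature with sets of discriminative features (spectral, long-term, statistical) and the fixed threshold with adaptive or model-based rules.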

  20. Social Dominance and Sexual Orientation

    OpenAIRE

    Dickins, Thomas E.; Sergeant, Mark J.T.

    2008-01-01

    Heterosexual males are reported to display higher levels of physical aggression and lower levels of empathy than homosexual males. A characteristic linked to both aggression and empathy is social dominance orientation (SDO). A significant sex difference has been reported for SDO, with heterosexual males scoring higher than heterosexual females. The precise relationship between dominance and aggression is currently contested. Given the association between SDO, aggression and empathy, and the d...

  1. Musical functioning, speech lateralization and the amusias.

    Science.gov (United States)

    Berman, I W

    1981-01-17

    Amusia is a condition in which musical capacity is impaired by organic brain disease. Music is in a sense a language and closely resembles speech, both executively and receptively. For musical functioning, rhythmic sense and sense of sounds are essential. Musical ability resides largely in the right (non-dominant) hemisphere. Tests have been devised for the assessment of musical capabilities by Dorgeuille, Grison and Wertheim. Classification of amusia includes vocal amusia, instrumental amusia, musical agraphia, musical amnesia, disorders of rhythm, and receptive amusia. Amusia, like aphasia, has clinical significance, and the two show remarkable similarities and often co-exist. Usually executive amusia occurs with executive aphasia and receptive amusia with receptive aphasia, but amusia can exist without aphasia. Severe executive aphasics can sometimes sing with text (words), and this ability is used in the treatment of aphasia. As with aphasia, there is a correlation between the type of amusia and the site of the lesion. Thus in executive amusia, the lesion generally occurs in the frontal lobe. In receptive amusia, the lesion is mainly in the temporal lobe. If aphasia is also present, the lesion will be in the left (dominant) hemisphere.

  2. A note on isolate domination

    Directory of Open Access Journals (Sweden)

    Ismail Sahul Hamid

    2016-04-01

    Full Text Available A set $S$ of vertices of a graph $G$ such that $\left\langle S\right\rangle$ has an isolated vertex is called an \emph{isolate set} of $G$. The minimum and maximum cardinality of a maximal isolate set are called the \emph{isolate number} $i_0(G)$ and the \emph{upper isolate number} $I_0(G)$, respectively. An isolate set that is also a dominating set (an irredundant set) is an \emph{isolate dominating set} (an \emph{isolate irredundant set}). The \emph{isolate domination number} $\gamma_0(G)$ and the \emph{upper isolate domination number} $\Gamma_0(G)$ are, respectively, the minimum and maximum cardinality of a minimal isolate dominating set, while the \emph{isolate irredundance number} $ir_0(G)$ and the \emph{upper isolate irredundance number} $IR_0(G)$ are the minimum and maximum cardinality of a maximal isolate irredundant set of $G$. The notion of isolate domination was introduced in \cite{sb} and the remaining parameters were introduced in \cite{isrn}. This paper further extends the study of these parameters.
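The definitions above can be checked mechanically on small graphs. A brute-force sketch (my own encoding; exponential search, so toy instances only) computes the isolate domination number $\gamma_0(G)$:

```python
from itertools import combinations

def isolate_domination_number(nodes, edges):
    """gamma_0(G) by brute force: minimum size of a dominating set S
    whose induced subgraph <S> has an isolated vertex."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    def dominating(S):
        return all(v in S or adj[v] & S for v in nodes)
    def has_isolate(S):
        # a vertex of S with no neighbour inside S is isolated in <S>
        return any(not (adj[v] & S) for v in S)
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            S = set(S)
            if dominating(S) and has_isolate(S):
                return k

# Path P4 (0-1-2-3): {1, 3} dominates and vertex 3 is isolated in <S>
print(isolate_domination_number([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))  # 2
```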

  3. Mandibular movement during speech of two related Latin languages.

    Science.gov (United States)

    Fontoira-Surís, Marilí; Lago, Laura; Da Silva, Luís; Santana-Mora, Urbano; Santana-Penín, Urbano; Mora, Maria J

    2016-01-01

    This study assessed kinesiographic recordings of jaw movements during reading of a text in Galician and in Spanish. Cross-sectional blind study. A homogeneous group of 25 healthy native Galician speakers with normal stomatognathic systems was studied. Intraborder lateral jaw movements and jaw movements during reading of Galician and Spanish texts were recorded in the frontal and parasagittal planes using a calibrated jaw-tracking device (kinesiograph). Although movements were similar in both languages, a greater retrusion of the jaw was shown for Spanish; moreover, a tendency exists for a left-side motion envelope in this sample of right-handed participants. This study supports the hypothesis that speech is controlled by the central nervous system rather than by peripheral factors and that hemispheric dominance influences the asymmetry of the speech envelope.

  4. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  5. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians' workplace learning and second from a case study on nurses' apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others' voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  6. Speech Recognition: Its Place in Business Education.

    Science.gov (United States)

    Szul, Linda F.; Bouder, Michele

    2003-01-01

    Suggests uses of speech recognition devices in the classroom for students with disabilities. Compares speech recognition software packages and provides guidelines for selection and teaching. (Contains 14 references.) (SK)

  7. Speech input interfaces for anaesthesia records

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Andersen, Henning Boje

    2009-01-01

    Speech recognition as a medical transcript tool is now common in hospitals and is steadily increasing...

  8. Modeling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Dau, Torsten

    2012-01-01

    by the normal as well as impaired auditory system. Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech. Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting the intelligibility of reverberant speech as well as noisy speech processed by spectral subtraction. However, the sEPSM cannot account for speech subjected to phase jitter, a condition in which the spectral structure of speech is destroyed, while the broadband temporal envelope is kept largely intact. In contrast...
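The SNRenv idea (compare the envelope modulation power of the speech with that of the noise, rather than the acoustic SNR) can be sketched in a deliberately reduced single-band form. The full sEPSM applies a modulation filterbank across several audio bands; the envelopes below are synthetic stand-ins:

```python
import numpy as np

def snr_env_db(env_speech, env_noise):
    """Simplified single-band SNRenv: ratio of the normalised envelope
    modulation (AC) power of speech to that of noise. The full sEPSM
    computes this per modulation filter and audio band."""
    def norm_mod_power(env):
        ac = env - env.mean()                      # modulation component
        return np.mean(ac ** 2) / env.mean() ** 2  # normalised env. power
    return 10 * np.log10(norm_mod_power(env_speech) / norm_mod_power(env_noise))

t = np.arange(8000) / 8000
speech_env = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)  # deep 4 Hz modulation
noise_env = 1 + 0.1 * np.sin(2 * np.pi * 4 * t)   # shallow residual modulation
print(round(snr_env_db(speech_env, noise_env)))   # 18
```

Deeply modulated speech against a weakly modulated noise floor yields a large positive SNRenv, which is the regime in which the model predicts high intelligibility.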

  9. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  10. The Role of Visual Speech Information in Supporting Perceptual Learning of Degraded Speech

    Science.gov (United States)

    Wayne, Rachel V.; Johnsrude, Ingrid S.

    2012-01-01

    Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a…

  11. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  12. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  13. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  14. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  15. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  16. Inner Speech's Relationship with Overt Speech in Poststroke Aphasia

    Science.gov (United States)

    Stark, Brielle C.; Geva, Sharon; Warburton, Elizabeth A.

    2017-01-01

    Purpose: Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech…

  17. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  18. THE ONTOGENESIS OF SPEECH DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. E. Braudo

    2017-01-01

    Full Text Available The purpose of this article is to acquaint the specialists working with children having developmental disorders with age-related norms for speech development. Many well-known linguists and psychologists studied speech ontogenesis (logogenesis). Speech is a higher mental function, which integrates many functional systems. Speech development in infants during the first months after birth is ensured by the innate hearing and emerging ability to fix the gaze on the face of an adult. Innate emotional reactions are also being developed during this period, turning into nonverbal forms of communication. At about 6 months a baby starts to pronounce some syllables; at 7–9 months – repeats various sound combinations pronounced by adults. At 10–11 months a baby begins to react to words addressed to him/her. The first words usually appear at an age of 1 year; this is the start of the stage of active speech development. At this time it is acceptable if a child confuses or rearranges sounds, distorts or misses them. By the age of 1.5 years a child begins to understand abstract explanations of adults. Significant vocabulary enlargement occurs between 2 and 3 years; grammatical structures of the language are being formed during this period (a child starts to use phrases and sentences). Preschool age (3–7 y. o.) is characterized by incorrect, but steadily improving pronunciation of sounds and phonemic perception. The vocabulary increases; abstract speech and retelling are being formed. Children over 7 y. o. continue to improve grammar, writing and reading skills. The described stages may not have strict age boundaries, since they depend not only on the environment, but also on the child's mental constitution, heredity and character.

  19. Speech and Language Therapy/Pathology: Perspectives on a Gendered Profession

    Science.gov (United States)

    Litosseliti, Lia; Leadbeater, Claire

    2013-01-01

    Background: The speech and language therapy/pathology (SLT/SLP) profession is characterized by extreme "occupational sex segregation", a term used to refer to persistently male- or female-dominated professions. Men make up only 2.5% of all SLTs in the UK, and a similar imbalance is found in other countries. Despite calls to increase…

  20. The Influence of Syllable Onset Complexity and Syllable Frequency on Speech Motor Control

    Science.gov (United States)

    Riecker, Axel; Brendel, Bettina; Ziegler, Wolfram; Erb, Michael; Ackermann, Hermann

    2008-01-01

    Functional imaging studies have delineated a "minimal network for overt speech production," encompassing mesiofrontal structures (supplementary motor area, anterior cingulate gyrus), bilateral pre- and postcentral convolutions, extending rostrally into posterior parts of the inferior frontal gyrus (IFG) of the language-dominant hemisphere, left…

  1. Hate speech, report 1. Research on the nature and extent of hate speech

    OpenAIRE

    Nadim, Marjan; Fladmoe, Audun

    2016-01-01

    The purpose of this report is to gather research-based knowledge concerning: • the extent of online hate speech • which groups in society are particularly subjected to online hate speech • who produces hate speech, and what motivates them Hate speech is commonly understood as any speech that is persecutory, degrading or discriminatory on grounds of the recipient’s minority group identity. To be defined as hate speech, the speech must be conveyed publicly or in the presence of others and be di...

  2. Acoustic characteristics of ataxic speech in Japanese patients with spinocerebellar degeneration (SCD).

    Science.gov (United States)

    Ikui, Yukiko; Tsukuda, Mamoru; Kuroiwa, Yoshiyuki; Koyano, Shigeru; Hirose, Hajime; Taguchi, Takahide

    2012-01-01

    In English- and German-speaking countries, ataxic speech is often described as showing scanning based on acoustic impressions. Although the term 'scanning' is generally considered to represent abnormal speech features including prosodic excess or insufficiency, no precise acoustic analysis of ataxic speech has been performed in Japanese-speaking patients. This raises the question of what is the most dominant acoustic characteristic of ataxic speech in Japanese subjects, particularly related to the perceptual impression of 'scanning'. The study was designed to investigate the nature of the speech characteristics of Japanese ataxic subjects, particularly 'scanning', by means of acoustic analysis. The study comprised 20 Japanese cases with spinocerebellar degeneration judged by neurologists to give a perceptual impression of scanning (ataxic group) and 20 age-matched normal healthy subjects (control group). Recordings of speech samples of Japanese test sentences were obtained from each subject. The recorded and digitized acoustic samples were analysed using 'Acoustic Core-8' (Arcadia Inc.). Sentence duration was significantly longer in the ataxic group as compared with the control group, indicating that the speaking rate was slower in the ataxic subjects. Segment duration remained consistent in both vowels and consonants in the control group as compared with the ataxic group. In particular, the duration of vowel segments, i.e. the nucleus of the Japanese mora, was significantly less variable in the control group, regardless of differences between subjects and segments, than in the ataxic group. In addition, the duration of phonemically long Japanese vowels was significantly shorter in the ataxic group. The results indicate that the perceptual impression of 'scanning' in Japanese ataxic cases derives mainly from the breakdown of isochrony, in terms of difficulty in keeping the length of Japanese vowel segments invariable during speech production. In

  3. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory

  4. Discriminative learning for speech recognition

    CERN Document Server

    He, Xiaodong

    2008-01-01

    In this book, we introduce the background and mainstream methods of probabilistic modeling and discriminative parameter optimization for speech recognition. The specific models treated in depth include the widely used exponential-family distributions and the hidden Markov model. A detailed study is presented on unifying the common objective functions for discriminative learning in speech recognition, namely maximum mutual information (MMI), minimum classification error, and minimum phone/word error. The unification is presented, with rigorous mathematical analysis, in a common rational-functio

  5. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Full Text Available Dereverberation is required in various speech processing applications such as hands-free telephony and voice-controlled systems, especially for signals recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  6. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced, speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  7. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    International Nuclear Information System (INIS)

    Holzrichter, J.F.; Ng, L.C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced, speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.
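The per-frame deconvolution described in the two records above can be illustrated with a regularized spectral division. This is only a sketch under stated assumptions, not the patent's actual procedure; the helper names `dft`, `idft`, and `deconvolve` and the regularization constant `eps` are choices made here:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (adequate for short frames).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    # Inverse DFT, returning the real part for real-valued signals.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def deconvolve(output_frame, excitation_frame, eps=1e-8):
    # Estimate a transfer function for one time frame by dividing spectra,
    # with Tikhonov-style regularization so near-zero excitation bins
    # do not blow up the quotient.
    O, E = dft(output_frame), dft(excitation_frame)
    H = [o * e.conjugate() / (abs(e) ** 2 + eps) for o, e in zip(O, E)]
    return idft(H)
```

With an impulse-like excitation, the recovered transfer function equals the observed frame, which is a quick sanity check of the division.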

  8. Speech Algorithm Optimization at 16 KBPS.

    Science.gov (United States)

    1980-09-30

    9. M. D. Paez and T. H. Glisson, "Minimum Mean-Squared-Error Quantization in Speech PCM and DPCM Systems," IEEE Trans. Communications, Vol. COM-20 ... IEEE Trans. Acoustics, Speech and Signal Processing, Vol. ASSP-27, June 1979. 13. N. S. Jayant, "Digital Coding of Speech Waveforms: PCM, DPCM, and DM

  9. Speech Segmentation Using Bayesian Autoregressive Changepoint Detector

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1998-12-01

    Full Text Available This paper is devoted to the study of the Bayesian autoregressive changepoint detector (BCD) and its use for speech segmentation. Results of applying the detector to autoregressive signals as well as to real speech are given. The basic properties of BCD are described and discussed. A novel two-step algorithm, consisting of cepstral analysis followed by BCD, is proposed for automatic speech segmentation.
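A minimal flavour of autoregressive changepoint detection can be sketched as a two-segment AR(1) fit that picks the split minimizing a log-variance cost. This is an illustrative simplification, not the Bayesian detector of the paper; `detect_changepoint`, `min_seg`, and the AR(1) order are assumptions made here:

```python
import math

def ar1_residual_var(x):
    # Fit an AR(1) coefficient by least squares and return the residual variance.
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1]) or 1e-12
    a = num / den
    resid = [x[t] - a * x[t - 1] for t in range(1, len(x))]
    return sum(r * r for r in resid) / max(len(resid), 1)

def detect_changepoint(x, min_seg=8):
    # Choose the split minimizing n1*log(var1) + n2*log(var2), a standard
    # two-segment likelihood-style criterion for AR changepoints.
    best_k, best_cost = None, float("inf")
    for k in range(min_seg, len(x) - min_seg):
        v1 = ar1_residual_var(x[:k]) or 1e-12
        v2 = ar1_residual_var(x[k:]) or 1e-12
        cost = k * math.log(v1) + (len(x) - k) * math.log(v2)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

On a signal whose amplitude jumps at sample 60, the detected split lands at or very near the true change.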

  10. Development of binaural speech transmission index

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Drullman, R.

    2006-01-01

    Although the speech transmission index (STI) is a well-accepted and standardized method for objective prediction of speech intelligibility in a wide range of environments and applications, it is essentially a monaural model. Advantages of binaural hearing to the intelligibility of speech are

  11. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  12. Regulation of speech in multicultural societies: introduction

    NARCIS (Netherlands)

    Maussen, M.; Grillo, R.

    2014-01-01

    What to do about speech which vilifies or defames members of minorities on the grounds of their ethnic or religious identity or their sexuality? How to respond to such speech, which may directly or indirectly cause harm, while taking into account the principle of free speech, has been much debated

  13. Speech Synthesis Applied to Language Teaching.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…

  14. Cognitive functions in Childhood Apraxia of Speech

    NARCIS (Netherlands)

    Nijland, L.; Terband, H.; Maassen, B.

    2015-01-01

    Purpose: Childhood Apraxia of Speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional

  15. Cognitive Functions in Childhood Apraxia of Speech

    Science.gov (United States)

    Nijland, Lian; Terband, Hayo; Maassen, Ben

    2015-01-01

    Purpose: Childhood apraxia of speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional problems. Method: Cognitive functions were investigated…

  16. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    Epoch sequence is useful to manipulate prosody in speech synthesis applications. Accurate estimation of epochs helps in characterizing voice quality features. Epoch extraction also helps in speech enhancement and multispeaker separation. In this tutorial article, the importance of epochs for speech analysis is discussed, ...

  17. Speech and Debate as Civic Education

    Science.gov (United States)

    Hogan, J. Michael; Kurr, Jeffrey A.; Johnson, Jeremy D.; Bergmaier, Michael J.

    2016-01-01

    In light of the U.S. Senate's designation of March 15, 2016 as "National Speech and Debate Education Day" (S. Res. 398, 2016), it only seems fitting that "Communication Education" devote a special section to the role of speech and debate in civic education. Speech and debate have been at the heart of the communication…

  18. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread employment of wavelet analysis in different applications of speech processing. The author examines development and research in these applications, and the book also summarizes the state-of-the-art research on wavelets in speech processing.

  19. DEVELOPMENT AND DISORDERS OF SPEECH IN CHILDHOOD.

    Science.gov (United States)

    KARLIN, ISAAC W.; AND OTHERS

    THE GROWTH, DEVELOPMENT, AND ABNORMALITIES OF SPEECH IN CHILDHOOD ARE DESCRIBED IN THIS TEXT DESIGNED FOR PEDIATRICIANS, PSYCHOLOGISTS, EDUCATORS, MEDICAL STUDENTS, THERAPISTS, PATHOLOGISTS, AND PARENTS. THE NORMAL DEVELOPMENT OF SPEECH AND LANGUAGE IS DISCUSSED, INCLUDING THEORIES ON THE ORIGIN OF SPEECH IN MAN AND FACTORS INFLUENCING THE NORMAL…

  20. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    The second driving force is the impetus being provided by both government and industry for technologies to help break down domestic and international language barriers, these also being barriers to the expansion of policy and commerce. Speech-to-speech and speech-to-text translation are thus emerging as key ...

  1. Speech-in-Speech Recognition: A Training Study

    Science.gov (United States)

    Van Engen, Kristin J.

    2012-01-01

    This study aims to identify aspects of speech-in-noise recognition that are susceptible to training, focusing on whether listeners can learn to adapt to target talkers ("tune in") and learn to better cope with various maskers ("tune out") after short-term training. Listeners received training on English sentence recognition in…

  2. SPEECH ACT ANALYSIS: HOSNI MUBARAK'S SPEECHES IN PRE ...

    African Journals Online (AJOL)


    Agbedo, C. U. Speech Act Analysis of Political discourse in the Nigerian Print Media in discourse. In Awka Journal of Languages & Linguistics Vol. 3, 2008a. Bloom field, L. Language. London: Allen & Unwin, 1933. Kay, M.W. Merriam Websters Collegiate Thesaurus, Masschusetts: Merrian. Webster Inc. 1988. Mbagwu, D.U. ...

  3. Relationship between Speech Intelligibility and Speech Comprehension in Babble Noise

    Science.gov (United States)

    Fontan, Lionel; Tardieu, Julien; Gaillard, Pascal; Woisard, Virginie; Ruiz, Robert

    2015-01-01

    Purpose: The authors investigated the relationship between the intelligibility and comprehension of speech presented in babble noise. Method: Forty participants listened to French imperative sentences (commands for moving objects) in a multitalker babble background for which intensity was experimentally controlled. Participants were instructed to…

  4. Pragmatic Study of Directive Speech Acts in Stories in Alquran

    Directory of Open Access Journals (Sweden)

    Rochmat Budi Santosa

    2016-10-01

    Full Text Available This study aims at describing the directive speech acts in the verses that contain the stories in the Qur'an. Specifically, the objectives of this study are to assess the sub-directive speech acts contained in the verses of the stories and the dominant directive speech acts. The research target is the verses (ayat) containing stories in the Qur'an. This study emphasizes the problem of finding the meaning of verses pragmatically. The data in this study are all expressions of verses about the stories in the Qur'an that contain directive speech acts. In addition, the contexts behind the emergence of these verses are also included. The data collection techniques used are reading and note-taking. The data were analyzed using content analysis, classifying directive speech acts into the 6 (six) categories of Bach and Harnish's theory, namely requestives, questions, requirements, prohibitives, permissives, and advisories. The result is that the requestive speech acts comprise only 1 (one) verse, namely the sub-directive asking for patience. In the sub-directive questions, there are 4 (four) question types, with meanings that ask about what, question tags, why, asking for permission, who, where, which, possibilities, and offering. For the sub-directive requirements there are 60 (sixty) types of command: the command to pray is the most frequent (24 verses), and the command to pay attention is second (21 verses). Among the sub-directive prohibitives, we found 19 kinds of restrictions. As for permissives, there is only 1 (one) verse, which allows punishment. The advisories comprise 2 kinds of advice: 1 verse counseling fear of the punishment of God, and 1 verse advising humility. Thus it can be said that the stories in the Qur'an really do contain messages, including a message to the people to carry out the commands of God and keep away from His prohibitions. The purpose is to crystallize the basic

  5. Highly dominating, highly authoritarian personalities.

    Science.gov (United States)

    Altemeyer, Bob

    2004-08-01

    The author considered the small part of the population whose members score highly on both the Social Dominance Orientation scale and the Right-Wing Authoritarianism scale. Studies of these High SDO-High RWAs, culled from samples of nearly 4000 Canadian university students and over 2600 of their parents and reported in the present article, reveal that these dominating authoritarians are among the most prejudiced persons in society. Furthermore, they seem to combine the worst elements of each kind of personality, being power-hungry, unsupportive of equality, manipulative, and amoral, as social dominators are in general, while also being religiously ethnocentric and dogmatic, as right-wing authoritarians tend to be. The author suggested that, although they are small in number, such persons can have considerable impact on society because they are well-positioned to become the leaders of prejudiced right-wing political movements.

  6. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean

    2018-03-14

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Speech Communication and Liberal Education.

    Science.gov (United States)

    Bradley, Bert E.

    1979-01-01

    Argues for the continuation of liberal education over career-oriented programs. Defines liberal education as one that develops abilities that transcend occupational concerns, and that enables individuals to cope with shifts in values, vocations, careers, and the environment. Argues that speech communication makes a significant contribution to…

  8. "Free Speech" and "Political Correctness"

    Science.gov (United States)

    Scott, Peter

    2016-01-01

    "Free speech" and "political correctness" are best seen not as opposing principles, but as part of a spectrum. Rather than attempting to establish some absolute principles, this essay identifies four trends that impact on this debate: (1) there are, and always have been, legitimate debates about the--absolute--beneficence of…

  9. Prosodic Contrasts in Ironic Speech

    Science.gov (United States)

    Bryant, Gregory A.

    2010-01-01

    Prosodic features in spontaneous speech help disambiguate implied meaning not explicit in linguistic surface structure, but little research has examined how these signals manifest themselves in real conversations. Spontaneously produced verbal irony utterances generated between familiar speakers in conversational dyads were acoustically analyzed…

  10. Neuronal basis of speech comprehension.

    Science.gov (United States)

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning but also the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion on interactions between the dorsal and ventral streams, particularly the involvement of motor related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Paraconsistent semantics of speech acts

    NARCIS (Netherlands)

    Dunin-Kȩplicz, Barbara; Strachocka, Alina; Szałas, Andrzej; Verbrugge, Rineke

    2015-01-01

    This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace

  12. Speech recognition implementation in radiology

    International Nuclear Information System (INIS)

    White, Keith S.

    2005-01-01

    Continuous speech recognition (SR) is an emerging technology that allows direct digital transcription of dictated radiology reports. The SR systems are being widely deployed in the radiology community. This is a review of technical and practical issues that should be considered when implementing an SR system. (orig.)

  13. Fast Monaural Separation of Speech

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Dyrholm, Mads

    2003-01-01

    a Factorial Hidden Markov Model, with non-stationary assumptions on the source autocorrelations modelled through the Factorial Hidden Markov Model, leads to separation in the monaural case. By extending Hansen's work we find that Roweis' assumptions are necessary for monaural speech separation. Furthermore we...

  14. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    automatic recognition of speech (ASR). Instead, likely for historical reasons, envelopes of power spectrum were adopted as main carrier of linguistic information in ASR. However, the relationships between phonetic values of sounds and their short-term spectral envelopes are not straightforward. Consequently, this asks for ...

  15. Gaucho Gazette: Speech and Sensationalism

    OpenAIRE

    Roberto José Ramos

    2013-01-01

    The Gaucho Gazette presents itself as a “popular newspaper”. It attempts to deny its tabloid aesthetic, claiming only to disclose what happens, as if the media were merely a reflection of society. This paper seeks to understand and explain its sensationalism through its discourses, drawing on the semiology of Roland Barthes and its transdisciplinary possibilities.

  16. Gaucho Gazette: Speech and Sensationalism

    Directory of Open Access Journals (Sweden)

    Roberto José Ramos

    2013-07-01

    Full Text Available The Gaucho Gazette presents itself as a “popular newspaper”. It attempts to deny its tabloid aesthetic, claiming only to disclose what happens, as if the media were merely a reflection of society. This paper seeks to understand and explain its sensationalism through its discourses, drawing on the semiology of Roland Barthes and its transdisciplinary possibilities.

  17. Acoustic Analysis of PD Speech

    Directory of Open Access Journals (Sweden)

    Karen Chenausky

    2011-01-01

    Full Text Available According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication.
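The composite measure described above (each speech measure normalized by the controls' standard deviation to represent distance from normal, then combined) can be sketched as a simple z-score average. The function name and the equal-weight averaging are assumptions, since the abstract does not specify the combination rule:

```python
def composite_distance(patient_measures, control_means, control_sds):
    # Express each speech measure as a distance from normal in units of the
    # controls' standard deviation, then average into one composite score.
    z_scores = [abs(p - m) / sd
                for p, m, sd in zip(patient_measures, control_means, control_sds)]
    return sum(z_scores) / len(z_scores)
```

A score of 0 means the patient matches the control means exactly; larger values mean speech farther from normal.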

  18. Pattern recognition in speech and language processing

    CERN Document Server

    Chou, Wu

    2003-01-01

    Minimum Classification Error (MSE) Approach in Pattern Recognition, Wu ChouMinimum Bayes-Risk Methods in Automatic Speech Recognition, Vaibhava Goel and William ByrneA Decision Theoretic Formulation for Adaptive and Robust Automatic Speech Recognition, Qiang HuoSpeech Pattern Recognition Using Neural Networks, Shigeru KatagiriLarge Vocabulary Speech Recognition Based on Statistical Methods, Jean-Luc GauvainToward Spontaneous Speech Recognition and Understanding, Sadaoki FuruiSpeaker Authentication, Qi Li and Biing-Hwang JuangHMMs for Language Processing Problems, Ri

  19. Speech perception of noise with binary gains

    DEFF Research Database (Denmark)

    Wang, DeLiang; Kjems, Ulrik; Pedersen, Michael Syskind

    2008-01-01

    For a given mixture of speech and noise, an ideal binary time-frequency mask is constructed by comparing speech energy and noise energy within local time-frequency units. It is observed that listeners achieve nearly perfect speech recognition from gated noise with binary gains prescribed by the ideal binary mask. Only 16 filter channels and a frame rate of 100 Hz are sufficient for high intelligibility. The results show that, despite a dramatic reduction of speech information, a pattern of binary gains provides an adequate basis for speech perception.
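The ideal-binary-mask construction described above (keep a time-frequency unit when speech energy exceeds noise energy, discard it otherwise, then gate the mixture with those binary gains) can be sketched as follows. The function names and the local-criterion parameter `lc_db` are choices made here for illustration:

```python
def ideal_binary_mask(speech_tf, noise_tf, lc_db=0.0):
    # speech_tf, noise_tf: 2-D lists of local time-frequency energies.
    # A unit is kept (gain 1) when speech energy exceeds noise energy by
    # the local criterion in dB; otherwise it is zeroed (gain 0).
    thresh = 10.0 ** (lc_db / 10.0)
    return [[1 if s > thresh * n else 0 for s, n in zip(srow, nrow)]
            for srow, nrow in zip(speech_tf, noise_tf)]

def apply_mask(mixture_tf, mask):
    # Gate the mixture: retain or discard each time-frequency unit.
    return [[m * g for m, g in zip(mrow, grow)]
            for mrow, grow in zip(mixture_tf, mask)]
```

Note the mask is "ideal" because it is built from the premixed speech and noise, which are known only in the laboratory setting the paper studies.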

  20. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test and received speech. Several tests have been made on a test speech sample of a single speaker with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the used transmission channel and their correlation has been observed.
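The core DTW comparison of test and received speech can be sketched in its textbook form. The paper does not specify the feature representation or local distance, so this generic version takes 1-D sequences and an absolute-difference cost as assumptions:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    # Classic O(len(a) * len(b)) dynamic-time-warping alignment cost:
    # each cell extends the cheapest of the three admissible predecessor
    # paths (insertion, deletion, match).
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW warps time, a sequence aligned against a stretched copy of itself costs zero, which is exactly why it suits comparing reference speech against a delayed or jittered received copy.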

  1. Visual dominance in olfactory memory.

    Science.gov (United States)

    Batic, N; Gabassi, P G

    1987-08-01

    The object of the present study was to verify the emergence of a 'visual dominance' effect in memory tests involving different sensory modes (sight and smell), brought about by the preattentive mechanisms which select the visual sensory mode regardless of the recall task.

  2. Vector-meson dominance revisited

    Directory of Open Access Journals (Sweden)

    Terschlüsen Carla

    2012-12-01

    Full Text Available The interaction of mesons with electromagnetism is often well described by the concept of vector-meson dominance (VMD). However, there are also examples where VMD fails. A simple chiral Lagrangian for pions, rho and omega mesons is presented which can account for the respective agreement and disagreement between VMD and phenomenology in the sector of light mesons.

  3. Hand dominance in orthopaedic surgeons.

    LENUS (Irish Health Repository)

    Lui, Darren F

    2012-08-01

    Handedness is perhaps the most studied human asymmetry. Laterality is the preference shown for one side, and it has been studied in many aspects of medicine. Studies have shown that some orthopaedic procedures had poorer outcomes and identified laterality as a contributing factor. We developed a questionnaire to assess laterality in orthopaedic surgery and compared this to an established scoring system. Sixty-two orthopaedic surgeons were surveyed with the validated Waterloo Handedness Questionnaire (WHQ) and compared with the self-developed Orthopaedic Handedness Questionnaire (OHQ). Fifty-eight were found to be right-hand dominant (RHD) and 4 left-hand dominant (LHD). In RHD surgeons, the average WHQ score was 44.9% and OHQ 15%. For LHD surgeons the WHQ score was 30.2% and OHQ 9.4%. This represents a significant amount of time using the non-dominant hand but does not necessarily determine satisfactory or successful dexterity transferable to the operating room. Training may be required for the non-dominant side.

  4. Testing for Stochastic Dominance Efficiency

    NARCIS (Netherlands)

    G.T. Post (Thierry); O. Linton; Y-J. Whang

    2005-01-01

    We propose a new test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our

  5. Optimal Wavelets for Speech Signal Representations

    Directory of Open Access Journals (Sweden)

    Shonda L. Walker

    2003-08-01

    Full Text Available It is well known that in many speech processing applications, speech signals are characterized by their voiced and unvoiced components. Voiced speech components contain a dense frequency spectrum with many harmonics. The periodic or semi-periodic nature of voiced signals lends itself to Fourier processing. Unvoiced speech contains many high-frequency components and thus resembles random noise. Several methods for voiced and unvoiced speech representations that utilize wavelet processing have been developed. These methods seek to improve the accuracy of wavelet-based speech signal representations using adaptive wavelet techniques; superwavelets, which use a linear combination of adaptive wavelets; Gaussian methods; and a multi-resolution sinusoidal transform approach, to mention a few. This paper addresses the relative performance of these wavelet methods and evaluates the usefulness of wavelet processing in speech signal representations. In addition, this paper will also address some of the hardware considerations for the wavelet methods presented.

  6. Compressed Sensing Adaptive Speech Characteristics Research

    Directory of Open Access Journals (Sweden)

    Long Tao

    2014-09-01

    Full Text Available The sparsity of speech signals is exploited in the DCT domain. Since speech can be separated into voiced and unvoiced segments, an adaptive-measurement speech recovery method based on compressed sensing is proposed in this paper. First, the observation points are distributed according to the ratio of voiced energy to the energy of the entire speech segment. The speech segment is then divided into frames: if a frame is unvoiced, the number of measurements is allocated according to its zero-crossing and energy rates; if a frame is voiced, the number of measurements is allocated according to its energy. The experimental results show that this method outperforms applying compressed sensing directly.

  7. Training changes processing of speech cues in older adults with hearing loss

    Directory of Open Access Journals (Sweden)

    Samira eAnderson

    2013-11-01

    Full Text Available Aging results in a loss of sensory function, and the effects of hearing impairment can be especially devastating due to reduced communication ability. Older adults with hearing loss report that speech, especially in noisy backgrounds, is uncomfortably loud yet unclear. Hearing loss results in an unbalanced neural representation of speech: the slowly-varying envelope is enhanced, dominating representation in the auditory pathway and perceptual salience at the cost of the rapidly-varying fine structure. We hypothesized that older adults with hearing loss can be trained to compensate for these changes in central auditory processing through directed attention to behaviorally-relevant speech sounds. To that end, we evaluated the effects of auditory-cognitive training in older adults (ages 55-79) with normal hearing and hearing loss. After training, the auditory training group with hearing loss experienced a reduction in the neural representation of the speech envelope presented in noise, approaching levels observed in normal hearing older adults. No changes were noted in the control group. Importantly, changes in speech processing were accompanied by improvements in speech perception. Thus, central processing deficits associated with hearing loss may be partially remediated with training, resulting in real-life benefits for everyday communication.

  8. Training changes processing of speech cues in older adults with hearing loss.

    Science.gov (United States)

    Anderson, Samira; White-Schwoch, Travis; Choi, Hee Jae; Kraus, Nina

    2013-01-01

    Aging results in a loss of sensory function, and the effects of hearing impairment can be especially devastating due to reduced communication ability. Older adults with hearing loss report that speech, especially in noisy backgrounds, is uncomfortably loud yet unclear. Hearing loss results in an unbalanced neural representation of speech: the slowly-varying envelope is enhanced, dominating representation in the auditory pathway and perceptual salience at the cost of the rapidly-varying fine structure. We hypothesized that older adults with hearing loss can be trained to compensate for these changes in central auditory processing through directed attention to behaviorally-relevant speech sounds. To that end, we evaluated the effects of auditory-cognitive training in older adults (ages 55-79) with normal hearing and hearing loss. After training, the auditory training group with hearing loss experienced a reduction in the neural representation of the speech envelope presented in noise, approaching levels observed in normal hearing older adults. No changes were noted in the control group. Importantly, changes in speech processing were accompanied by improvements in speech perception. Thus, central processing deficits associated with hearing loss may be partially remediated with training, resulting in real-life benefits for everyday communication.
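The envelope/fine-structure split this record refers to is conventionally computed from the analytic signal. A minimal sketch, assuming SciPy is available (this is the standard decomposition, not the authors' analysis pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_fine_structure(x):
    """Decompose a signal into its slowly-varying envelope and its
    rapidly-varying temporal fine structure via the analytic signal."""
    analytic = hilbert(x)
    envelope = np.abs(analytic)                   # slowly-varying amplitude
    fine_structure = np.cos(np.angle(analytic))   # unit-amplitude carrier
    return envelope, fine_structure

# A 440 Hz tone amplitude-modulated at 4 Hz, mimicking a speech-like envelope.
t = np.linspace(0, 1, 8000, endpoint=False)
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
env, tfs = envelope_and_fine_structure(x)
```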

  9. Lateralization for Speech Predicts Therapeutic Response to Cognitive Behavioral Therapy for Depression

    OpenAIRE

    Kishon, Ronit; Abraham, Karen; Alschuler, Daniel M.; Keilp, John G.; Stewart, Jonathan W.; McGrath, Patrick J.; Bruder, Gerard E.

    2015-01-01

    A prior study (Bruder et al., 1997) found left hemisphere advantage for verbal dichotic listening was predictive of clinical response to cognitive behavioral therapy (CBT) for depression. This study aimed to confirm this finding and to examine the value of neuropsychological tests, which have shown promise for predicting antidepressant response. Twenty depressed patients who subsequently completed 14 weeks of CBT and 74 healthy adults were tested on a Dichotic Fused Words Test (DFWT). Patient...

  10. Speech Enhancement with Natural Sounding Residual Noise Based on Connected Time-Frequency Speech Presence Regions

    Directory of Open Access Journals (Sweden)

    Sørensen Karsten Vandborg

    2005-01-01

    Full Text Available We propose time-frequency domain methods for noise estimation and speech enhancement. A speech presence detection method is used to find connected time-frequency regions of speech presence. These regions are used by a noise estimation method and both the speech presence decisions and the noise estimate are used in the speech enhancement method. Different attenuation rules are applied to regions with and without speech presence to achieve enhanced speech with natural sounding attenuated background noise. The proposed speech enhancement method has a computational complexity, which makes it feasible for application in hearing aids. An informal listening test shows that the proposed speech enhancement method has significantly higher mean opinion scores than minimum mean-square error log-spectral amplitude (MMSE-LSA) and decision-directed MMSE-LSA.
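The region-based attenuation idea can be sketched as follows, assuming SciPy's connected-component labeling for the "connected time-frequency regions". The threshold, gains, and minimum region size are illustrative, not the paper's rules.

```python
import numpy as np
from scipy.ndimage import label

def enhance(spectrogram, noise_psd, presence_threshold=2.0,
            noise_gain=0.3, min_region_size=3):
    """Apply different attenuation rules inside and outside connected
    time-frequency regions of speech presence: unity gain where speech is
    present, a fixed attenuation elsewhere (leaving natural-sounding
    residual noise). Simplified sketch."""
    snr = spectrogram / (noise_psd + 1e-12)
    raw_presence = snr > presence_threshold        # per-bin presence decisions
    regions, n_regions = label(raw_presence)       # connected TF regions
    presence = np.zeros_like(raw_presence)
    for r in range(1, n_regions + 1):
        mask = regions == r
        if mask.sum() >= min_region_size:          # discard isolated bins
            presence |= mask
    gain = np.where(presence, 1.0, noise_gain)
    return gain * spectrogram, presence

spec = np.ones((4, 10))
spec[1:3, 2:5] = 10.0     # a connected region of speech energy
spec[0, 8] = 10.0         # an isolated high-energy bin (likely noise)
enhanced, presence = enhance(spec, noise_psd=np.ones((4, 10)))
```

The connected region is passed through untouched, while the isolated bin and the background are attenuated.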

  11. Perception of speech sounds in school-age children with speech sound disorders

    Science.gov (United States)

    Preston, Jonathan L.; Irwin, Julia R.; Turcios, Jacqueline

    2015-01-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System (Rvachew, 1994), which has been effectively used to assess preschoolers’ ability to perform goodness judgments, is explored for school-age children with residual speech errors (RSE). However, data suggest that this particular task may not be sensitive to perceptual differences in school-age children. The need for the development of clinical tools for assessment of speech perception in school-age children with RSE is highlighted, and clinical suggestions are provided. PMID:26458198

  12. Perception of Speech Sounds in School-Aged Children with Speech Sound Disorders.

    Science.gov (United States)

    Preston, Jonathan L; Irwin, Julia R; Turcios, Jacqueline

    2015-11-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSE is highlighted, and clinical suggestions are provided.

  13. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... this issue, Tuomainen et al. (2005) used sine-wave speech stimuli created from three time-varying sine waves tracking the formants of a natural speech signal. Naïve observers tend not to recognize sine wave speech as speech but become able to decode its phonetic content when informed of the speech......-like nature of the signal. The sine-wave speech was dubbed onto congruent and incongruent video of a talking face. Tuomainen et al. found that the McGurk effect did not occur for naïve observers, but did occur when observers were informed. This indicates that the McGurk illusion is due to a mechanism...

  14. Adaptive redundant speech transmission over wireless multimedia sensor networks based on estimation of perceived speech quality.

    Science.gov (United States)

    Kang, Jin Ah; Kim, Hong Kook

    2011-01-01

    An adaptive redundant speech transmission (ARST) approach to improve the perceived speech quality (PSQ) of speech streaming applications over wireless multimedia sensor networks (WMSNs) is proposed in this paper. The proposed approach estimates the PSQ as well as the packet loss rate (PLR) from the received speech data. Subsequently, it decides whether the transmission of redundant speech data (RSD) is required in order to assist a speech decoder to reconstruct lost speech signals for high PLRs. According to the decision, the proposed ARST approach controls the RSD transmission, then it optimizes the bitrate of speech coding to encode the current speech data (CSD) and RSD bitstream in order to maintain the speech quality under packet loss conditions. The effectiveness of the proposed ARST approach is then demonstrated using the adaptive multirate-narrowband (AMR-NB) speech codec and ITU-T Recommendation P.563 as a scalable speech codec and the PSQ estimation, respectively. It is shown from the experiments that a speech streaming application employing the proposed ARST approach significantly improves speech quality under packet loss conditions in WMSNs.
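The ARST control decision described above reduces to: estimate PLR and PSQ from the received stream, enable redundant speech data when both indicate trouble, and re-split the coding bitrate between CSD and RSD. A sketch of that logic; the thresholds and the two-thirds split are illustrative assumptions, not values from the paper.

```python
def decide_redundancy(plr, psq, plr_threshold=0.05, psq_target=3.5,
                      total_bitrate=12200):
    """Sketch of the ARST control decision: when packet loss is high and
    estimated perceived quality is low, split the bitrate budget between
    current speech data (CSD) and redundant speech data (RSD); otherwise
    spend it all on CSD. Illustrative thresholds and split."""
    send_rsd = plr > plr_threshold and psq < psq_target
    if send_rsd:
        csd_rate = int(total_bitrate * 2 / 3)  # lower-rate primary coding
        rsd_rate = total_bitrate - csd_rate    # remainder carries redundancy
    else:
        csd_rate, rsd_rate = total_bitrate, 0
    return send_rsd, csd_rate, rsd_rate

lossy = decide_redundancy(plr=0.12, psq=2.8)   # degraded channel -> RSD on
clean = decide_redundancy(plr=0.01, psq=4.2)   # clean channel -> RSD off
```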

  15. Commencement Speech as a Hybrid Polydiscursive Practice

    Directory of Open Access Journals (Sweden)

    Светлана Викторовна Иванова

    2017-12-01

    Full Text Available Discourse and media communication researchers have noted that popular discursive and communicative practices tend toward hybridization and convergence. Discourse, understood as language in use, is flexible; consequently, one and the same text can represent several types of discourse. A vivid example of this tendency is the American commencement speech (also called a commencement address or graduation speech). A commencement speech is an address to university graduates which, in keeping with the modern trend, is delivered by prominent media personalities (politicians, athletes, actors, etc.). The objective of this study is to define the specificity of the realization of polydiscursive practices within commencement speech. The research involves discursive, contextual, stylistic and definitive analyses. Methodologically the study is based on discourse analysis theory; in particular, the notion of a discursive practice as a verbalized social practice forms the conceptual basis of the research. The study draws upon a hundred commencement speeches delivered by prominent representatives of American society from the 1980s to the present. In brief, commencement speech belongs to the institutional discourse that public speaking embodies. Its institutional parameters are well represented in speeches delivered by people in power, such as American and university presidents. Nevertheless, as the results of the research indicate, the institutional character of commencement speech is not its only feature. Conceptual information analysis makes it possible to relate commencement speech to didactic discourse, as it aims to teach university graduates how to deal with the challenges life presents. Discursive practices of personal discourse are also actively integrated into commencement speech discourse. More than that, existential discursive practices also find their way into the discourse under study. Commencement

  16. Activation of dominant hemisphere association cortex during naming as a function of cognitive performance in mild traumatic brain injury: Insights into mechanisms of lexical access

    Directory of Open Access Journals (Sweden)

    Mihai Popescu

    2017-01-01

    Full Text Available Patients with a history of mild traumatic brain injury (mTBI) and objective cognitive deficits frequently experience word finding difficulties in normal conversation. We sought to improve our understanding of this phenomenon by determining if the scores on standardized cognitive testing are correlated with measures of brain activity evoked in a word retrieval task (confrontational picture naming). The study participants (n = 57) were military service members with a history of mTBI. The General Memory Index (GMI), determined after administration of the Rivermead Behavioral Memory Test, Third Edition, was used to assign subjects to three groups: low cognitive performance (Group 1: GMI ≤ 87, n = 18), intermediate cognitive performance (Group 2: 88 ≤ GMI ≤ 99, n = 18), and high cognitive performance (Group 3: GMI ≥ 100, n = 21). Magnetoencephalography data were recorded while participants named eighty pictures of common objects. Group differences in evoked cortical activity were observed relatively early (within 200 ms from picture onset) over a distributed network of left hemisphere cortical regions including the fusiform gyrus, the entorhinal and parahippocampal cortex, the supramarginal gyrus and posterior part of the superior temporal gyrus, and the inferior frontal and rostral middle frontal gyri. Differences were also present in bilateral cingulate cortex and paracentral lobule, and in the right fusiform gyrus. All differences reflected a lower amplitude of the evoked responses for Group 1 relative to Groups 2 and 3. These findings may indicate weak afferent inputs to and within an extended cortical network including association cortex of the dominant hemisphere in patients with low cognitive performance. The association between word finding difficulties and low cognitive performance may therefore be the result of a diffuse pathophysiological process affecting distributed neuronal networks serving a wide range of cognitive

  17. Performance of language tasks in patients with ruptured aneurysm of the left hemisphere worsens in the post-surgical evaluation

    Directory of Open Access Journals (Sweden)

    Ana Cláudia C. Vieira

    2016-08-01

    Full Text Available ABSTRACT Subarachnoid hemorrhage (SAH) impairs higher cortical functions. However, little information is available on changes in language after aneurysmal SAH and the influence of aneurysm location. Objective To assess language and verbal fluency performance pre- and post-surgery in patients with aneurysmal SAH caused by an aneurysm of the anterior communicating artery (AcomA), left middle cerebral artery (L-MCA) or left posterior communicating artery (L-PcomA). Methods Seventy-nine patients with SAH were assessed on two occasions, pre- and post-surgical treatment, and were divided into three groups by aneurysm location. Results Performance deteriorated in all patients during the post-surgical period; L-MCA aneurysm patients displayed a reduction in verbal naming and fluency; L-PcomA patients deteriorated in written language and fluency tasks. Conclusion After the surgical procedure the patients declined on various language tasks, with the differences in performance directly related to the location of the aneurysm.

  18. Hemispheric specificity for proprioception: Postural control of standing following right or left hemisphere damage during ankle tendon vibration.

    Science.gov (United States)

    Duclos, Noémie C; Maynard, Luc; Abbas, Djawad; Mesure, Serge

    2015-11-02

    Right brain damage (RBD) following stroke often causes significant postural instability. In standing (without vision), patients with RBD are more unstable than those with left brain damage (LBD). We hypothesised that this postural instability would relate to the cortical integration of proprioceptive afferents. The aim of this study was to use tendon vibration to investigate whether these changes were specific to the paretic or non-paretic limbs. 14 LBD, 12 RBD patients and 20 healthy subjects were included. Displacement of the Centre of Pressure (CoP) was recorded during quiet standing, then during 3 vibration conditions (80 Hz - 20s): paretic limb, non-paretic limb (left and right limbs for control subjects) and bilateral. Vibration was applied separately to the peroneal and Achilles tendons. Mean antero-posterior position of the CoP, variability and velocity were calculated before (4s), during and after (24s) vibration. For all parameters, the strongest perturbation was during Achilles vibrations. The Achilles non-paretic condition induced a larger backward displacement than the Achilles paretic condition. This condition caused specific behaviour on the velocity: the LBD group was perturbed at the onset of the vibrations, but gradually recovered their stability; the RBD group was significantly perturbed thereafter. After bilateral Achilles vibration, RBD patients required the most time to restore initial posture. The reduction in use of information from the paretic limb may be a central strategy to deal with risk-of-fall situations such as during Achilles vibration. The postural behaviour is profoundly altered by lesions of the right hemisphere when proprioception is perturbed. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Bilingualism yields language-specific plasticity in left hemisphere's circuitry for learning to read in young children.

    Science.gov (United States)

    Jasińska, K K; Berens, M S; Kovelman, I; Petitto, L A

    2017-04-01

    How does bilingual exposure impact children's neural circuitry for learning to read? Theories of bilingualism suggest that exposure to two languages may yield a functional and neuroanatomical adaptation to support the learning of two languages (Klein et al., 2014). To test the hypothesis that this neural adaptation may vary as a function of structural and orthographic characteristics of bilinguals' two languages, we compared Spanish-English and French-English bilingual children, and English monolingual children, using functional Near Infrared Spectroscopy neuroimaging (fNIRS, ages 6-10, N = 26). Spanish offers consistent sound-to-print correspondences ("phonologically transparent" or "shallow"); such correspondences are more opaque in French and even more opaque in English (which has both transparent and "phonologically opaque" or "deep" correspondences). Consistent with our hypothesis, both French- and Spanish-English bilinguals showed hyperactivation in left posterior temporal regions associated with direct sound-to-print phonological analyses and hypoactivation in left frontal regions associated with assembled phonology analyses. Spanish, but not French, bilinguals showed a similar effect when reading irregular words. The findings inform theories of bilingual and cross-linguistic literacy acquisition by suggesting that structural characteristics of bilinguals' two languages and their orthographies have a significant impact on children's neuro-cognitive architecture for learning to read. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Another look at category effects on colour perception and their left hemispheric lateralisation: no evidence from a colour identification task.

    Science.gov (United States)

    Suegami, Takashi; Aminihajibashi, Samira; Laeng, Bruno

    2014-05-01

    The present study aimed to replicate category effects on colour perception and their lateralisation to the left cerebral hemisphere (LH). Previous evidence for lateralisation of colour category effects has been obtained with tasks where a differently coloured target was searched within a display and participants reported the lateral location of the target. However, a left/right spatial judgment may yield LH-laterality effects per se. Thus, we employed an identification task that does not require a spatial judgment and used the same colour set that previously revealed LH-lateralised category effects. The identification task was better performed with between-category colours than with within-category colours, both in terms of accuracy and latency, but such category effects were bilateral or RH-lateralised, and no evidence was found for LH-laterality effects. The accuracy scores, moreover, indicated that the category effects derived from low sensitivities for within-blue colours and did not reflect the effects of categorical structures on colour perception. Furthermore, the classic "category effects" were observed in participants' response biases, instead of sensitivities. The present results argue against both the LH-lateralised category effects on colour perception and the existence of colour category effects per se.
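The sensitivity/response-bias distinction this record turns on is the signal-detection split between d′ and criterion c. A minimal computation under the standard equal-variance Gaussian model (the hit/false-alarm rates below are invented for illustration):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Separate sensitivity (d') from response bias (criterion c):
    d' = z(H) - z(FA), c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Two observers with (nearly) the same sensitivity but different bias:
d1, c1 = dprime_and_criterion(0.84, 0.16)   # unbiased
d2, c2 = dprime_and_criterion(0.93, 0.31)   # liberal bias
```

Raw accuracy alone would conflate these two cases; the decomposition is what lets a "category effect" in accuracy be attributed to bias rather than perception.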

  1. Left inferior frontal gyrus mediates morphosyntax: ERP evidence from verb processing in left-hemisphere damaged patients.

    Science.gov (United States)

    Regel, Stefanie; Kotz, Sonja A; Henseler, Ilona; Friederici, Angela D

    2017-01-01

    Neurocognitive models of language comprehension have proposed different mechanisms with different neural substrates mediating human language processing. Whether the left inferior frontal gyrus (LIFG) is engaged in morpho-syntactic information processing is currently still controversially debated. The present study addresses this issue by examining the processing of irregular verb inflection in real words (e.g., swim > swum > swam) and pseudowords (e.g., frim > frum > fram) by using event-related brain potentials (ERPs) in neurological patients with lesions in the LIFG involving Broca's area as well as healthy controls. Different ERP patterns in response to the grammatical violations were observed in both groups. Controls showed a biphasic negativity-P600 pattern in response to incorrect verb inflections whereas patients with LIFG lesions displayed a N400. For incorrect pseudoword inflections, a late positivity was found in controls, while no ERP effects were obtained in patients. These findings of different ERP patterns in the two groups strongly indicate an involvement of LIFG in morphosyntactic processing, thereby suggesting brain regions' specialization for different language functions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. From nature-dominated to human-dominated environmental changes

    Science.gov (United States)

    Messerli, Bruno; Grosjean, Martin; Hofer, Thomas; Núñez, Lautaro; Pfister, Christian

    2000-01-01

    To what extent is it realistic and useful to view human history as a sequence of changes from highly vulnerable societies of hunters and gatherers through periods with less vulnerable, well buffered and highly productive agrarian-urban societies to a world with regions of extreme overpopulation and overuse of life support systems, so that vulnerability to climatic-environmental changes and extreme events is again increasing? This question cannot be fully answered in our present state of knowledge, but at least we can try to illustrate, with three case studies from different continents, time periods and ecosystems, some fundamental changes in the relationship between natural processes and human activities that occur, as we pass from a nature-dominated to a human dominated environment. 1. Early-mid Holocene: Nature dominated environment — human adaptation, mitigation, and migration. In the central Andes, the Holocene climate changed from humid (10,800-8000 BP) to extreme arid (8000-3600 BP) conditions. Over the same period, prehistoric hunting communities adopted a more sedentary pattern of resource use by settling close to the few perennial water bodies, where they began the process of domesticating camelids around 5000 BP and irrigation from about 3100 BP. 2. Historical period: An agrarian society in transition from an "enduring" to an innovative human response. Detailed documentary evidence from Western Europe may be used to reconstruct quite precisely the impacts of climatic variations on agrarian societies. The period considered spans a major transition from an apparently passive response to the vagaries of the environment during the 16th century to an active and innovative attitude from the onset of the agrarian revolution in the late 18th century through to the present day. The associated changes in technology and in agricultural practices helped to create a society better able to survive the impact of climatic extremes. 3. The present day: A human dominated

  3. Speech-rhythm characteristics of client-centered, Gestalt, and rational-emotive therapy interviews.

    Science.gov (United States)

    Chen, C L

    1981-07-01

    The aim of this study was to discover whether client-centered, Gestalt, and rational-emotive psychotherapy interviews could be described and differentiated on the basis of quantitative measurement of their speech rhythms. These measures were taken from the sound portion of a film showing interviews by Carl Rogers, Frederick Perls, and Albert Ellis. The variables used were total session and percentage of speaking times, speaking turns, vocalizations, interruptions, inside and switching pauses, and speaking rates. The three types of interview had very distinctive patterns of speech-rhythm variables. These patterns suggested that Rogers's Client-centered therapy interview was patient dominated, that Ellis's rational-emotive therapy interview was therapist dominated, and that Perls's Gestalt therapy interview was neither therapist nor patient dominated.

  4. Connected domination stable graphs upon edge addition ...

    African Journals Online (AJOL)

    A set S of vertices in a graph G is a connected dominating set of G if S dominates G and the subgraph induced by S is connected. We study the graphs for which adding any edge does not change the connected domination number. Keywords: Connected domination, connected domination stable, edge addition ...
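The two defining properties named here (S dominates G, and the subgraph induced by S is connected) can be checked directly. A minimal sketch over an adjacency-list graph:

```python
from collections import deque

def is_connected_dominating_set(adj, s):
    """Check both defining properties: every vertex is in S or adjacent
    to a vertex of S (domination), and the subgraph induced by S is
    connected (checked by BFS restricted to S)."""
    s = set(s)
    if not s:
        return False
    if not all(v in s or s & set(adj[v]) for v in adj):
        return False
    # BFS within the induced subgraph G[S]
    start = next(iter(s))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in s and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == s

# Path graph a-b-c-d: {b, c} is a connected dominating set; {a, d}
# dominates the path but induces a disconnected subgraph.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
ok = is_connected_dominating_set(path, {'b', 'c'})
bad = is_connected_dominating_set(path, {'a', 'd'})
```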

  5. But is it speech? Making critical sense of the dominant constitutional ...

    African Journals Online (AJOL)

    Under the pervasive influence of United States First Amendment jurisprudence, adult gender-specific sexually explicit (or “pornographic”) material is conceptualized, and thus protected in the “marketplace of ideas”, as a particular mode of expression; to be viewed as part of the fabric of an open, free and democratic society.

  6. An analysis of the masking of speech by competing speech using self-report data.

    Science.gov (United States)

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  7. Speech-in-speech perception and executive function involvement.

    Directory of Open Access Journals (Sweden)

    Marcela Perrone-Bertolotti

    Full Text Available The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word, implemented in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words and simultaneously listen to only one of two pronounced sentences. The attention of the participant was manipulated: The prime was in the pronounced sentence listened to by the participant or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction effect between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the implication of executive functions in speech-in-noise understanding capacities.

  8. Human rights or security? Positions on asylum in European Parliament speeches

    DEFF Research Database (Denmark)

    Frid-Nielsen, Snorre Sylvester

    2018-01-01

    This study examines speeches in the European Parliament relating to asylum. Conceptually, it tests hypotheses concerning the relation between national parties and Members of European Parliament (MEPs). The computer-based content analysis method Wordfish is used to examine 876 speeches from 2004-2014, scaling MEPs along a unidimensional policy space. Debates on asylum predominantly concern positions for or against European Union (EU) security measures. Surprisingly, national party preferences for EU integration were not the dominant factor. The strongest predictors of MEPs' positions are their national
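Wordfish scales speakers by modeling word counts as Poisson with log-rate alpha_i + psi_j + beta_j * theta_i, where theta_i is speech i's position. A sketch that evaluates that model's log-likelihood; the actual estimation (alternating Poisson regressions) is omitted, and all parameter values below are invented for illustration.

```python
import math

def wordfish_loglik(counts, alpha, psi, beta, theta):
    """Log-likelihood of the Wordfish scaling model: the count of word j
    in speech i ~ Poisson(lambda_ij), with
    log lambda_ij = alpha_i + psi_j + beta_j * theta_i."""
    ll = 0.0
    for i, row in enumerate(counts):
        for j, y in enumerate(row):
            log_lam = alpha[i] + psi[j] + beta[j] * theta[i]
            ll += y * log_lam - math.exp(log_lam) - math.lgamma(y + 1)
    return ll

# Two speeches, three words; theta separates pro/anti-security positions,
# beta gives word 0 a pro-security slant and word 1 an anti-security one.
counts = [[5, 1, 2], [1, 6, 2]]
ll = wordfish_loglik(counts, alpha=[0.0, 0.0], psi=[1.0, 1.0, 0.7],
                     beta=[1.0, -1.0, 0.0], theta=[0.5, -0.5])
```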

  9. On dominator colorings in graphs

    Indian Academy of Sciences (India)

    Graph coloring and domination are two major areas in graph theory that have been ... independent set if no two vertices in S are adjacent. ... independent set. The corona G1 ◦ G2 of two graphs G1 and G2 is defined to be the graph. G obtained by taking one copy of G1 and |V(G1)| copies of G2, and then joining the i-th.

  10. Untangling Partnership and Domination Morality

    Directory of Open Access Journals (Sweden)

    David Loye

    2015-06-01

    Full Text Available Riane Eisler’s (1987) cultural transformation theory is an effective framework for understanding many of the constructs that shape society. This article uses Eisler’s theory to explain the formation of morality and the construction of conscience. It contrasts partnership morality and domination morality, and describes the factors that shape our tendency to embrace one or the other. The article helps us understand that we have a choice, and invites us to choose partnership morality.

  11. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is the unexplained individual differences in speech perception performance even for individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due to either examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: the first is to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions. The second aim is to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics both across tasks and across sessions; and finally, to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing impaired listeners.
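One of the degradation types named above, noise-vocoded speech, is produced by a standard recipe: band-pass filter the signal, extract each band's envelope, and use it to modulate band-limited noise. A minimal sketch assuming SciPy; the filter order and band edges are illustrative choices, not from this record.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, band_edges):
    """Minimal noise vocoder: per band, extract the envelope and use it
    to modulate band-limited noise, discarding temporal fine structure."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        env = np.abs(hilbert(sosfiltfilt(sos, x)))        # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                               # envelope-modulated noise
    return out

fs = 8000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speechlike, fs, band_edges=[100, 562, 3162])
```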

  12. Some articulatory details of emotional speech

    Science.gov (United States)

    Lee, Sungbok; Yildirim, Serdar; Bulut, Murtaza; Kazemzadeh, Abe; Narayanan, Shrikanth

    2005-09-01

    Differences in speech articulation among four emotion types, neutral, anger, sadness, and happiness are investigated by analyzing tongue tip, jaw, and lip movement data collected from one male and one female speaker of American English. The data were collected using an electromagnetic articulography (EMA) system while subjects produced simulated emotional speech. Pitch, root-mean-square (rms) energy and the first three formants were estimated for vowel segments. For both speakers, angry speech exhibited the largest rms energy and largest articulatory activity in terms of displacement range and movement speed. Happy speech is characterized by the largest pitch variability. It has higher rms energy than neutral speech but articulatory activity is rather comparable to, or less than, neutral speech. That is, happy speech is more prominent in voicing activity than in articulation. Sad speech exhibits the longest sentence duration and lower rms energy. However, its articulatory activity is no less than neutral speech. Interestingly, for the male speaker, articulation for vowels in sad speech is consistently more peripheral (i.e., more forwarded displacements) when compared to other emotions. However, this does not hold for the female speaker. These and other results will be discussed in detail with associated acoustics and perceived emotional qualities. [Work supported by NIH.]

  13. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  14. Relative Salience of Speech Rhythm and Speech Rate on Perceived Foreign Accent in a Second Language.

    Science.gov (United States)

    Polyanskaya, Leona; Ordin, Mikhail; Busa, Maria Grazia

    2017-09-01

    We investigated the independent contribution of speech rate and speech rhythm to perceived foreign accent. To address this issue we used a resynthesis technique that allows neutralizing segmental and tonal idiosyncrasies between identical sentences produced by French learners of English at different proficiency levels and maintaining the idiosyncrasies pertaining to prosodic timing patterns. We created stimuli that (1) preserved the idiosyncrasies in speech rhythm while controlling for the differences in speech rate between the utterances; (2) preserved the idiosyncrasies in speech rate while controlling for the differences in speech rhythm between the utterances; and (3) preserved the idiosyncrasies both in speech rate and speech rhythm. All the stimuli were created in intoned (with imposed intonational contour) and flat (with monotonized, constant F0) conditions. The original and the resynthesized sentences were rated by native speakers of English for degree of foreign accent. We found that both speech rate and speech rhythm influence the degree of perceived foreign accent, but the effect of speech rhythm is larger than that of speech rate. We also found that intonation enhances the perception of fine differences in rhythmic patterns but reduces the perceptual salience of fine differences in speech rate.
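The rate-controlling resynthesis in condition (1) amounts to uniformly rescaling interval durations: total duration (and hence speech rate) is equated across utterances while relative timing (rhythm) is preserved. A minimal sketch under that assumption, with per-interval durations taken as given:

```python
def equate_rate(durations, target_total):
    """Uniformly rescale interval durations (s) so the utterance's total
    duration -- and hence its speech rate -- matches a target, while the
    relative timing pattern (rhythm) is preserved."""
    scale = target_total / sum(durations)
    return [d * scale for d in durations]

# A 3 s utterance stretched to 6 s: proportions between intervals unchanged.
slowed = equate_rate([0.5, 1.0, 1.5], 6.0)
```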

  15. Aerosol emission during human speech

    Science.gov (United States)

    Asadi, Sima; Wexler, Anthony S.; Cappa, Christopher D.; Bouvier, Nicole M.; Barreda-Castanon, Santiago; Ristenpart, William D.

    2017-11-01

    We show that the rate of aerosol particle emission during healthy human speech is strongly correlated with the loudness (amplitude) of vocalization. Emission rates range from approximately 1 to 50 particles per second for quiet to loud amplitudes, regardless of the language spoken (English, Spanish, Mandarin, or Arabic). Intriguingly, a small fraction of individuals behave as "super emitters," consistently emitting an order of magnitude more aerosol particles than their peers. We interpret the results in terms of the egressive airflow rate during vocalization, which is known to vary significantly for different types of vocalization and for different individuals. The results suggest that individual speech patterns could affect the probability of airborne disease transmission. The results also provide a possible explanation for the existence of "super spreaders" who transmit pathogens much more readily than average and who play a key role in the spread of epidemics.
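The reported loudness-emission correlation is the kind of relationship an ordinary least-squares fit summarizes. A sketch with invented readings (not the study's data):

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical vocalization loudness (dB) vs. particle emission rate (1/s).
loudness_db = [50, 60, 70, 80]
particles_per_s = [2, 10, 25, 48]
slope, intercept = ols_fit(loudness_db, particles_per_s)
```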

  16. Status Report on Speech Research

    Science.gov (United States)

    1992-06-01

    Hulstijn (Eds.), Speech motor dynamics in stuttering (pp. 57-76). Wien: Springer-Verlag. Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1982). On the ... Churchland, P. M. (1989). A neurocomputational perspective: The nature ... experience. The first group consisted of four university-level teachers, who were relatively experienced learners of French, and the second group of

  17. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  18. Segmenting Words from Natural Speech: Subsegmental Variation in Segmental Cues

    Science.gov (United States)

    Rytting, C. Anton; Brew, Chris; Fosler-Lussier, Eric

    2010-01-01

    Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We…

  19. Speech Prosody in Persian Language

    Directory of Open Access Journals (Sweden)

    Maryam Nikravesh

    2014-05-01

    Full Text Available Background: Verbal communication involves, in addition to semantic and grammatical aspects (vocabulary, syntax, and phonemes), special voice characteristics called speech prosody. Speech prosody is an important factor of communication that includes intonation, duration, pitch, loudness, stress, rhythm, and so on. The aim of this survey is to study several prosodic factors: duration, fundamental frequency range, and intonation contour. Materials and Methods: This study was performed with a cross-sectional, descriptive-analytic approach. The participants were 134 males and females between 18 and 30 years old who were typical speakers of Persian. Two sentences, one interrogative and one declarative, were studied. Voice samples were analyzed with the Dr. Speech software (real analysis software), the data were analyzed with one-way analysis of variance and independent t-tests, and intonation contours were drawn for the sentences. Results: Mean duration differed significantly between the sentence types, and between females and males. Fundamental frequency range did not differ significantly between sentence types; it was higher in females than in males. Conclusion: Duration is an effective factor in Persian prosody. The higher fundamental frequency range in females reflects anatomical and physiological differences in the phonation system, and may also result from female patterns of language use in Farsi. The final part of the intonation contour is rising in yes/no questions and falling in declarative sentences.

  20. Dominant Maneuver: The Art of the Possible

    National Research Council Canada - National Science Library

    Brozenick, Norman

    1997-01-01

    ...) dominant maneuver, (2) precision engagement, (3) full dimensional protection, and (4) focused logistics. This paper defines dominant maneuver as American maneuver warfare for the early 21st century...

  1. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: Introduction

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

  2. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  3. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing beyond nonlinear techniques, showing how heuristic and psychological models of human interaction can be exploited to implement socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is "outside of the box" (see Björn Schuller's foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances "inside" and "outside" themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease; and attempts to improve the effectiveness and performa...

  4. Speech Intelligibility Evaluation for Mobile Phones

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Cubick, Jens; Dau, Torsten

    2015-01-01

    In the development process of modern telecommunication systems, such as mobile phones, it is common practice to use computer models to objectively evaluate the transmission quality of the system, instead of time-consuming perceptual listening tests. Such models have typically focused on the quality...... of the transmitted speech, while little or no attention has been provided to speech intelligibility. The present study investigated to what extent three state-of-the art speech intelligibility models could predict the intelligibility of noisy speech transmitted through mobile phones. Sentences from the Danish...... Dantale II speech material were mixed with three different kinds of background noise, transmitted through three different mobile phones, and recorded at the receiver via a local network simulator. The speech intelligibility of the transmitted sentences was assessed by six normal-hearing listeners...
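Mixing the Dantale II sentences with background noise presupposes scaling the noise to a target SNR before transmission. A minimal sketch of that standard step (the sample values are illustrative):

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then add the scaled noise to the speech samples."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]

# Toy waveforms mixed at 0 dB SNR: noise is attenuated to match speech power.
mixed = mix_at_snr([1.0, -1.0, 1.0, -1.0], [2.0, -2.0, 2.0, -2.0], 0.0)
```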

  5. Acquirement and enhancement of remote speech signals

    Science.gov (United States)

    Lü, Tao; Guo, Jin; Zhang, He-yong; Yan, Chun-hui; Wang, Can-jin

    2017-07-01

    To address the challenges of non-cooperative and remote acoustic detection, an all-fiber laser Doppler vibrometer (LDV) is established. The all-fiber LDV system offers the advantages of smaller size, lightweight design and robust structure, hence it is a better fit for remote speech detection. In order to improve the performance and the efficiency of the LDV for long-range hearing, speech enhancement technology based on the optimally modified log-spectral amplitude (OM-LSA) algorithm is used. The experimental results show that comprehensible speech signals within a range of 150 m can be obtained by the proposed LDV. The signal-to-noise ratio (SNR) and mean opinion score (MOS) of the LDV speech signal can be increased by 100% and 27%, respectively, by using the speech enhancement technology. This all-fiber LDV, which combines the speech enhancement technology, can meet practical demands in engineering.

  6. Primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Jung, Youngsin; Duffy, Joseph R; Josephs, Keith A

    2013-09-01

    Primary progressive aphasia is a neurodegenerative syndrome characterized by progressive language dysfunction. The majority of primary progressive aphasia cases can be classified into three subtypes: nonfluent/agrammatic, semantic, and logopenic variants. Each variant presents with unique clinical features, and is associated with distinctive underlying pathology and neuroimaging findings. Unlike primary progressive aphasia, apraxia of speech is a disorder that involves inaccurate production of sounds secondary to impaired planning or programming of speech movements. Primary progressive apraxia of speech is a neurodegenerative form of apraxia of speech, and it should be distinguished from primary progressive aphasia given its discrete clinicopathological presentation. Recently, there have been substantial advances in our understanding of these speech and language disorders. The clinical, neuroimaging, and histopathological features of primary progressive aphasia and apraxia of speech are reviewed in this article. The distinctions among these disorders for accurate diagnosis are increasingly important from a prognostic and therapeutic standpoint.

  7. Modelling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    Jørgensen and Dau (J Acoust Soc Am 130:1475-1487, 2011) proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech....... Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting the intelligibility of reverberant speech as well...... subjected to phase jitter, a condition in which the spectral structure of the speech signal is strongly affected, while the broadband temporal envelope is kept largely intact. In contrast, the effects of this distortion can be predicted successfully by the spectro-temporal modulation...
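The SNRenv idea can be caricatured in a few lines: compare the fluctuation power of the noisy-speech envelope with that of the noise-alone envelope. The real sEPSM computes this per modulation filter and audio channel; the single-band version below is a deliberately simplified sketch:

```python
import math

def snr_env_db(env_noisy_speech, env_noise):
    """Envelope-domain SNR (dB): fluctuation power of the noisy-speech
    envelope in excess of the noise-alone envelope, relative to the noise
    envelope's fluctuation power. Inputs are envelope samples at a common
    rate; fluctuation power is mean-normalized AC power."""
    def norm_ac_power(env):
        m = sum(env) / len(env)
        return sum((e - m) ** 2 for e in env) / len(env) / (m * m)
    p_sn = norm_ac_power(env_noisy_speech)
    p_n = norm_ac_power(env_noise)
    # Floor at a small positive value so the log is defined when no speech
    # fluctuation survives the noise.
    return 10 * math.log10(max(p_sn - p_n, 1e-9) / p_n)
```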

  8. WMS-III performance in epilepsy patients following temporal lobectomy.

    Science.gov (United States)

    Doss, Robert C; Chelune, Gordon J; Naugle, Richard I

    2004-03-01

    We examined performances on the Wechsler Memory Scale-3rd Edition (WMS-III) among patients who underwent temporal lobectomy for the control of medically intractable epilepsy. There were 51 right (RTL) and 56 left (LTL) temporal lobectomy patients. All patients were left hemisphere speech-dominant. The LTL and RTL patients were comparable in terms of general demographic, epilepsy, and intellectual/attention factors. Multivariate analyses revealed a significant crossover interaction (p < .05), indicating that the WMS-III is sensitive to modality-specific memory performance associated with unilateral temporal lobectomy.

  9. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....
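The sandwich idea behind the estimator can be stated compactly. For a monotone function f and dominating chains with L_t ⪯ X_t ⪯ U_t pathwise, the ergodic averages of the dominating processes bracket the ergodic average of the target chain (a schematic rendering under those assumptions, not the paper's exact notation):

```latex
\frac{1}{n}\sum_{t=1}^{n} f(L_t)
\;\le\;
\frac{1}{n}\sum_{t=1}^{n} f(X_t)
\;\le\;
\frac{1}{n}\sum_{t=1}^{n} f(U_t),
```

so the limits of the two outer averages provide computable lower and upper bounds on the stationary mean \(\mathbb{E}_\pi f(X)\) without ever simulating the stationary chain \(X_t\) itself.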

  10. 75 FR 26701 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-05-12

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... formulas and funding requirement estimates for the Interstate Telecommunications Relay Services (TRS) Fund... costs of providing VRS. Telecommunications Relay Services and Speech-to-Speech Services for Individuals...

  11. An analysis of the masking of speech by competing speech using self-report data (L)

    OpenAIRE

    Agus, Trevor R.; Akeroyd, Michael A.; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the “Speech, Spatial, and Qualities of Hearing” scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol.43, 85–99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively ...

  12. PERSON DEIXIS IN USA PRESIDENTIAL CAMPAIGN SPEECHES

    OpenAIRE

    Nanda Anggarani Putri; Eri Kurniawan

    2015-01-01

    This study investigates the use of person deixis in presidential campaign speeches. This study is important because the use of person deixis in political speeches has been proved by many studies to give significant effects to the audience. The study largely employs a descriptive qualitative method. However, it also employs a simple quantitative method in calculating the number of personal pronouns used in the speeches and their percentages. The data for the study were collected from the trans...

  13. CAR2 - Czech Database of Car Speech

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1999-12-01

    Full Text Available This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The created database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was done.

  14. CAR2 - Czech Database of Car Speech

    OpenAIRE

    Pollak, P.; Vopicka, J.; Hanzl, V.; Sovka, Pavel

    1999-01-01

    This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The created database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was done.

  15. Semi-Automated Speech Transcription System Study

    Science.gov (United States)

    1994-08-31

    System) program and was trained on the Wall Street Journal task (described in [recog1], [recog2] and [recog3]). This speech recognizer is a time...quality of Wall Street Journal data (very high) and SWITCHBOARD data (poor), but also because the type of speech in broadcast data is also somewhere...between extremes of read text (the Wall Street Journal data) and spontaneous speech (SWITCHBOARD data). Dragon Systems' SWITCHBOARD recognizer obtained a

  16. Sparsity in Linear Predictive Coding of Speech

    OpenAIRE

    Giacobello, Daniele

    2010-01-01

    This thesis deals with developing improved techniques for speech coding based on recent developments in sparse signal representation. In particular, this work is motivated by the need to address some of the limitations of the well-known linear prediction (LP) model currently applied in many modern speech coders. In the first part of the thesis, we provide an overview of Sparse Linear Prediction, a set of speech processing tools created by introducing sparsity constraints into the LP fr...

  17. Acquisition of speech rhythm in first language.

    Science.gov (United States)

    Polyanskaya, Leona; Ordin, Mikhail

    2015-09-01

    Analysis of English rhythm in speech produced by children and adults revealed that speech rhythm becomes increasingly more stress-timed as language acquisition progresses. Children reach the adult-like target by 11 to 12 years. The employed speech elicitation paradigm ensured that the sentences produced by adults and children at different ages were comparable in terms of lexical content, segmental composition, and phonotactic complexity. Detected differences between child and adult rhythm and between rhythm in child speech at various ages cannot be attributed to acquisition of phonotactic language features or vocabulary, and indicate the development of language-specific phonetic timing in the course of acquisition.
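"Increasingly stress-timed" is conventionally quantified with durational variability metrics such as the normalized Pairwise Variability Index (nPVI) over successive vocalic intervals. The abstract does not name its metric, so the choice here is an assumption; a minimal sketch:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over successive interval
    durations; higher values are associated with more stress-timed rhythm."""
    pairs = list(zip(durations, durations[1:]))
    return 100 / len(pairs) * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs)

# Perfectly even (syllable-timed-like) vs. alternating (stress-timed-like)
# vocalic durations in seconds.
even = npvi([0.1, 0.1, 0.1, 0.1])
alternating = npvi([0.1, 0.3, 0.1, 0.3])
```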

  18. From Persuasive to Authoritative Speech Genres

    DEFF Research Database (Denmark)

    Nørreklit, Hanne; Scapens, Robert

    2014-01-01

    by a professional editor in the USA before it was published. Design/methodology/approach: The paper analyses the "persuasive" speech genre of the original version and the "authoritative" speech genre of the published version. Findings: Although it was initially thought that the differences between the two versions......, the authors have focused on just one instance in which a text written by academics was re-written for publication in a practitioner journal. Originality/value: The paper contrasts the rationalism of the persuasive speech genre and the pragmatism of the authoritative speech genre. It cautions academic...

  19. Preoperative mapping of speech-eloquent areas with functional magnetic resonance imaging (fMRI): comparison of different task designs

    International Nuclear Information System (INIS)

    Prothmann, S.; Zimmer, C.; Puccini, S.; Dalitz, B.; Kuehn, A.; Kahn, T.; Roedel, L.

    2005-01-01

    Purpose: Functional magnetic resonance imaging (fMRI) is a well-established, non-invasive method for pre-operative mapping of speech-eloquent areas. This investigation tests three simple paradigms to evaluate speech lateralisation and visualisation of speech-eloquent areas. Materials and Methods: 14 healthy volunteers and 16 brain tumour patients were given three tasks: to enumerate the months in the correct order (EM), to generate verbs fitting a given noun (GV), and to generate words fitting a given alphabetic character (GW). We used a blocked design with 80 measurements, consisting of 4 intervals of speech activation alternating with relaxation periods. The data were analysed on the basis of the general linear model using BrainVoyager®. The activated clusters in the inferior frontal (Broca) and the posterior temporal (Wernicke) cortex were analysed and the laterality indices calculated. Results: In both groups the paradigms GV and GW activated Broca's area very robustly. Visualisation of Wernicke's area was best achieved with the paradigm GV. The paradigm EM did not reliably stimulate either the frontal or the temporal cortex. Frontal lateralisation was best determined by GW and GV, temporal lateralisation by GV. Conclusion: The paradigms GV and GW visualise two essential aspects of speech processing: semantic word processing and word production. In a clinical setting with brain tumour patients, both GV and GW can be used to visualise frontal and temporal speech areas and to determine speech dominance. (orig.)
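Laterality indices of the kind calculated here are conventionally defined from activation in homologous left and right regions of interest; a minimal sketch with invented voxel counts (the study's exact thresholding is not given in the abstract):

```python
def laterality_index(left, right):
    """LI = (L - R) / (L + R): +1 is fully left-lateralized, -1 fully right.
    Inputs are, e.g., suprathreshold voxel counts in homologous ROIs."""
    return (left - right) / (left + right)

# Hypothetical voxel counts in left/right inferior frontal (Broca) ROIs.
li = laterality_index(150, 50)
```

Values well above zero are conventionally read as left-hemisphere speech dominance.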

  20. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged (50-60 years) adults, with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. By contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females' AV perceptual strategy towards more visually dominated responses.
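"AV benefit" in this literature is often normalized against the headroom left by auditory-only performance (a Sumby-and-Pollack-style measure). The study's exact formula is not given in the abstract, so the definition below is an assumption:

```python
def av_benefit(av_prop_correct, a_prop_correct):
    """Visual gain relative to the available headroom: (AV - A) / (1 - A).
    Both arguments are proportions correct in [0, 1), audio-visual and
    auditory-only respectively."""
    return (av_prop_correct - a_prop_correct) / (1.0 - a_prop_correct)

# A listener at 60% auditory-only and 80% audio-visual recovers half of
# the available headroom.
benefit = av_benefit(0.80, 0.60)
```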

  1. [Clinical study of post-stroke speech apraxia treated with scalp electric acupuncture under anatomic orientation and rehabilitation training].

    Science.gov (United States)

    Jiang, Yujuan; Yang, Yuxia; Xiang, Rong; Chang, E; Zhang, Yanchun; Zuo, Bingfang; Zhang, Qianwei

    2015-07-01

    To compare the clinical efficacy on post-stroke speech apraxia of scalp electric acupuncture (EA) under anatomic orientation combined with rehabilitation training versus simple rehabilitation training. Sixty patients with post-stroke speech apraxia were randomized into an observation group and a control group, 30 cases in each. In the observation group, under anatomic orientation, scalp EA was applied to the Broca area of the dominant (left) cerebral hemisphere, combined with speech rehabilitation training. In the control group, speech rehabilitation training was used alone. The treatment lasted 4 weeks in total. The speech movement program module in the psychological language assessment and treatment system of Chinese aphasia was used for the efficacy assessment. The scores of counting, singing scale, repeating phonetic alphabet, repeating monosyllables and repeating disyllables were observed in the two groups, assessed on the day of grouping and after 4 weeks of treatment. After 4 weeks of treatment, the scores of counting, singing scale, repeating phonetic alphabet, repeating monosyllables and repeating disyllables were all improved compared with those before treatment in both groups (all P < 0.05). Scalp EA under anatomic orientation combined with speech rehabilitation training obviously improves speech apraxia in stroke patients, relieving the speech disorder; the efficacy is better than that of rehabilitation training alone.

  2. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  3. Speech and non-speech audio-visual illusions: a developmental study.

    Directory of Open Access Journals (Sweden)

    Corinne Tremblay

    Full Text Available It is well known that simultaneous presentation of incongruent audio and visual stimuli can lead to illusory percepts. Recent data suggest that distinct processes underlie non-specific intersensory speech as opposed to non-speech perception. However, the development of both speech and non-speech intersensory perception across childhood and adolescence remains poorly defined. Thirty-eight observers aged 5 to 19 were tested on the McGurk effect (an audio-visual illusion involving speech, the Illusory Flash effect and the Fusion effect (two audio-visual illusions not involving speech to investigate the development of audio-visual interactions and contrast speech vs. non-speech developmental patterns. Whereas the strength of audio-visual speech illusions varied as a direct function of maturational level, performance on non-speech illusory tasks appeared to be homogeneous across all ages. These data support the existence of independent maturational processes underlying speech and non-speech audio-visual illusory effects.

  4. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for the speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transcribes human speech into text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies, as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as our depth of knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.
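At its simplest, real-time feedback on a patient's mistakes reduces to aligning the recognized phoneme sequence against the target. A hedged sketch using Levenshtein distance; the scoring scheme is illustrative, not any specific system's:

```python
def edit_distance(target, recognized):
    """Levenshtein distance between two sequences via dynamic programming."""
    prev = list(range(len(recognized) + 1))
    for i, t in enumerate(target, start=1):
        cur = [i]
        for j, r in enumerate(recognized, start=1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (t != r)))  # substitution (0 if match)
        prev = cur
    return prev[-1]

def production_accuracy(target_phones, recognized_phones):
    """Crude per-utterance score: 1 minus the normalized edit distance."""
    dist = edit_distance(target_phones, recognized_phones)
    return 1 - dist / max(len(target_phones), 1)

# Hypothetical target vs. ASR-recognized phoneme strings for one word.
score = production_accuracy(list("bat"), list("pat"))
```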

  5. Speech and Language Skills of Parents of Children with Speech Sound Disorders

    Science.gov (United States)

    Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy J.; Miscimarra, Lara; Iyengar, Sudha K.; Taylor, H. Gerry

    2007-01-01

    Purpose: This study compared parents with histories of speech sound disorders (SSD) to parents without known histories on measures of speech sound production, phonological processing, language, reading, and spelling. Familial aggregation for speech and language disorders was also examined. Method: The participants were 147 parents of children with…

  6. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher-level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets) was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
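
The isochronous retiming manipulation described above can be sketched as follows, under the assumption (ours, not necessarily the authors' exact procedure) that anchor points are mapped onto a grid with the same start time and mean rate; actual stimulus generation would then time-stretch the audio between consecutive anchors:

```python
def isochronous_grid(anchors):
    """Map quasi-periodic anchor times (in seconds) onto an isochronous
    grid with the same start time and mean inter-anchor rate."""
    n = len(anchors)
    if n < 2:
        return list(anchors)
    period = (anchors[-1] - anchors[0]) / (n - 1)  # mean inter-anchor interval
    return [anchors[0] + k * period for k in range(n)]

def stretch_factors(anchors, grid):
    """Per-interval time-stretch ratios needed to move each anchor onto the grid."""
    return [(grid[k + 1] - grid[k]) / (anchors[k + 1] - anchors[k])
            for k in range(len(anchors) - 1)]
```

For example, anchors at 0.0, 0.35, 0.85 and 1.2 s (a mean rate of 2.5 Hz, the paper's slow time scale) map onto the grid 0.0, 0.4, 0.8, 1.2 s, and the middle interval must be compressed by a factor of 0.8.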

  7. E-learning-based speech therapy: a web application for speech training.

    NARCIS (Netherlands)

    Beijer, L.J.; Rietveld, T.C.; Beers, M.M. van; Slangen, R.M.; Heuvel, H. van den; Swart, B.J.M. de; Geurts, A.C.H.

    2010-01-01

    In The Netherlands, a web application for speech training, E-learning-based speech therapy (EST), has been developed for patients with dysarthria, a speech disorder resulting from acquired neurological impairments such as stroke or Parkinson's disease. In this report, the EST infrastructure

  8. Developmental apraxia of speech in children. Quantitative assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  9. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    Full Text Available The theory of speech acts, which clarifies what people do when they speak, is not about individual words or sentences that form the basic elements of human communication, but rather about particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking. Many things can be done by speaking. Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches from El-Sadat and El-Sisi, belonging to different periods, were analyzed to find out whether there were differences within this genre in the same culture or not. The study showed that there was a very small difference between these two speeches, which were analyzed according to Searle’s theory of speech acts. In El-Sadat’s speech, commissives occupied the first place, whereas in El-Sisi’s speech, assertives occupied the first place. Within the speeches of one culture, we can find that the differences depended on the circumstances that surrounded the elections of the Presidents at the time. Speech acts were tools they used to convey what they wanted and to obtain support from their audiences.

  10. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  11. A neural mechanism for recognizing speech spoken by different speakers

    NARCIS (Netherlands)

    Kreitewolf, Jens; Gaudrain, Etienne; von Kriegstein, Katharina

    2014-01-01

    Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One

  12. Genetics Home Reference: FOXP2-related speech and language disorder

    Science.gov (United States)

    ... individuals have a speech problem known as childhood apraxia of speech, which makes it difficult to produce sequences of ... the lips, mouth, and tongue. Children with childhood apraxia of speech typically say their first words later than other ...

  13. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    interferers [2]. However, the model fails in the case of phase jitter distortion, in which the spectral structure of speech is affected but the temporal envelope is maintained. This suggests that an across audio-frequency mechanism is required to account for this distortion. It is demonstrated that a measure … of the across audio-frequency variance at the output of the modulation-frequency selective process in the model is sufficient to account for the phase jitter distortion. Thus, a joint spectro-temporal modulation analysis, as proposed in [3], does not seem to be required. The results are consistent with concepts …

  14. Indonesian Automatic Speech Recognition For Command Speech Controller Multimedia Player

    Directory of Open Access Journals (Sweden)

    Vivien Arief Wardhany

    2014-12-01

    Full Text Available The purpose of this multimedia device development is control through voice. Nowadays, voice commands can be recognized only in English. To overcome this, recognition was implemented using an Indonesian language model, acoustic model, and dictionary. The automatic speech recognizer was built using the CMU Sphinx engine, with the English language database adapted to Indonesian, and XBMC was used as the multimedia player. The experiment used 10 volunteers (5 male and 5 female) and test items based on 7 commands. Ten samples were taken for each command, with each volunteer performing 10 test utterances per command and trying all 7 commands provided. Based on the classification table, the word “kanan” was recognized correctly most often (83%), while “pilih” was recognized least often. The word misclassified most often was “kembali” (67%), while “kanan” was misclassified least often. Among the male speakers, several commands, such as “kembali”, “utama”, “atas”, and “bawah”, had low recognition rates. In particular, “kembali” could not be recognized at all in the female voices and reached only a 4% recognition rate in the male voices, because the command has no phonetically similar English word, so the system failed to recognize it. The command “pilih” reached an 80% recognition rate with the female voices but only 4% with the male voices. This is mostly due to the different voice characteristics of adult males and females: male voices have lower fundamental frequencies (85 to 180 Hz) than female voices (165 to 255 Hz). The results showed that recognition rates differed across speakers because of differences in tone, pronunciation, and speaking rate. Further work is needed to improve the accuracy of the Indonesian automatic speech recognition system.
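
The per-command recognition rates reported above could be tallied as in this sketch (the trial data here are hypothetical and only mirror the reported percentages for “kanan” and “pilih”):

```python
from collections import defaultdict

def recognition_rates(trials):
    """trials: iterable of (command, recognized_ok) pairs.
    Returns the recognition rate per command as a percentage."""
    hits, totals = defaultdict(int), defaultdict(int)
    for command, ok in trials:
        totals[command] += 1
        hits[command] += bool(ok)
    return {c: 100.0 * hits[c] / totals[c] for c in totals}

# Hypothetical trial log reproducing the reported 83% / 40%-style figures
trials = ([("kanan", True)] * 83 + [("kanan", False)] * 17
          + [("pilih", True)] * 40 + [("pilih", False)] * 60)
rates = recognition_rates(trials)
```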

  15. Co-Roman domination in graphs

    Indian Academy of Sciences (India)

    Department of Computer Science, Liverpool Hope University, Liverpool, UK. … such as Roman domination, weak Roman domination and secure domination, which have been investigated by … A foolproof secure dominating function (FSDF) is a safe function f = (V0, V1) such that for each u ∈ V0 …

  16. Spontaneous speech: Quantifying daily communication in Spanish-speaking individuals with aphasia.

    Directory of Open Access Journals (Sweden)

    Silvia Martínez-Ferreiro

    2015-04-01

    Full Text Available Observable disruptions in spontaneous speech are among the most prominent characteristics of aphasia. The potential of language production analyses in discourse contexts to reveal subtle language deficits has been progressively exploited, becoming essential for diagnosing language disorders (Vermeulen et al., 1989; Goodglass et al., 2000; Prins and Bastiaanse, 2004; Jaecks et al., 2012). Based on previous studies, short and/or fragmentary utterances, and consequently a shorter MLU, are expected in the speech of individuals with aphasia, together with a large proportion of incomplete sentences and a limited use of embeddings. Fewer verbs with lower diversity (lower type/token ratio) and fewer internal arguments are also predicted, as well as a low proportion of inflected verbs (Bastiaanse and Jonkers, 1998). However, this profile comes mainly from the study of individuals with prototypical aphasia types, mainly Broca’s aphasia, raising the question of how accurately spontaneous speech analysis can pinpoint deficits in individuals with less clear diagnoses. To address this question, we present the results of a spontaneous speech analysis of 25 Spanish-speaking subjects: 10 individuals with aphasia (IWAs; 7 male and 3 female; mean age: 64.2), in neurologically stable condition (> 1 year post-onset), who suffered from a single CVA in the left hemisphere (Rosell, 2005), and 15 non-brain-damaged matched speakers (NBDs). In the aphasia group, 7 of the participants were diagnosed as non-fluent (1 motor aphasia, 4 transcortical motor aphasia or motor aphasia with signs of transcorticality, 2 mixed aphasia with motor predominance), and 3 of them as fluent (mixed aphasia with anomic predominance). The protocol for data collection included semi-standardized interviews, in which participants were asked 3 questions evoking past, present, and future events (last job, holidays, and hobbies). 300 words per participant were analyzed. The MLU over the total 300 words revealed a decreased…
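
Two of the spontaneous-speech measures used in this record, MLU and type/token ratio, can be computed as in this simplified sketch (word-level tokenization only; the study's actual transcription and coding protocol may differ):

```python
def mlu_words(utterances):
    """Mean length of utterance in words. Morpheme-level MLU would need a
    morphological analyzer; word-level MLU is a common approximation."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

def type_token_ratio(utterances):
    """Lexical diversity: distinct word forms divided by total word tokens."""
    tokens = [w.lower().strip(".,;!?") for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)
```

A lower MLU and a lower type/token ratio over a fixed 300-word sample would both point toward the aphasic profile described above.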

  17. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

    OpenAIRE

    劉, 麗清

    2015-01-01

    Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well suited to speech analysis due to its ability to approximately model the speech production process. Hence, LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, and characteristic parameter extraction (vocal tract resonance frequencies and the fundamental frequency, or pitch). However, the performance of the co...
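
The core of LP analysis is solving for predictor coefficients that minimize the error of predicting each sample from a linear combination of its predecessors. A minimal sketch of the standard autocorrelation method with the Levinson-Durbin recursion (a generic textbook formulation, not tied to the methods proposed in this thesis):

```python
def autocorr(x, maxlag):
    """Autocorrelation of x for lags 0..maxlag."""
    n = len(x)
    return [sum(x[t] * x[t + k] for t in range(n - k)) for k in range(maxlag + 1)]

def lpc(x, order):
    """Linear prediction coefficients a[0..order] (with a[0] = 1) via the
    autocorrelation method and Levinson-Durbin recursion, so that
    e[t] = sum_j a[j] * x[t - j] is the prediction residual."""
    r = autocorr(x, order)
    a, err = [1.0], r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                                   # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= 1 - k * k                                 # updated prediction error
    return a
```

For a second-order autoregressive signal x[t] = 0.75*x[t-1] - 0.5*x[t-2] + e[t], the recursion recovers coefficients close to [1, -0.75, 0.5].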

  18. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    OpenAIRE

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2014-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it...

  19. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  20. Perceptual centres in speech - an acoustic analysis

    Science.gov (United States)

    Scott, Sophie Kerttu

    Perceptual centres, or P-centres, represent the perceptual moments of occurrence of acoustic signals - the 'beat' of a sound. P-centres underlie the perception and production of rhythm in perceptually regular speech sequences. P-centres have been modelled in both speech and non-speech (music) domains. The three aims of this thesis were: (a) to test current P-centre models to determine which best accounted for the experimental data; (b) to identify a candidate parameter onto which to map P-centres (a local approach), as opposed to the previous global models, which rely upon the whole signal to determine the P-centre; and (c) to develop a model of P-centre location which could be applied to speech and non-speech signals. The first aim was investigated in a series of experiments examining a) speech from different speakers, to determine whether different models could account for variation between speakers; b) whether rendering the amplitude-time plot of a speech signal affects the P-centre of the signal; and c) whether increasing the amplitude at the offset of a speech signal alters P-centres in the production and perception of speech. The second aim was carried out by a) manipulating the rise time of different speech signals to determine whether the P-centre was affected, and whether the type of speech sound ramped affected the P-centre shift; b) manipulating the rise time and decay time of a synthetic vowel to determine whether the onset alteration had more effect on the P-centre than the offset manipulation; and c) determining whether the duration of a vowel affected the P-centre when other attributes (amplitude, spectral content) were held constant. The third aim - modelling P-centres - was based on these results. The Frequency-dependent Amplitude Increase Model of P-centre location (FAIM) was developed using a modelling protocol, the APU GammaTone Filterbank, and the speech from different speakers. The P-centres of the stimuli corpus were highly predicted by attributes of
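
The rise-time manipulation in the second set of experiments can be illustrated with a toy local measure (our illustration, not FAIM itself, which operates on gammatone filterbank outputs): estimate a P-centre-like landmark as the time at which the amplitude envelope first reaches a fixed fraction of its peak, and observe that lengthening the rise time shifts the landmark later.

```python
def pcentre_estimate(envelope, fs, frac=0.5):
    """Toy landmark: time (s) at which the envelope first reaches `frac`
    of its peak value. Illustrative only; not the FAIM model."""
    peak = max(envelope)
    for i, v in enumerate(envelope):
        if v >= frac * peak:
            return i / fs
    return None

def ramped_envelope(rise_ms, total_ms, fs):
    """Amplitude envelope with a linear rise to 1.0, then a flat sustain."""
    rise_n = int(fs * rise_ms / 1000)
    total_n = int(fs * total_ms / 1000)
    return [min(1.0, i / rise_n) for i in range(total_n)]
```

With a 1 kHz envelope sampling rate, a 20 ms rise puts the landmark at 10 ms, while an 80 ms rise moves it to 40 ms, mirroring the qualitative effect of onset ramping on P-centre location.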