WorldWideScience

Sample records for paresthesias speech disturbance

  1. Paresthesia and sensory disturbances associated with 2009 pandemic vaccine receipt: Clinical features and risk factors.

    Science.gov (United States)

    De Serres, Gaston; Rouleau, Isabelle; Skowronski, Danuta M; Ouakki, Manale; Lacroix, Kevin; Bédard, Fernand; Toth, Eveline; Landry, Monique; Dupré, Nicolas

    2015-08-26

    Paresthesia was the third most common adverse event following immunization (AEFI) with the 2009 monovalent AS03-adjuvanted A(H1N1)pdm09 vaccine in Quebec, Canada, and was also frequently reported in Europe. This study assessed clinical features and risk factors associated with this unexpected AEFI. Reports to the passive surveillance system were summarized. A case-control study was conducted to assess risk factors, and additional investigations were undertaken among cases with symptoms persisting ≥12 months. There were 328 reports of paresthesia affecting the vaccinated arm (58%), but also the face (45%), lower limbs (40%) and back/thorax (23%), with numbness but also muscle weakness (61%), motor impairment (61%), generalized myalgia (37%), and visual (14%) and/or speech effects (15%). The reporting rate was highest in women of reproductive age, peaking at 30-39 years old (28/100,000 doses administered) and exceeding that of men of the same age (7/100,000 doses) by 4-fold. Median time to onset was 2 hours. Symptoms subsided within one week in 37% but lasted ≥6 months in 26%. No consistent or objective neurological findings were identified. Risk was increased with a history of allergy, respiratory illness on the day of vaccination, depressive symptoms, and a family history of pulmonary disease, but decreased with physical activity on the day of vaccination and regular weekly alcohol consumption. Paresthesia following 2009 pandemic vaccine receipt lasted several weeks and included other motor-sensory disturbances in an important subset of patients. Although it does not correspond with known neurological disease, and causality remains uncertain, further investigation is warranted to understand the nature and frequency of paresthesia as a possible AEFI with influenza vaccines. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
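    The per-100,000-dose reporting rates above reduce to simple surveillance arithmetic. The sketch below reproduces that calculation; the report and dose counts are hypothetical placeholders chosen only to yield the rates the abstract cites (28 and 7 per 100,000), not figures from the study.

    ```python
    # Passive-surveillance reporting rate per 100,000 doses administered.
    # All counts below are illustrative, not data from the Quebec study.

    def reporting_rate_per_100k(reports: int, doses_administered: int) -> float:
        """Reports per 100,000 doses administered."""
        return reports / doses_administered * 100_000

    # Hypothetical strata: women and men aged 30-39
    women_rate = reporting_rate_per_100k(reports=84, doses_administered=300_000)  # 28.0
    men_rate = reporting_rate_per_100k(reports=21, doses_administered=300_000)    # 7.0

    print(f"women 30-39: {women_rate:.0f}/100,000 doses")
    print(f"men 30-39:   {men_rate:.0f}/100,000 doses")
    print(f"rate ratio:  {women_rate / men_rate:.1f}-fold")  # 4.0-fold
    ```

    Note that passive-surveillance rates reflect reporting behaviour as well as true incidence, which is why the abstract frames them as reporting rates rather than risk estimates.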

  2. Mental nerve paresthesia secondary to initiation of endodontic therapy: a case report

    Science.gov (United States)

    Alam, Sharique; Zia, Afaf; Khan, Masood Hasan; Kumar, Ashok

    2014-01-01

    Whenever endodontic therapy is performed on mandibular posterior teeth, damage to the inferior alveolar nerve or any of its branches is possible. Acute periapical infection in mandibular posterior teeth may also sometimes disturb the normal functioning of the inferior alveolar nerve. The most common clinical manifestation of these insults is the paresthesia of the inferior alveolar nerve or mental nerve paresthesia. Paresthesia usually manifests as burning, prickling, tingling, numbness, itching or any deviation from normal sensation. Altered sensation and pain in the involved areas may interfere with speaking, eating, drinking, shaving, tooth brushing and other events of social interaction which will have a disturbing impact on the patient. Paresthesia can be short term, long term or even permanent. The duration of the paresthesia depends upon the extent of the nerve damage or persistence of the etiology. Permanent paresthesia is the result of nerve trunk laceration or actual total nerve damage. Paresthesia must be treated as soon as diagnosed to have better treatment outcomes. The present paper describes a case of mental nerve paresthesia arising after the start of the endodontic therapy in left mandibular first molar which was managed successfully by conservative treatment. PMID:25110646

  3. Paresthesia

    Science.gov (United States)

    Paresthesia Information Page. What research is being done? The ... spinal cord, and peripheral nerves that can cause paresthesia. The goals of this research are to increase ...

  4. Mental nerve paresthesia secondary to initiation of endodontic therapy: a case report

    OpenAIRE

    Andrabi, Syed Mukhtar-Un-Nisar; Alam, Sharique; Zia, Afaf; Khan, Masood Hasan; Kumar, Ashok

    2014-01-01

    Whenever endodontic therapy is performed on mandibular posterior teeth, damage to the inferior alveolar nerve or any of its branches is possible. Acute periapical infection in mandibular posterior teeth may also sometimes disturb the normal functioning of the inferior alveolar nerve. The most common clinical manifestation of these insults is the paresthesia of the inferior alveolar nerve or mental nerve paresthesia. Paresthesia usually manifests as burning, prickling, tingling, numbness, itch...

  5. Speech and Language Disturbances in Neurology Practice

    Directory of Open Access Journals (Sweden)

    Oğuz Tanrıdağ

    2009-12-01

    Despite the well-known facts discerned from interesting cases of speech and language disturbances over thousands of years, and despite the scientific background and nearly 150 years of extensive discussion, this field has been considered one of the least important subjects in the neurological sciences. In this review, we first analyze the possible causes of this “stepchild” attitude towards the subject, and we then summarize the practical aspects of speech and language disturbances. Our underlying expectation with this review is to explain the facts concerning these disturbances, which might offer us opportunities to better understand the nervous system and the affected patients.

  6. Paresthesia-Independence: An Assessment of Technical Factors Related to 10 kHz Paresthesia-Free Spinal Cord Stimulation.

    Science.gov (United States)

    De Carolis, Giuliano; Paroli, Mery; Tollapi, Lara; Doust, Matthew W; Burgher, Abram H; Yu, Cong; Yang, Thomas; Morgan, Donna M; Amirdelfan, Kasra; Kapural, Leonardo; Sitzman, B Todd; Bundschu, Richard; Vallejo, Ricardo; Benyamin, Ramsin M; Yearwood, Thomas L; Gliner, Bradford E; Powell, Ashley A; Bradley, Kerry

    2017-05-01

    Spinal cord stimulation (SCS) has been successfully used to treat chronic intractable pain for over 40 years. Successful clinical application of SCS is presumed to be generally dependent on maximizing paresthesia-pain overlap; critical to achieving this is positioning of the stimulation field at the physiologic midline. Recently, the necessity of paresthesia for achieving effective relief in SCS has been challenged by the introduction of 10 kHz paresthesia-free stimulation. In a large, prospective, randomized controlled pivotal trial, HF10 therapy was demonstrated to be statistically and clinically superior to paresthesia-based SCS in the treatment of severe chronic low back and leg pain. HF10 therapy, unlike traditional paresthesia-based SCS, requires no paresthesia to be experienced by the patient, nor does it require paresthesia mapping at any point during lead implant or post-operative programming. To determine if pain relief was related to technical factors of paresthesia, we measured and analyzed the paresthesia responses of patients successfully using HF10 therapy. Prospective, multicenter, non-randomized, non-controlled interventional study. Outpatient pain clinic at 10 centers across the US and Italy. Patients with both back and leg pain already implanted with an HF10 therapy device for up to 24 months were included in this multicenter study. Patients provided pain scores prior to and after using HF10 therapy. Each patient's most efficacious HF10 therapy stimulation program was temporarily modified to a low frequency (LF; 60 Hz), wide pulse width (~470 µs), paresthesia-generating program. On a human body diagram, patients drew the locations of their chronic intractable pain and, with the modified program activated, all regions where they experienced LF paresthesia. Paresthesia and pain drawings were then analyzed to estimate the correlation of pain relief outcomes to overlap of pain by paresthesia, and the mediolateral distribution of paresthesia (as a

  7. The role of intraoperative positioning of the inferior alveolar nerve on postoperative paresthesia after bilateral sagittal split osteotomy of the mandible: prospective clinical study.

    Science.gov (United States)

    Hanzelka, T; Foltán, R; Pavlíková, G; Horká, E; Sedý, J

    2011-09-01

    Bilateral sagittal split osteotomy (BSSO) aims to correct congenital or acquired mandibular abnormalities. Temporary or permanent neurosensory disturbance is the most frequent complication of BSSO. To evaluate the influence of IAN handling during osteotomy, the authors undertook a prospective study in 290 patients who underwent BSSO. The occurrence and duration of paresthesia was evaluated 4 weeks, 3 months, 6 months, and 1 year after surgery. Paresthesia developed immediately after surgery in almost half of the patients. Most cases of paresthesia resolved within 1 year after surgery. A significantly higher prevalence of paresthesia was observed on the left side. The authors found a correlation in the type of IAN position between the left and right sides. The type of split (and IAN exposure) did not have a significant effect on the occurrence or duration of neurosensory disturbance of the IAN. The authors did not find a correlation between the occurrence and duration of paresthesia and the direction of BSSO. Mandibular hypoplasia or mandibular progenia did not represent a predisposition for the development of paresthesia. In the development of IAN paresthesia, the type of IAN exposure and the split is less important than the side on which the split is carried out. Copyright © 2011 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Comprehensive visualization of paresthesia in breast cancer survivors.

    Science.gov (United States)

    Jud, Sebastian M; Hatko, Reinhard; Maihöfner, Christian; Bani, Mayada R; Schrauder, Michael G; Lux, Michael P; Beckmann, Matthias W; Bani, Gassan; Eder, Irina; Fasching, Peter A; Loehberg, Christian R; Rauh, Claudia; Hein, Alexander

    2014-07-01

    As breast cancer survivors are benefiting increasingly from advanced forms of therapy, the side effects of locoregional treatment in the adjuvant setting are becoming more and more important. This article presents a new method of assessing the spatial distribution of paresthesia in breast cancer survivors after different locoregional treatments. A structured questionnaire assessing paresthesia, with body pictograms for marking paresthesia areas, was completed by 343 breast cancer survivors. The image information was digitized, generating gray-scale summation images with values from 0, indicating black (100% of the patients had paresthesia), to 255, indicating white (none had paresthesia). The resulting map visualization showed the locations of paresthesia on body pictograms. The group included patients who had undergone breast-conserving surgery (BCS) and mastectomy, and also patients who had received percutaneous and interstitial radiation. A total of 56.5% of the patients stated that they had paresthesia. The paresthesia areas were distributed within the range suggested by clinical experience. Most patients stated that they had paresthesia in the upper outer quadrant and axilla. Patients who had undergone mastectomy or percutaneous radiotherapy appeared to have more paresthesia on some areas of the body surface. Patients who had undergone mastectomy indicated larger areas of paresthesia than those with BCS (4,066 pixels [px] vs. 2,275 px). Radiotherapy did not appear to influence the spatial distribution of paresthesia. Paresthesia is a common symptom after breast cancer treatment. This paper describes a new method of assessing this side effect to improve and individualize treatment for it in the future.
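    The gray-scale summation encoding described above maps, per pixel, the fraction of patients who marked paresthesia at that location onto the 0 (black, all patients) to 255 (white, none) range. A minimal sketch of that mapping, assuming a simple linear scale (the study's actual image pipeline is not detailed in this summary):

    ```python
    # Map the fraction of patients marking a body-pictogram pixel onto a
    # 0-255 gray level: 0 = every patient marked it, 255 = no patient did.
    # Linear scaling is an assumption for illustration.

    def gray_level(patients_marking: int, total_patients: int) -> int:
        """Gray value for one pixel of the summation image."""
        fraction = patients_marking / total_patients
        return round(255 * (1 - fraction))

    print(gray_level(343, 343))  # 0   -> all 343 patients marked the pixel (black)
    print(gray_level(0, 343))    # 255 -> no patient marked it (white)
    print(gray_level(172, 343))  # 127 -> roughly half the cohort
    ```

    Summing binary per-patient masks and rescaling this way is what makes frequently marked regions (e.g., the upper outer quadrant and axilla) appear darkest in the resulting map.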

  9. Mountaineering-induced bilateral plantar paresthesia.

    Science.gov (United States)

    Henderson, Kyle K; Parker, Justine; Heinking, Kurt P

    2014-07-01

    Flat feet (pes planus) have been implicated in multiple musculoskeletal complaints, which are often exacerbated by lack of appropriate arch support or intense exercise. To investigate the efficacy of osteopathic manipulative treatment (OMT) on a patient (K.K.H.) with mountaineering-induced bilateral plantar paresthesia and to assess the association of pes planus with paresthesia in members of the mountaineering expedition party that accompanied the patient. A patient history and physical examination of the musculoskeletal system were performed. The hindfoot, midfoot, forefoot, big toe, and distal toes were evaluated for neurologic function, specifically pin, vibration, 10-g weight sensitivity, and 2-point discrimination during the 4-month treatment period. To determine if OMT could augment recovery, the patient volunteered to use the contralateral leg as a control, with no OMT performed on the sacrum or lower back. To determine if pes planus was associated with mountaineering-induced paresthesia, a sit-to-stand navicular drop test was performed on members of the expedition party. Osteopathic manipulative treatment improved fibular head motion and muscular flexibility and released fascial restrictions of the soleus, hamstring, popliteus, and gastrocnemius. The patient's perception of stiffness, pain, and overall well-being improved with OMT. However, OMT did not shorten the duration of paresthesia. Of the 9 expedition members, 2 experienced paresthesia. Average navicular drop on standing was 5.1 mm for participants with no paresthesia vs 8.9 mm for participants with paresthesia (t test, P …). Early diagnosis of pes planus and treatment with orthotics (which may prevent neuropathies), or, less ideally, OMT after extreme exercise, should be sought to relieve tension and discomfort. © 2014 The American Osteopathic Association.

  10. Effect of infrared laser in the prevention and treatment of paresthesia in orthognathic surgery.

    Science.gov (United States)

    Prazeres, Lady Dayane Kalline Travassos; Muniz, Yuri Victor Siqueira; Barros, Keylla Marinho Albuquerque; Gerbi, Marleny Elizabeth Marquez de Martinez; Laureano Filho, José Rodrigues

    2013-05-01

    Orthognathic surgery is the surgical procedure that corrects deformities of the bones in the region of the maxilla and mandible, now a reality in Brazilian dentistry. However, this type of surgery usually involves paresthesia in the postoperative period, a concern for the surgeons who perform it and a source of discomfort for patients. This study aimed to evaluate the effect of infrared laser (830 nm) in the prevention and treatment of paresthesias after orthognathic surgery. Six patients underwent orthognathic surgery: an experimental group of 4 patients and a control group of 2 patients that did not receive laser therapy. The experimental group received laser applications during surgery and in 12 postoperative sessions. Tests for mechanical (deep and shallow) and thermal (cold) sensitivity were performed in the preoperative and postoperative period (during 12 sessions) in the lip and chin areas by the same operator. The paresthesia was classified as 1, strong; 2, moderate; 3, mild; or 4, absent, through the patient's response to stimuli. The results showed that all patients had no disturbance of sensitivity in the preoperative period, but paresthesia was present at various levels in the postoperative period. Both groups showed recovery of deep mechanical sensitivity within a shorter time interval compared with superficial mechanical and thermal sensitivity. However, at the 12th assessment, patients who underwent the laser therapy showed greater reduction in the level of paresthesia or even complete regression of it. The laser, therefore, brought benefits to the treatment of paresthesia, accelerating the return of neurosensorial sensitivity.

  11. Endodontic-related facial paresthesia: systematic review.

    Science.gov (United States)

    Alves, Flávio R; Coutinho, Mariana S; Gonçalves, Lucio S

    2014-01-01

    Paresthesia is a neurosensitivity disorder caused by injury to the neural tissue. It is characterized by a burning or twinging sensation or by partial loss of local sensitivity. Paresthesia related to endodontic treatment can occur because of extravasation of filling material or the intracanal dressing, as a consequence of periapical surgery or because of periapical infection. A literature review of paresthesia in endodontics was undertaken, with a view to identifying and discussing the most commonly affected nerves, the diagnostic process and the treatment options. Among reported cases, the most commonly affected nerves were those passing through the jaw: the inferior alveolar nerve, the mental nerve and the lingual nerve. To diagnose paresthesia, the endodontist must carry out a complete medical history, panoramic and periapical radiography, and (in some cases) computed tomography, as well as mechanoceptive and nociceptive tests. To date, no specific treatment for endodontic-related paresthesia has been described in the literature, since the problem may be related to a variety of causes.

  12. Effect of Acupuncture on Post-implant Paresthesia

    Directory of Open Access Journals (Sweden)

    Crischina Branco Marques Sant’Anna

    2017-04-01

    Paresthesia is defined as an alteration in local sensibility, associated with numbness, tingling, or unpleasant sensations caused by nerve lesions or irritation. It can be temporary or permanent. The treatment protocol for facial paresthesia is primarily based on the use of drugs and implant removal, which may not be completely effective or may require other risk exposure when there is no spontaneous regression. However, other therapeutic modalities such as acupuncture can be used. The aim of this study is to report a case of a patient with paresthesia of the inferior alveolar nerve and pain caused by an implant surgery performed 2 years earlier. The patient received acupuncture treatment during 4 months of weekly sessions. Six points were used: Large Intestine (LI4), Large Intestine (LI11), Stomach (ST36), Liver (LR3), Extra Head and Neck (E-HN-18), and Stomach (ST5). The visual analog scale was used before and after each session for the analysis of paresthesia and pain, together with assessment of the paresthesia by delimitation of the desensitized region of the skin and presented discomfort. Pain remission and reduction in the size of the paresthesia area occurred after four sessions.

  13. Effect of Acupuncture on Post-implant Paresthesia.

    Science.gov (United States)

    Sant'Anna, Crischina Branco Marques; Zuim, Paulo Renato Junqueira; Brandini, Daniela Atili; Guiotti, Aimée Maria; Vieira, Joao Batista; Turcio, Karina Helga Leal

    2017-04-01

    Paresthesia is defined as an alteration in local sensibility, associated with numbness, tingling, or unpleasant sensations caused by nerve lesions or irritation. It can be temporary or permanent. The treatment protocol for facial paresthesia is primarily based on the use of drugs and implant removal, which may not be completely effective or may require other risk exposure when there is no spontaneous regression. However, other therapeutic modalities such as acupuncture can be used. The aim of this study is to report a case of a patient with paresthesia of the inferior alveolar nerve and pain caused by an implant surgery performed 2 years earlier. The patient received acupuncture treatment during 4 months of weekly sessions. Six points were used: Large Intestine (LI4), Large Intestine (LI11), Stomach (ST36), Liver (LR3), Extra Head and Neck (E-HN-18), and Stomach (ST5). The visual analog scale was used before and after each session for the analysis of paresthesia and pain, together with assessment of the paresthesia by delimitation of the desensitized region of the skin and presented discomfort. Pain remission and reduction in the size of the paresthesia area occurred after four sessions. Copyright © 2017 Medical Association of Pharmacopuncture Institute. Published by Elsevier B.V. All rights reserved.

  14. Descriptive study of 192 adults with speech and language disturbances

    Directory of Open Access Journals (Sweden)

    Letícia Lessa Mansur

    CONTEXT: Aphasia is a very disabling condition caused by neurological diseases. In Brazil, we have little data on the profile of aphasics treated in rehabilitation centers. OBJECTIVE: To present a descriptive study of 192 patients, providing a reference sample of speech and language disturbances among Brazilians. DESIGN: Retrospective study. SETTING: Speech Pathology Unit linked to the Neurology Division of the Hospital das Clínicas of the Faculdade de Medicina da Universidade de São Paulo. SAMPLE: All patients (192) referred to our Speech Pathology service from 1995 to 2000. PROCEDURES: We collected data relating to demographic variables, etiology, language evaluation (functional evaluation, Boston Diagnostic Aphasia Examination, Boston Naming and Token Test), and neuroimaging studies. MAIN MEASUREMENTS: The results obtained in language tests and the clinical and neuroimaging data were organized and classified. Seventy aphasics were chosen for constructing a profile. Fourteen subjects with left single-lobe dysfunction were analyzed in detail. Seventeen aphasics were compared with 17 normal subjects, all performing the Token Test. RESULTS: One hundred subjects (52%) were men and 92 (48%) women. Their education varied from 0 to 16 years (average: 6.5; standard deviation: 4.53). We identified the lesion sites in 104 patients: 89% in the left hemisphere and 58% due to stroke. The incidence of aphasia was 70%; dysarthria and apraxia, 6%; functional alterations in communication, 17%; and 7% were normal. Statistically significant differences appeared when comparing the subgroup to controls in the Token Test. CONCLUSIONS: We believe that this sample contributes to a better understanding of neurological patients with speech and language disturbances and may be useful as a reference for health professionals involved in the rehabilitation of such disorders.

  15. Does a paresthesia during spinal needle insertion indicate intrathecal needle placement?

    Science.gov (United States)

    Pong, Ryan P; Gmelch, Benjamin S; Bernards, Christopher M

    2009-01-01

    Paresthesias are relatively common during spinal needle insertion; however, the clinical significance of the paresthesia is unknown. A paresthesia may result from needle-to-nerve contact with a spinal nerve in the epidural space, or, with far lateral needle placement, may result from contact with a spinal nerve within the intervertebral foramen. However, it is also possible, and perhaps more likely, that paresthesias occur when the spinal needle contacts a spinal nerve root within the subarachnoid space. This study was designed to test this latter hypothesis. Patients (n = 104) scheduled for surgery under spinal anesthesia were observed during spinal needle insertion. If a paresthesia occurred, the needle was fixed in place and the stylet removed to observe whether cerebrospinal fluid (CSF) flowed from the hub. The presence of CSF was considered proof that the needle had entered the subarachnoid space. Paresthesias occurred in 14/103 (13.6%) of patients; 1 patient experienced a paresthesia twice. All paresthesias were transient. Following a paresthesia, CSF was observed in the needle hub 86.7% (13/15) of the time. Our data suggest that the majority of transient paresthesias occur when the spinal needle enters the subarachnoid space and contacts a spinal nerve root. Therefore, when transient paresthesias occur during spinal needle placement it is appropriate to stop and assess for the presence of CSF in the needle hub, rather than withdraw and redirect the spinal needle away from the side of the paresthesia as some authors have suggested.
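    The headline figure above is a small-sample proportion (CSF at the hub in 13 of 15 paresthesia events). As an illustration of how much uncertainty such a sample carries, the sketch below recomputes the proportion and adds a 95% Wilson score interval; the interval is our addition for context, not a result reported by the study.

    ```python
    # Proportion of paresthesias followed by CSF at the needle hub, with a
    # Wilson score interval to show small-sample uncertainty (our addition).
    import math

    def wilson_interval(successes: int, n: int, z: float = 1.96):
        """95% Wilson score confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    p = 13 / 15
    print(f"observed proportion: {p:.1%}")  # 86.7%
    lo, hi = wilson_interval(13, 15)
    print(f"95% Wilson CI: {lo:.1%} to {hi:.1%}")
    ```

    With only 15 events the interval is wide, which is worth keeping in mind when reading the authors' conclusion that "the majority" of transient paresthesias are intrathecal.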

  16. Temporary Mental Nerve Paresthesia Originating from Periapical Infection

    OpenAIRE

    Genc Sen, Ozgur; Kaplan, Volkan

    2015-01-01

    Many systemic and local factors can cause paresthesia, and it is rarely caused by infections of dental origin. This report presents a case of mental nerve paresthesia caused by endodontic infection of a mandibular left second premolar. Resolution of the paresthesia began two weeks after conventional root canal treatment associated with antibiotic therapy and was completed in eight weeks. One year follow-up radiograph indicated complete healing of the radiolucent periapical lesion. The too...

  17. Temporary Mental Nerve Paresthesia Originating from Periapical Infection

    Science.gov (United States)

    Genc Sen, Ozgur; Kaplan, Volkan

    2015-01-01

    Many systemic and local factors can cause paresthesia, and it is rarely caused by infections of dental origin. This report presents a case of mental nerve paresthesia caused by endodontic infection of a mandibular left second premolar. Resolution of the paresthesia began two weeks after conventional root canal treatment associated with antibiotic therapy and was completed in eight weeks. One year follow-up radiograph indicated complete healing of the radiolucent periapical lesion. The tooth was asymptomatic and functional. PMID:26345692

  18. Paresthesias Among Community Members Exposed to the World Trade Center Disaster

    Science.gov (United States)

    Marmor, Michael; Shao, Yongzhao; Bhatt, D. Harshad; Stecker, Mark M.; Berger, Kenneth I.; Goldring, Roberta M.; Rosen, Rebecca L.; Caplan-Shaw, Caralee; Kazeros, Angeliki; Pradhan, Deepak; Wilkenfeld, Marc; Reibman, Joan

    2017-01-01

    Objective: Paresthesias can result from metabolic disorders, nerve entrapment following repetitive motions, hyperventilation pursuant to anxiety, or exposure to neurotoxins. We analyzed data from community members exposed to the World Trade Center (WTC) disaster of September 11, 2001, to evaluate whether exposure to the disaster was associated with paresthesias. Methods: Analysis of data from 3141 patients of the WTC Environmental Health Center. Results: Fifty-six percent of patients reported paresthesias at enrollment 7 to 15 years following the WTC disaster. After controlling for potential confounders, paresthesias were associated with severity of exposure to the WTC dust cloud and working in a job requiring cleaning of WTC dust. Conclusions: This study suggests that paresthesias were commonly associated with WTC-related exposures or post-WTC cleaning work. Further studies should objectively characterize these paresthesias and seek to identify relevant neurotoxins or paresthesia-inducing activities. PMID:28157767

  19. Topiramate-induced paresthesia is more frequently reported by migraine than epileptic patients.

    Science.gov (United States)

    Sedighi, Behnaz; Shafiei, Kaveh; Azizpour, Iman

    2016-04-01

    Topiramate is an approved and effective drug in migraine prophylaxis. Paresthesia is its most commonly reported side effect. The primary objective of this study was to compare the frequency of topiramate-induced paresthesia in migraine versus epilepsy patients. Patients with migraine without aura and patients with epilepsy were enrolled in this observational study. All cases were interviewed by telephone about their history of paresthesia. Confounding factors were controlled through logistic regression. The odds ratio of developing topiramate-induced paresthesia in migraine compared to epilepsy patients was 3.4. Three factors were independent contributors to developing topiramate-induced paresthesia: female sex (odds ratio 2.1), topiramate dosage (odds ratio 0.3) and duration of therapy. Our findings indicate an independent association between migraine and development of paresthesia. Migraineurs were more likely than epileptic patients to report paresthesia as a topiramate adverse effect. Female sex, treatment duration and topiramate dosage contribute significantly to subsequent development of paresthesia.
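    An odds ratio like the 3.4 reported above compares the odds of paresthesia between the two patient groups. The study derived its estimate via logistic regression with confounder adjustment; the sketch below shows only the simpler unadjusted cross-product version on a 2x2 table, with counts that are hypothetical placeholders chosen to land near 3.4.

    ```python
    # Unadjusted cross-product odds ratio for a 2x2 table. The counts are
    # hypothetical; the study reports only the adjusted OR (3.4).

    def odds_ratio(exposed_cases: int, exposed_noncases: int,
                   unexposed_cases: int, unexposed_noncases: int) -> float:
        """(a*d)/(b*c) for the 2x2 table [[a, b], [c, d]]."""
        return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

    # Hypothetical: paresthesia among migraine (exposed) vs epilepsy (unexposed)
    or_estimate = odds_ratio(34, 66, 13, 87)
    print(f"odds ratio: {or_estimate:.1f}")  # 3.4
    ```

    In the actual analysis, logistic regression plays the same role while simultaneously adjusting for sex, dosage, and treatment duration, which a raw 2x2 table cannot do.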

  20. Retrospective review of voluntary reports of nonsurgical paresthesia in dentistry.

    Science.gov (United States)

    Gaffen, Andrew S; Haas, Daniel A

    2009-10-01

    Paresthesia is an adverse event that may be associated with the administration of local anesthetics in dentistry. The purpose of this retrospective study was to analyze cases of paresthesia associated with local anesthetic injection that were voluntarily reported to Ontario's Professional Liability Program (PLP) from 1999 to 2008 inclusive, to see if the findings were consistent with those from 1973 to 1998 from this same source. All cases of nonsurgical paresthesia reported from 1999 to 2008 were reviewed; cases involving surgical procedures were excluded. Variables examined included patient age and gender, type and volume of local anesthetic, anatomic site of nerve injury, affected side and pain on injection or any other symptoms. During the study period, 182 PLP reports of paresthesia following nonsurgical procedures were made; all but 2 were associated with mandibular block injection. There was no significant gender predilection, but the lingual nerve was affected more than twice as frequently as the inferior alveolar nerve. During 2006-2008 alone, 64 cases of nonsurgical paresthesia were reported to PLP, a reported incidence of 1 in 609,000 injections. For the 2 local anesthetic drugs available in dental cartridges as 4% solutions, i.e., articaine and prilocaine, the frequencies of reporting of paresthesia were significantly greater than expected (χ², exact binomial distribution; p …).
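    The "exact binomial" comparison named in the abstract asks whether the 4% solutions appear in more paresthesia reports than their share of all injections would predict. A minimal sketch of that test using the exact binomial tail; every number below (usage share, counts) is a hypothetical placeholder, since the study's actual usage data are not reproduced in this summary.

    ```python
    # One-sided exact binomial test: is the observed count of reports
    # involving 4% solutions higher than expected from their usage share?
    # All numbers are hypothetical placeholders for illustration.
    import math

    def binom_sf(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p), summed from the exact pmf."""
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # Hypothetical: 4% solutions account for 30% of injections but appear
    # in 120 of 182 reported nonsurgical paresthesia cases.
    p_value = binom_sf(120, 182, 0.30)
    print(f"one-sided exact binomial p = {p_value:.3g}")
    ```

    A small p-value here would indicate over-representation of the 4% formulations among reports, which is the direction of the association the authors describe.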

  21. The Anatomical Nature of Dental Paresthesia: A Quick Review

    Science.gov (United States)

    Ahmad, Maha

    2018-01-01

    Dental paresthesia is loss of sensation caused by maxillary or mandibular anesthetic administration before dental treatment. This review examines inferior alveolar block paresthesia symptoms, side effects, and complications. Understanding the anatomy of the pterygomandibular fossa will help in understanding the nature and causes of dental paresthesia. In this review, we cover the anatomy of the region surrounding inferior alveolar injections and the anesthetic agents used, and we also examine the histology and injury process of the inferior alveolar nerve. PMID:29541262

  22. Infection Related Inferior Alveolar Nerve Paresthesia in the Lower Premolar Teeth

    Directory of Open Access Journals (Sweden)

    Rachele Censi

    2016-01-01

    Introduction. The aim of this paper was to describe two cases of IAN infection-induced paresthesia and to discuss the most appropriate treatment solutions. Methods. For two patients, periapical lesions that induced IAN paresthesia were revealed. In the first case, the tooth was previously endodontically treated, whereas in the second case the lesion was due to pulp necrosis. Results. For the first patient, a progressive healing was observed only after the tooth extraction. In the second patient, the paresthesia had resolved after endodontic treatment. Conclusions. The endodontic-related paresthesia is a rare complication that can be the result of a combination of etiopathogenic mechanisms such as mechanical pressure on the nerve fibers due to the expanding infectious process and the production of microbial toxins. Paresthesia resulting from periapical lesions usually subsides through elimination of infection by root canal treatment. However, if there are no signs of enhancement, the immediate extraction of the tooth is the treatment of choice in order to prevent irreversible paresthesia because it was demonstrated that there is a correlation between the duration of mechanical or chemical irritation and the risk of permanent paresthesia.

  3. Infection Related Inferior Alveolar Nerve Paresthesia in the Lower Premolar Teeth

    Science.gov (United States)

    2016-01-01

    Introduction. The aim of this paper was to describe two cases of IAN infection-induced paresthesia and to discuss the most appropriate treatment solutions. Methods. In two patients, periapical lesions that induced IAN paresthesia were identified. In the first case, the tooth was previously endodontically treated, whereas in the second case the lesion was due to pulp necrosis. Results. For the first patient, progressive healing was observed only after the tooth extraction. In the second patient, the paresthesia resolved after endodontic treatment. Conclusions. Endodontic-related paresthesia is a rare complication that can result from a combination of etiopathogenic mechanisms, such as mechanical pressure on the nerve fibers from the expanding infectious process and the production of microbial toxins. Paresthesia resulting from periapical lesions usually subsides once the infection is eliminated by root canal treatment. However, if there are no signs of improvement, immediate extraction of the tooth is the treatment of choice to prevent irreversible paresthesia, because a correlation has been demonstrated between the duration of mechanical or chemical irritation and the risk of permanent paresthesia. PMID:27597904

  4. Paresthesia during orthodontic treatment: case report and review.

    Science.gov (United States)

    Monini, André da Costa; Martins, Renato Parsekian; Martins, Isabela Parsekian; Martins, Lídia Parsekian

    2011-10-01

    Paresthesia of the lower lip is uncommon during orthodontic treatment. In the present case, paresthesia occurred during orthodontic leveling of an extruded mandibular left second molar. It was decided to remove this tooth from the appliance and allow it to relapse. A reanatomization was then performed by grinding. The causes and treatment options of this rare disorder are reviewed and discussed. The main cause of paresthesia during orthodontic treatment may be associated with contact between the dental roots and inferior alveolar nerve, which may be well observed on tomography scans. Treatment usually involves tooth movement in the opposite direction of the cause of the disorder.

  5. The prognosis of self-reported paresthesia and weakness in disc-related sciatica.

    Science.gov (United States)

    Grøvle, L; Haugen, A J; Natvig, B; Brox, J I; Grotle, M

    2013-11-01

    To explore how patients with sciatica rate the 'bothersomeness' of paresthesia (tingling and numbness) and weakness as compared with leg pain during 2 years of follow-up. Observational cohort study including 380 patients with sciatica and lumbar disc herniation referred to secondary care. Using the Sciatica Bothersomeness Index, paresthesia, weakness and leg pain were rated on a scale from 0 to 6. A symptom score of 4-6 was defined as bothersome. Along with leg pain, the bothersomeness of paresthesia and weakness both improved during follow-up. Those who received surgery (n = 121) reported larger improvements in both symptoms than did those who were treated without surgery. At 2 years, 18.2% of the patients reported bothersome paresthesia, 16.6% reported bothersome leg pain, and 11.5% reported bothersome weakness. Among patients with no or little leg pain, 6.7% reported bothersome paresthesia and 5.1% bothersome weakness. During 2 years of follow-up, patients considered paresthesia more bothersome than weakness. At 2 years, the percentage of patients who reported bothersome paresthesia was similar to the percentage who reported bothersome leg pain. Based on patients' self-report, paresthesia and weakness are relevant aspects of disc-related sciatica.

  6. Treatment of traumatic infra orbital nerve paresthesia

    OpenAIRE

    Lone, Parveen Akhter; Singh, R. K.; Pal, U. S.

    2012-01-01

    This study was done to find out the role of topiramate therapy in infraorbital nerve paresthesia after miniplate fixation of zygomatic complex fractures. A total of 2 cases of unilateral zygomatic complex fracture, 2-3 weeks old, with infraorbital nerve paresthesia were selected. Open reduction and plating was done in the frontozygomatic region. The antiepileptic drug topiramate was given in therapeutic doses, and the dose was increased slowly until functional recovery was noticed.

  7. Treatment of traumatic infra orbital nerve paresthesia

    Science.gov (United States)

    Lone, Parveen Akhter; Singh, R. K.; Pal, U. S.

    2012-01-01

    This study was done to find out the role of topiramate therapy in infraorbital nerve paresthesia after miniplate fixation of zygomatic complex fractures. A total of 2 cases of unilateral zygomatic complex fracture, 2-3 weeks old, with infraorbital nerve paresthesia were selected. Open reduction and plating was done in the frontozygomatic region. The antiepileptic drug topiramate was given in therapeutic doses, and the dose was increased slowly until functional recovery was noticed. PMID:23833503

  8. Effect of Acupuncture on Post-implant Paresthesia

    OpenAIRE

    Sant’Anna, Crischina Branco Marques; Zuim, Paulo Renato Junqueira; Brandini, Daniela Atili; Guiotti, Aimée Maria; Vieira, Joao Batista; Turcio, Karina Helga Leal

    2017-01-01

    Paresthesia is defined as an alteration in local sensibility, associated with numbness, tingling, or unpleasant sensations caused by nerve lesions or irritation. It can be temporary or permanent. The treatment protocol for facial paresthesia is primarily based on the use of drugs and implant removal, which may not be completely effective or may require other risk exposure when there is no spontaneous regression. However, other therapeutic modalities such as acupuncture can be used. The aim of...

  9. Clinical Paresthesia Atlas Illustrates Likelihood of Coverage Based on Spinal Cord Stimulator Electrode Location.

    Science.gov (United States)

    Taghva, Alexander; Karst, Edward; Underwood, Paul

    2017-08-01

    Concordant paresthesia coverage is an independent predictor of pain relief following spinal cord stimulation (SCS). Using aggregate data, our objective is to produce a map of paresthesia coverage as a function of electrode location in SCS. This retrospective analysis used x-rays, SCS programming data, and paresthesia coverage maps from the EMPOWER registry of SCS implants for chronic neuropathic pain. Spinal level of dorsal column stimulation was determined by x-ray adjudication and active cathodes in patient programs. Likelihood of paresthesia coverage was determined as a function of stimulating electrode location. Segments of paresthesia coverage were grouped anatomically. Fisher's exact test was used to identify significant differences in likelihood of paresthesia coverage as a function of spinal stimulation level. In the 178 patients analyzed, the most prevalent areas of paresthesia coverage were buttocks, anterior and posterior thigh (each 98%), and low back (94%). Unwanted paresthesia at the ribs occurred in 8% of patients. There were significant differences in the likelihood of achieving paresthesia, with higher thoracic levels (T5, T6, and T7) more likely to achieve low back coverage but also more likely to introduce paresthesia felt at the ribs. Higher levels in the thoracic spine were associated with greater coverage of the buttocks, back, and thigh, and with lesser coverage of the leg and foot. This paresthesia atlas uses real-world, aggregate data to determine likelihood of paresthesia coverage as a function of stimulating electrode location. It represents an application of "big data" techniques, and a step toward achieving personalized SCS therapy tailored to the individual's chronic pain. © 2017 International Neuromodulation Society.
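
    The Fisher's exact test mentioned above compares the rate of achieving coverage between spinal stimulation levels using a 2 × 2 count table. A minimal pure-Python sketch, using hypothetical counts (the registry's per-level tallies are not given in the abstract):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)  # ways to fill the first column

    def p_table(x):
        # probability of the table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # the tiny factor guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: low-back coverage achieved in 40/50 patients
# stimulated at T5-T7 vs. 25/50 stimulated at other levels.
p = fisher_exact_two_sided(40, 10, 25, 25)
```

    With these invented counts the two-sided p-value comes out well below 0.05; `scipy.stats.fisher_exact` uses the same two-sided definition and can serve as a cross-check.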

  10. Spinal cord stimulation paresthesia and activity of primary afferents.

    Science.gov (United States)

    North, Richard B; Streelman, Karen; Rowland, Lance; Foreman, P Jay

    2012-10-01

    A patient with failed back surgery syndrome reported paresthesia in his hands and arms during a spinal cord stimulation (SCS) screening trial with a low thoracic electrode. The patient's severe thoracic stenosis necessitated general anesthesia for simultaneous decompressive laminectomy and SCS implantation for chronic use. Use of general anesthesia gave the authors the opportunity to characterize the patient's unusual distribution of paresthesia. During SCS implantation, they recorded SCS-evoked antidromic potentials at physiologically relevant amplitudes in the legs to guide electrode placement and in the arms as controls. Stimulation of the dorsal columns at T-8 evoked potentials in the legs (common peroneal nerves) and at similar thresholds, consistent with the sensation of paresthesia in the arms, in the right ulnar nerve. The authors' electrophysiological observations support observations by neuroanatomical specialists that primary afferents can descend several (in this case, at least 8) vertebral segments in the spinal cord before synapsing or ascending. This report thus confirms a physiological basis for unusual paresthesia distribution associated with thoracic SCS.

  11. Endodontic periapical lesion-induced mental nerve paresthesia

    Science.gov (United States)

    Shadmehr, Elham; Shekarchizade, Neda

    2015-01-01

    Paresthesia is a burning or prickling sensation or partial numbness, resulting from neural injury. The symptoms can vary from mild neurosensory dysfunction to total loss of sensation in the innervated area. Only a few cases have described apical periodontitis to be the etiological factor of impaired sensation in the area innervated by the inferior alveolar and mental nerves. The aim of the present paper is to report a case of periapical lesion-induced paresthesia in the innervation area of the mental nerve, which was successfully treated with endodontic retreatment. PMID:25878687

  12. Reducing Adverse Effects During Drug Development: The Example of Lesogaberan and Paresthesia.

    Science.gov (United States)

    Rydholm, Hans; von Corswant, Christian; Denison, Hans; Jensen, Jörgen M; Lehmann, Anders; Ruth, Magnus; Söderlind, Erik; Aurell-Holmberg, Ann

    2016-04-01

    Lesogaberan, a γ-aminobutyric acid (GABA)B receptor agonist, was developed for the treatment of gastroesophageal reflux disease in patients with a partial response to proton pump inhibitor therapy. A high prevalence of paresthesia was observed in healthy individuals after dosing with lesogaberan in early-phase clinical trials. The aim of this review was to gain further insight into paresthesia caused by lesogaberan by summarizing the relevant preclinical and clinical data. This study was a narrative review of the literature and unpublished data. The occurrence of paresthesia may depend on the route or rate of drug administration; several studies were conducted to test this hypothesis, and formulations were developed to minimize the occurrence of paresthesia. Phase I clinical studies showed that, in healthy individuals, paresthesia occurred soon after administration of lesogaberan in a dose-dependent manner regardless of the route of administration. The occurrence of paresthesia could be decreased by fractionating the dose or reducing the rate of administration. These findings suggest that the initial rate of absorption plays an important part in the development of paresthesia. Modified-release formulations minimize the occurrence of paresthesia while retaining the anti-reflux activity of the drug, as measured by esophageal pH and the number of transient lower esophageal sphincter relaxations. The development of lesogaberan was halted because the effect on gastroesophageal reflux disease symptoms observed in Phase II studies was not considered clinically meaningful in the target patient population. Nevertheless, it is an example of successful formulation development designed to minimize the occurrence of a compound's adverse effect while retaining its pharmacodynamic action. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  13. Occurrence of paresthesia after dental local anesthetic administration in the United States.

    Science.gov (United States)

    Garisto, Gabriella A; Gaffen, Andrew S; Lawrence, Herenia P; Tenenbaum, Howard C; Haas, Daniel A

    2010-07-01

    Several studies have suggested that the likelihood of paresthesia may depend on the local anesthetic used. The purpose of this study was to determine if the type of local anesthetic administered had any effect on reports of paresthesia in dentistry in the United States. The authors obtained reports of paresthesia involving dental local anesthetics during the period from November 1997 through August 2008 from the U.S. Food and Drug Administration Adverse Event Reporting System. They used χ2 analysis to compare expected frequencies, on the basis of U.S. local anesthetic sales data, with observed reports of oral paresthesia. During the study period, 248 cases of paresthesia occurring after dental procedures were reported. Most cases (94.5 percent) involved mandibular nerve block. The lingual nerve was affected in 89.0 percent of cases. Reports involving 4 percent prilocaine and 4 percent articaine were 7.3 and 3.6 times, respectively, greater than expected (χ2, P < .0001). These data suggest that paresthesia occurs more commonly after use of 4 percent local anesthetic formulations. These findings are consistent with those reported in a number of studies from other countries. Until further research indicates otherwise, dentists should consider these results when assessing the risks and benefits of using 4 percent local anesthetics for mandibular block anesthesia.
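
    The comparison of observed reports against sales-based expected frequencies reduces to a goodness-of-fit statistic. A sketch with invented counts (the abstract reports only the 7.3x and 3.6x ratios, not raw numbers); for the two-category case the test has one degree of freedom, so the p-value can be computed with the complementary error function:

```python
from math import erfc, sqrt

def chi_square_1df(observed, expected):
    """Goodness-of-fit chi-square for two categories (1 degree of freedom).

    observed/expected are (drug_count, other_count) pairs; for 1 df the
    p-value is the chi-square survival function, erfc(sqrt(stat / 2)).
    """
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, erfc(sqrt(stat / 2))

# Hypothetical: a formulation with a 10% sales share appears in 90 of
# 248 reports; expected counts are allocated by the sales share.
observed = (90, 158)
expected = (0.10 * 248, 0.90 * 248)  # (24.8, 223.2)
stat, p = chi_square_1df(observed, expected)
# 90 / 24.8 is roughly a 3.6-fold excess, mirroring the articaine ratio
```

    A very small p-value here would indicate that reports are not distributed in proportion to sales, which is the logic of the study's comparison.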

  14. Disturbances of sensation occasioned by experimental arrest of blood flow

    Directory of Open Access Journals (Sweden)

    Alfred Auersperg

    1949-12-01

    Full Text Available Disturbances of sensation in the hand were studied during and after experimental arrest of circulation to the arm. Blockage of circulation was performed as outlined by Lewis and Pochin, by putting the cuff of a sphygmomanometer on the upper arm and bringing the pressure rapidly up to 200 mm/Hg. The experiments listed below were intended to demonstrate the variability of a central reaction brought about by fairly definite disturbances of the ischaemic periphery. All experiments were made on the present writers and repeated on nine other subjects, none of whom had systolic pressure reaching 150 mm/Hg. I - Blockage of circulation in both arms led to symmetrical phenomena in both hands (thermal paresthesias, tingling and hyposthesia), both under symmetrical experimental circumstances, and under the following variations: So long as the cuff pressure on both arms was above the systolic blood pressure, differences as great as 300 mm/Hg in one cuff and 150 mm in the other did not alter the symmetry of the effects. Neither was symmetry and synchronism of paresthesias affected when compression on one side preceded equal compression on the other up to 20 seconds. II - When a punctate pressure is applied to the paresthetic field the paresthesias disappear around that point and the latter is clearly brought out from the indifferent background produced in the area of depressed skin. On the basis of Kugelberg's findings, it seems that this occurs because the impulses caused by pressure have a higher frequency and substitute the spontaneous abnormal discharges of the ischaemic nerve fibers. III - Repeated mechanical stimulation of a fingertip during the experiment failed to show any influence on sensory (touch) thresholds, in contrast, therefore, to what would be expected on the basis of the physiologic experiments which show rapid fatigue of ischaemic structures. IV - In contrast to what might be expected from the intense changes undergone by receptors in the

  15. Primary somatosensory/motor cortical thickness distinguishes paresthesia-dominant from pain-dominant carpal tunnel syndrome.

    Science.gov (United States)

    Maeda, Yumi; Kettner, Norman; Kim, Jieun; Kim, Hyungjun; Cina, Stephen; Malatesta, Cristina; Gerber, Jessica; McManus, Claire; Libby, Alexandra; Mezzacappa, Pia; Mawla, Ishtiaq; Morse, Leslie R; Audette, Joseph; Napadow, Vitaly

    2016-05-01

    Paresthesia-dominant and pain-dominant subgroups have been noted in carpal tunnel syndrome (CTS), a peripheral neuropathic disorder characterized by altered primary somatosensory/motor (S1/M1) physiology. We aimed to investigate whether brain morphometry dissociates these subgroups. Subjects with CTS were evaluated with nerve conduction studies, whereas symptom severity ratings were used to allocate subjects into paresthesia-dominant (CTS-paresthesia), pain-dominant (CTS-pain), and pain/paresthesia nondominant (not included in further analysis) subgroups. Structural brain magnetic resonance imaging data were acquired at 3T using a multiecho MPRAGE T1-weighted pulse sequence, and gray matter cortical thickness was calculated across the entire brain using validated, automated methods. CTS-paresthesia subjects demonstrated reduced median sensory nerve conduction velocity (P = 0.05) compared with CTS-pain subjects. In addition, cortical thickness in precentral and postcentral gyri (S1/M1 hand area) contralateral to the more affected hand was significantly reduced in CTS-paresthesia subgroup compared with CTS-pain subgroup. Moreover, in CTS-paresthesia subjects, precentral cortical thickness was negatively correlated with paresthesia severity (r(34) = -0.40, P = 0.016) and positively correlated with median nerve sensory velocity (r(36) = 0.51, P = 0.001), but not with pain severity. Conversely, in CTS-pain subjects, contralesional S1 (r(9) = 0.62, P = 0.042) and M1 (r(9) = 0.61, P = 0.046) cortical thickness were correlated with pain severity, but not median nerve velocity or paresthesia severity. This double dissociation in somatotopically specific S1/M1 areas suggests a neuroanatomical substrate for symptom-based CTS subgroups. Such fine-grained subgrouping of CTS may lead to improved personalized therapeutic approaches, based on superior characterization of the linkage between peripheral and central neuroplasticity.
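
    The correlations reported above (e.g., r(34) = -0.40 between precentral cortical thickness and paresthesia severity) are Pearson coefficients over paired per-subject values. A small sketch with made-up data, just to show the computation:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical paired measurements: cortical thickness (mm) vs.
# paresthesia severity rating; a negative r means thinner cortex
# accompanies worse paresthesia, as in the abstract.
thickness = [2.6, 2.5, 2.4, 2.3, 2.2]
severity = [1.0, 2.0, 2.5, 3.5, 4.0]
r = pearson_r(thickness, severity)
```

    The p-values quoted alongside each r come from testing the coefficient against zero given the sample size (the parenthesized number, e.g. 34, is the degrees of freedom).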

  16. Diagnostic role of magnetic resonance imaging in assessing orofacial pain and paresthesia.

    Science.gov (United States)

    Ohba, Seigo; Yoshimura, Hitoshi; Matsuda, Shinpei; Kobayashi, Junichi; Kimura, Takashi; Aiki, Minako; Sano, Kazuo

    2014-09-01

    The aim of this study was to compare the efficacy of CT and MRI in evaluating orofacial pain and paresthesia. A total of 96 patients with orofacial pain and/or paresthesia were included in this study. The patients who underwent CT and/or MRI examinations were assessed, and the efficacy of CT and/or MRI examinations in detecting the causative disease of the orofacial pain and paresthesia was evaluated. Seventy (72.9%) of 96 patients underwent CT and/or MRI examinations. Whereas CT examinations detected 2 diseases (4.5%) in 44 tests, 13 diseases (37.1%) were detected in 35 MRI examinations. Seven (53.8%) of the 13 diseases detected by MRI were found in elderly patients. A high percentage of patients who complained of orofacial pain and paresthesia had other diseases in the brain, especially among elderly patients, and MRI is more useful than CT for evaluating these patients.

  17. Arthroscopic treatment of femoral nerve paresthesia caused by an acetabular paralabral cyst.

    Science.gov (United States)

    Kanauchi, Taira; Suganuma, Jun; Mochizuki, Ryuta; Uchikawa, Shinichi

    2014-05-01

    This report describes a rare case of femoral nerve paresthesia caused by an acetabular paralabral cyst of the hip joint. A 68-year-old woman presented with a 6-month history of right hip pain and paresthesia along the anterior thigh and radiating down to the anterior aspect of the knee. Radiography showed osteoarthritis with a narrowed joint space in the right hip joint. Magnetic resonance imaging showed a cyst with low T1- and high T2-weighted signal intensity arising from a labral tear at the anterior aspect of the acetabulum. The cyst was connected to the joint space and displaced the femoral nerve to the anteromedial side. The lesion was diagnosed as an acetabular paralabral cyst causing femoral neuropathy. Because the main symptom was femoral nerve paresthesia and the patient desired a less invasive procedure, arthroscopic labral repair was performed to stop synovial fluid flow to the paralabral cyst that was causing the femoral nerve paresthesia. After surgery, the cyst and femoral nerve paresthesia disappeared. At the 18-month follow-up, the patient had no recurrence. There have been several reports of neurovascular compression caused by the cyst around the hip joint. To the authors' knowledge, only 3 cases of acetabular paralabral cysts causing sciatica have been reported. The current patient appears to represent a rare case of an acetabular paralabral cyst causing femoral nerve paresthesia. The authors suggest that arthroscopic labral repair for an acetabular paralabral cyst causing neuropathy can be an option for patients who desire a less invasive procedure. Copyright 2014, SLACK Incorporated.

  18. Preferences in Sleep Position Correlate With Nighttime Paresthesias in Healthy People Without Carpal Tunnel Syndrome.

    Science.gov (United States)

    Roth Bettlach, Carrie L; Hasak, Jessica M; Krauss, Emily M; Yu, Jenny L; Skolnick, Gary B; Bodway, Greta N; Kahn, Lorna C; Mackinnon, Susan E

    2017-10-01

    Carpal tunnel syndrome has been associated with sleep position preferences. The aim of this study is to assess self-reported nocturnal paresthesias and sleeping position in participants with and without a carpal tunnel syndrome diagnosis to further clinical knowledge for preventive and therapeutic interventions. A cross-sectional survey study of 396 participants was performed in young adults, healthy volunteers, and a patient population. Participants were surveyed on risk factors for carpal tunnel syndrome, nocturnal paresthesias, and sleep preferences. Binary logistic regression analysis was performed comparing participants with rare and frequent nocturnal paresthesias. Subanalyses for participants without carpal tunnel syndrome under and over 21 years of age were performed on all factors significantly associated with subclinical compression neuropathy in the overall population. Thirty-three percent of the study population experienced nocturnal paresthesias at least weekly. Increased body mass index (P < .001) and sleeping with the wrist flexed (P = .030) were associated with a higher frequency of nocturnal paresthesias. Side sleeping was associated with less frequent nocturnal symptoms (P = .003). In participants without carpal tunnel syndrome, subgroup analysis illustrated a relationship between nocturnal paresthesias and wrist position. In participants with carpal tunnel syndrome, sleeping on the side was associated with a significantly reduced frequency of nocturnal paresthesias. This study illustrates nocturnal paresthesias in people without a history of carpal tunnel syndrome, including people younger than previously reported. In healthy patients with upper extremity subclinical compression neuropathy, sleep position modification may be a useful intervention to reduce the frequency of nocturnal symptoms prior to developing carpal tunnel syndrome.

  19. The bothersomeness of sciatica: patients' self-report of paresthesia, weakness and leg pain.

    Science.gov (United States)

    Grøvle, Lars; Haugen, Anne Julsrud; Keller, Anne; Natvig, Bård; Brox, Jens Ivar; Grotle, Margreth

    2010-02-01

    The objective of the study was to investigate how patients with sciatica due to disc herniation rate the bothersomeness of paresthesia and weakness as compared to leg pain, and how these symptoms are associated with socio-demographic and clinical characteristics. A cross-sectional study was conducted on 411 patients with clinical signs of radiculopathy. Items from the Sciatica Bothersomeness Index (0 = none to 6 = extremely) were used to establish values for paresthesia, weakness and leg pain. Associations with socio-demographic and clinical variables were analyzed by multiple linear regression. Mean scores (SD) were 4.5 (1.5) for leg pain, 3.4 (1.8) for paresthesia and 2.6 (2.0) for weakness. Women reported higher levels of bothersomeness for all three symptoms with mean scores approximately 10% higher than men. In the multivariate models, more severe symptoms were associated with lower physical function and higher emotional distress. Muscular paresis explained 19% of the variability in self-reported weakness, sensory findings explained 10% of the variability in paresthesia, and straight leg raising test explained 9% of the variability in leg pain. In addition to leg pain, paresthesia and weakness should be assessed when measuring symptom severity in sciatica.
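
    The "explained variability" figures above (19%, 10%, 9%) are R-squared values from regressing each symptom score on a clinical finding. A minimal single-predictor sketch (the study used multiple linear regression; the data here are invented):

```python
def r_squared(x, y):
    """Share of variance in y explained by a simple least-squares fit on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Hypothetical pairs: muscular paresis grade vs. self-reported weakness
# score (0-6); r_squared returns the explained share of variance.
paresis = [0, 1, 1, 2, 3, 0, 2, 4]
weakness = [1, 2, 1, 4, 3, 2, 5, 4]
share = r_squared(paresis, weakness)
```

    A value of 0.19 would correspond to the "explained 19% of the variability" phrasing in the abstract; with several predictors, the multivariate model's R-squared plays the same role.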

  20. Inferior alveolar nerve paresthesia after overfilling of endodontic sealer into the mandibular canal.

    Science.gov (United States)

    González-Martín, Maribel; Torres-Lagares, Daniel; Gutiérrez-Pérez, José Luis; Segura-Egea, Juan José

    2010-08-01

    The present study describes a case of endodontic sealer (AH Plus) penetration within and along the mandibular canal from the periapical zone of a lower second molar after endodontic treatment. The clinical manifestations comprised anesthesia of the left side of the lower lip, paresthesia and anesthesia of the gums in the third quadrant, and paresthesia and anesthesia of the left mental nerve, appearing immediately after endodontic treatment. The paresthesia and anesthesia of the lip and gums were seen to decrease, but the mental nerve paresthesia and anesthesia persisted after 3.5 years. This case illustrates the need to expend great care with all endodontic techniques when performing nonsurgical root canal therapy, especially when the root apices are in close proximity to vital anatomic structures such as the inferior alveolar canal. Copyright 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  1. [Paresthesia and spinal anesthesia for cesarean section: comparison of patient positioning].

    Science.gov (United States)

    Palacio Abizanda, F J; Reina, M A; Fornet, I; López, A; López López, M A; Morillas Sendín, P

    2009-01-01

    To determine the incidence of paresthesia during lumbar puncture performed with the patient in different positions. A single-blind prospective study of patients scheduled for elective cesarean section, randomized to 3 groups. In group 1 patients were seated in the direction of the long axis of the table, with heels resting on the table. In group 2 they were seated perpendicular to the long axis of the table, with legs hanging from the table. In group 3 they were in left lateral decubitus position. Lumbar punctures were performed with a 27-gauge Whitacre needle. One hundred sixty-eight patients (56 per group) were enrolled. Paresthesia occurred most often in group 3 (P = .009). We observed no differences in blood pressure after patients moved from decubitus position to the assigned position. Nor did we observe between-group differences in blood pressure according to position taken during puncture. Puncture undertaken with the patient seated, heels on the table and knees slightly bent, is associated with a lower incidence of paresthesia than puncture performed with the patient seated, legs hanging from the table. Placing the patient's heels on the table requires hip flexion and leads to anterior displacement of nerve roots in the dural sac. Such displacement would increase the nerve-free zone on the posterior side of the sac, thereby decreasing the likelihood of paresthesia during lumbar puncture. A left lateral decubitus position would increase the likelihood of paresthesia, possibly because the anesthetist may inadvertently not follow the medial line when inserting the needle.

  2. Risk factors for hypertrophic burn scar pain, pruritus, and paresthesia development.

    Science.gov (United States)

    Xiao, Yongqiang; Sun, Yu; Zhu, Banghui; Wang, Kangan; Liang, Pengfei; Liu, Wenjun; Fu, Jinfeng; Zheng, Shiqing; Xiao, Shichu; Xia, Zhaofan

    2018-05-02

    Hypertrophic scar pain, pruritus, and paresthesia symptoms are major and particular concerns for burn patients. However, because no effective and satisfactory methods exist for their alleviation, the clinical treatment for these symptoms is generally considered unsatisfactory. Therefore, their risk factors should be identified and prevented during management. We reviewed the medical records of 129 post-burn hypertrophic scar patients and divided them into two groups for each of three different symptoms based on the University of North Carolina "4P" Scar Scale: patients with scar pain requiring occasional or continuous pharmacological intervention (HSc pain, n = 75) vs. patients without such scar pain (No HSc pain, n = 54); patients with scar pruritus requiring occasional or continuous pharmacological intervention (HSc pruritus, n = 63) vs. patients without such scar pruritus (No HSc pruritus, n = 66); patients with scar paresthesia that influenced the patients' daily activities (HSc paresthesia, n = 31) vs. patients without such scar paresthesia (No HSc paresthesia, n = 98). Three multivariable logistic regression models were built, respectively, to identify the risk factors for hypertrophic burn scar pain, pruritus, and paresthesia development. Multivariable analysis showed that hypertrophic burn scar pain development requiring pharmacological intervention was associated with old age (odds ratio [OR] = 1.046; 95% confidence interval [CI], 1.011-1.082; p = 0.009), high body mass index (OR = 1.242; 95% CI, 1.068-1.445; p = 0.005), 2-5-mm-thick post-burn hypertrophic scars (OR = 3.997; 95% CI, 1.523-10.487; p = 0.005), and 6-12-month post-burn hypertrophic scars (OR = 4.686; 95% CI, 1.318-16.653; p = 0.017). Hypertrophic burn scar pruritus development requiring pharmacological intervention was associated with smoking (OR = 3.239; 95% CI, 1.380-7.603; p = 0.007), having undergone surgical operation (OR = 2.236; 95% CI, 1.001-4.998; p = 0.049), and firm scars (OR = 3.317; 95% CI, 1.237-8.894; p = 0.017). Finally
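
    The odds ratios and confidence intervals above come from exponentiating logistic-regression coefficients: OR = e^beta, with 95% CI e^(beta ± 1.96·SE). A sketch that reproduces the reported OR for age (the beta and SE are back-derived from the published interval, not taken from the study itself):

```python
from math import exp

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a two-sided 95% confidence interval."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# beta/SE back-derived from the reported age OR (illustrative only)
or_age, lo, hi = odds_ratio_ci(0.0450, 0.0173)
```

    This reproduces the reported age odds ratio of 1.046 (95% CI, 1.011-1.082): a one-year increase in age multiplies the odds of pharmacologically treated scar pain by about 1.05.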

  3. The bothersomeness of sciatica: patients’ self-report of paresthesia, weakness and leg pain

    Science.gov (United States)

    Haugen, Anne Julsrud; Keller, Anne; Natvig, Bård; Brox, Jens Ivar; Grotle, Margreth

    2009-01-01

    The objective of the study was to investigate how patients with sciatica due to disc herniation rate the bothersomeness of paresthesia and weakness as compared to leg pain, and how these symptoms are associated with socio-demographic and clinical characteristics. A cross-sectional study was conducted on 411 patients with clinical signs of radiculopathy. Items from the Sciatica Bothersomeness Index (0 = none to 6 = extremely) were used to establish values for paresthesia, weakness and leg pain. Associations with socio-demographic and clinical variables were analyzed by multiple linear regression. Mean scores (SD) were 4.5 (1.5) for leg pain, 3.4 (1.8) for paresthesia and 2.6 (2.0) for weakness. Women reported higher levels of bothersomeness for all three symptoms with mean scores approximately 10% higher than men. In the multivariate models, more severe symptoms were associated with lower physical function and higher emotional distress. Muscular paresis explained 19% of the variability in self-reported weakness, sensory findings explained 10% of the variability in paresthesia, and straight leg raising test explained 9% of the variability in leg pain. In addition to leg pain, paresthesia and weakness should be assessed when measuring symptom severity in sciatica. PMID:19488793

  4. Evaluation of two different epidural catheters in clinical practice: narrowing down the incidence of paresthesia!

    Science.gov (United States)

    Bouman, E A C; Gramke, H F; Wetzel, N; Vanderbroeck, T H T; Bruinsma, R; Theunissen, M; Kerkkamp, H E M; Marcus, M A E

    2007-01-01

    Although epidural anesthesia is considered safe, several complications may occur during puncture and insertion of a catheter. Reported incidences of paresthesia vary between 0.2% and 56%. A prospective, open, cohort-controlled pilot study was conducted in 188 patients, ASA I-III, aged 19-87 years, scheduled for elective surgery under epidural anesthesia. We evaluated a 20 G polyamide (standard) catheter and a 20 G combined polyurethane-polyamide (new) catheter. Spontaneous reactions upon catheter insertion, paresthesia on questioning, inadvertent dural or intravascular puncture, and reasons for early catheter removal were recorded. The incidence of spontaneously reported paresthesia was 21.3% with the standard catheter and 16.7% with the new catheter. Systematically asking about paresthesia almost doubled the paresthesia rate. Intravascular cannulation occurred in 5%. No accidental dural punctures occurred. Technical problems led to early catheter removal in 13.3% of cases overall. The new catheter was at least equivalent to the standard regarding epidural success rate and safety: rates of paresthesia and of intravascular and dural cannulation.

  5. Infection Related Inferior Alveolar Nerve Paresthesia in the Lower Premolar Teeth

    OpenAIRE

    Censi, R.; Vavassori, V.; Borgonovo, A.E.; Re, D.

    2016-01-01

    Introduction. The aim of this paper was to describe two cases of infection-induced inferior alveolar nerve (IAN) paresthesia and to discuss the most appropriate treatment solutions. Methods. In two patients, periapical lesions that induced IAN paresthesia were identified. In the first case, the tooth had previously been endodontically treated, whereas in the second case the lesion was due to pulp necrosis. Results. For the first patient, progressive healing was observed only after tooth extraction. In the second patie...

  6. An uncommon clinical feature of IAN injury after third molar removal: a delayed paresthesia case series and literature review.

    Science.gov (United States)

    Borgonovo, Andrea; Bianchi, Albino; Marchetti, Andrea; Censi, Rachele; Maiorana, Carlo

    2012-05-01

    After an inferior alveolar nerve (IAN) injury, the onset of altered sensation usually begins immediately after surgery. However, it sometimes begins after several days, which is referred to as delayed paresthesia. The authors considered three different etiologies that likely produce inflammation along the nerve trunk and cause delayed paresthesia: compression of the clot, fibrous reorganization of the clot, and nerve trauma caused by bone fragments during clot organization. The aim of this article was to evaluate the etiology of IAN delayed paresthesia, analyze the literature, present a case series related to three different causes of this pathology, and compare delayed paresthesia with the classic immediate symptomatic paresthesia.

  7. Spinal Cord Stimulation for Treating Chronic Pain: Reviewing Preclinical and Clinical Data on Paresthesia-Free High-Frequency Therapy.

    Science.gov (United States)

    Chakravarthy, Krishnan; Richter, Hira; Christo, Paul J; Williams, Kayode; Guan, Yun

    2018-01-01

    Traditional spinal cord stimulation (SCS) requires that paresthesia overlap the chronically painful areas. However, the new paradigm of high-frequency SCS (HF-SCS) does not rely on paresthesia. We review preclinical and clinical studies on the use of paresthesia-free HF-SCS for various chronic pain states, including Nevro's paresthesia-free ultra-high-frequency 10 kHz therapy (HF10-SCS). Data sources included relevant literature identified through searches of PubMed, MEDLINE/OVID, and SCOPUS, and manual searches of the bibliographies of known primary and review articles. The primary goal is to describe the developing understanding of the preclinical mechanisms of HF-SCS and to review the clinical efficacy of paresthesia-free HF10-SCS for various chronic pain states. HF10-SCS offers a novel pain reduction tool without paresthesia for failed back surgery syndrome and chronic axial back pain. Preclinical findings indicate that the potential mechanisms of action of paresthesia-free HF-SCS differ from those of traditional SCS. To fully understand and utilize paresthesia-free HF-SCS, mechanistic study and translational research will be very important, with increasing collaboration between basic science and clinical communities to design better trials and optimize the therapy based on mechanistic findings from effective preclinical models and approaches. Future research in these vital areas may include preclinical and clinical components conducted in parallel to optimize the potential of this technology. © 2017 International Neuromodulation Society.

  8. Sub-paresthesia spinal cord stimulation reverses thermal hyperalgesia and modulates low frequency EEG in a rat model of neuropathic pain.

    Science.gov (United States)

    Koyama, Suguru; Xia, Jimmy; Leblanc, Brian W; Gu, Jianwen Wendy; Saab, Carl Y

    2018-05-08

    Paresthesia, a common feature of epidural spinal cord stimulation (SCS) for pain management, presents a challenge to the double-blind study design. Although sub-paresthesia SCS has been shown to be effective in alleviating pain, empirical criteria for sub-paresthesia SCS have not been established and its basic mechanisms of action at supraspinal levels are unknown. We tested our hypothesis that sub-paresthesia SCS attenuates behavioral signs of neuropathic pain in a rat model, and modulates pain-related theta (4-8 Hz) power of the electroencephalogram (EEG), a previously validated correlate of spontaneous pain in rodent models. Results show that sub-paresthesia SCS attenuates thermal hyperalgesia and power amplitude in the 3-4 Hz range, consistent with clinical data showing significant yet modest analgesic effects of sub-paresthesia SCS in humans. Therefore, we present evidence for anti-nociceptive effects of sub-paresthesia SCS in a rat model of neuropathic pain and further validate EEG theta power as a reliable 'biosignature' of spontaneous pain.

  9. Exercising Impacts on Fatigue, Depression, and Paresthesia in Female Patients with Multiple Sclerosis.

    Science.gov (United States)

    Razazian, Nazanin; Yavari, Zeinab; Farnia, Vahid; Azizi, Akram; Kordavani, Laleh; Bahmani, Dena Sadeghi; Holsboer-Trachsler, Edith; Brand, Serge

    2016-05-01

    Multiple sclerosis (MS) is a chronic progressive autoimmune disease impacting both body and mind. Typically, patients with MS report fatigue, depression, and paresthesia. Standard treatment consists of immune modulatory medication, though there is growing evidence that exercise programs have a positive influence on fatigue and psychological symptoms such as depression. We tested the hypothesis that, in addition to standard immune regulatory medication, either yoga or aquatic exercise can ameliorate both fatigue and depression, and we examined whether these interventions also influence paresthesia compared with a nonexercise control condition. Fifty-four women with MS (mean age: M = 33.94 yr, SD = 6.92) were randomly assigned to one of the following conditions: yoga, aquatic exercise, or nonexercise control. Their existing immune modulatory therapy remained unchanged. Participants completed questionnaires covering symptoms of fatigue, depression, and paresthesia, both at baseline and on completion of the study 8 wk later. Compared with the nonexercise control condition and over time, fatigue, depression, and paresthesia decreased significantly in the yoga and aquatic exercise groups. On study completion, the likelihood of reporting moderate to severe depression was 35-fold higher in the nonexercise control condition than in the intervention conditions (yoga and aquatic exercise collapsed). The pattern of results suggests that, for females with MS treated with standard immune regulatory medication, exercise training programs such as yoga and aquatic exercise positively impact core symptoms of MS, namely fatigue, depression, and paresthesia. Exercise training programs should be considered in the future as possible complements to standard treatments.

  10. An assessment of adult risks of paresthesia due to mercury from coal combustion

    Energy Technology Data Exchange (ETDEWEB)

    Lipfert, F.W.; Moskowitz, P.D.; Fthenakis, V.; Dephillips, M.; Viren, J.; Saroff, L. [Brookhaven National Laboratory, Upton, NY (United States). Dept. of Applied Science

    1995-02-01

    This paper presents a probabilistic assessment of the risks of transient adult paresthesia (tingling of the extremities) resulting from ingestion of methylmercury (MeHg) in fish and shellfish. Two scenarios are evaluated: a baseline, in which the MeHg dose results from the combined effects of eating canned tuna, various marine seafood, and freshwater sportfish, and an impact scenario in which the Hg content of the freshwater sportfish is increased by local deposition from a hypothetical 1000-MWe coal-fired power plant. Measurements from the literature are used to establish the parameters of the baseline, including atmospheric rates of Hg deposition and the distributions of MeHg in fish. The Hg intake for the impact scenario is then based on linear scaling of the additional annual Hg deposition as estimated from a Gaussian plume dispersion model. Human health responses are based on a logistic fit to the frequencies of paresthesia observed during a grain-poisoning incident in Iraq in 1971-1972. Based on a background prevalence rate of 2.2% for adult paresthesia, the assessment predicts a 5% chance that the increase in paresthesia prevalence due to either baseline or incremental MeHg doses might approach about 1% of the background prevalence rate. 15 refs., 3 figs., 3 tabs.
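    The logistic fit mentioned in this abstract maps MeHg dose to the probability of paresthesia. A sketch of the functional form, with an intercept chosen to match the stated 2.2% background prevalence at zero incremental dose; the slope is purely hypothetical, as the paper's fitted parameters are not given in the abstract:

```python
import math

def paresthesia_prob(dose, beta0=-3.794, beta1=0.1):
    """Logistic dose-response curve: P(paresthesia) at a given MeHg dose.
    beta0 = ln(0.022 / 0.978) reproduces the 2.2% background prevalence
    at dose 0; beta1 is a hypothetical slope for illustration only."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * dose)))

p0 = paresthesia_prob(0.0)  # background prevalence, ~0.022
```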

  11. [Facial diplegia with atypical paresthesia. A variant of Guillain-Barré syndrome].

    Science.gov (United States)

    Dal Verme, Agustín; Acosta, Paula; Margan, Mercedes; Pagnini, Cecilia; Dellepiane, Eugenia; Peralta, Christian

    2015-01-01

    Guillain-Barré syndrome is an acute demyelinating disease that classically presents with muscular weakness and absent reflexes. There are multiple variants and atypical forms of the disease, facial diplegia with paresthesia being one of them. Moreover, while absent reflexes are typical of this syndrome, they are not a constant finding, since about 10% of patients retain reflexes. We describe an atypical presentation with bilateral facial palsy, paresthesia, brisk reflexes, and weakness of the lower limbs in a 33-year-old woman.

  12. Flare-up with associated paresthesia of a mandibular second premolar with three root canals.

    Science.gov (United States)

    Glassman, G D

    1987-07-01

    A case report is presented that deals with mental nerve paresthesia resulting from the "flare-up" of a mandibular second premolar with three root canals. A review of the literature and discussion follow, which suggest possible mechanisms that may be responsible for paresthesia as well as treatment regimens that may be used to minimize the incidence of this unexpected but occasional post-treatment endodontic sequela.

  13. Direction of catheter insertion and the incidence of paresthesia during continuous epidural anesthesia in the elderly patients

    Science.gov (United States)

    Kim, Jong-Hak; Lee, Jun Seop

    2013-01-01

    Background Continuous epidural anesthesia is useful for endoscopic urologic surgery, which is mostly performed in elderly patients. In such cases it is necessary to obtain successful sacral anesthesia, which may require inserting the epidural catheter in the caudad direction. However, continuous epidural catheterization has been associated with paresthesias. This study aimed to evaluate the effect of the direction of catheter insertion on the incidence of paresthesias in elderly patients. Methods Two hundred elderly patients scheduled for endoscopic urologic surgery were enrolled. The epidural catheter was inserted at L2-3, L3-4, or L4-5 using a Tuohy needle. In Group I (n = 100), the bevel of the Tuohy needle was directed cephalad during catheter insertion; in Group II (n = 100), it was directed caudad. During catheter insertion, an anesthesiologist evaluated the presence of paresthesias and the ease or difficulty of insertion. Results In Group I (n = 97), 15.5% of the patients had paresthesias, versus 18.4% in Group II (n = 98), with no significant difference between the two groups. There were also no significant differences between the groups in paresthesia by insertion site or in the ease or difficulty of catheter insertion. Conclusions The direction of epidural catheter insertion did not significantly influence the incidence of paresthesias in elderly patients. PMID:23741568

  14. Prevalence and profile of sleep disturbances in Guillain-Barre Syndrome: a prospective questionnaire-based study during 10 days of hospitalization.

    Science.gov (United States)

    Karkare, K; Sinha, S; Taly, A B; Rao, S

    2013-02-01

    Sleep disturbances in Guillain-Barre Syndrome (GBS), though common, have not received focused attention. Our aim was to study the frequency and nature of sleep disturbances in patients with GBS, using validated questionnaires, and to analyze the contributing factors. This prospective study included 60 patients fulfilling National Institute of Neurological and Communicative Diseases and Stroke (NINCDS) criteria for GBS (mean age: 32.7 ± 12.9 years; median: 30 years; M:F = 46:14), evaluated from 2008 to 2010. Data regarding sleep were collected on 10 consecutive days following admission using the Richard Campbell Sleep score, St Mary's Hospital Sleep Questionnaire, and Pittsburgh Sleep Quality Index (PSQI), and were correlated with possible contributing factors such as pain, paresthesia, anxiety, depression, autonomic dysfunction, severity of disease, and therapeutic interventions, among others. Qualitative and quantitative sleep disturbances were frequent, involving over 50% of patients: abnormal PSQI - 13.3%, abnormal score on the Richard scale - 51.6%, abnormal sleep onset latency - 35%, sleep fragmentation - 40%, and reduced sleep duration - 46.6%. Symptoms were most severe during the first week of hospitalization and lessened thereafter. Sleep disturbances as scored on the Richard scale correlated significantly with anxiety, pain, paresthesia, and severity of immobility (P < 0.05), but not with depression or use of analgesics or antineuritic drugs. This study, the first of its kind, suggests that sleep disturbance in GBS is frequent, multifactorial, often disturbing, and varies during the course of illness. Routine enquiry into sleep disturbances and timely intervention may reduce morbidity and improve quality of life. © 2012 John Wiley & Sons A/S.

  15. Cortical integrity of the inferior alveolar canal as a predictor of paresthesia after third-molar extraction.

    Science.gov (United States)

    Park, Wonse; Choi, Ji-Wook; Kim, Jae-Young; Kim, Bong-Chul; Kim, Hyung Jun; Lee, Sang-Hwy

    2010-03-01

    Paresthesia is a well-known complication of extraction of mandibular third molars (MTMs). The authors evaluated the relationship between paresthesia after MTM extraction and the cortical integrity of the inferior alveolar canal (IAC) by using computed tomography (CT). The authors designed a retrospective cohort study involving participants considered, on the basis of panoramic imaging, to be at high risk of experiencing injury of the inferior alveolar nerve who subsequently underwent CT imaging and extraction of the MTMs. The primary predictor variable was the contact relationship between the IAC and the MTM as viewed on a CT image, classified into three groups: group 1, no contact; group 2, contact between the MTM and the intact IAC cortex; group 3, contact between the MTM and the interrupted IAC cortex. The secondary predictor variable was the number of CT image slices showing the cortical interruption around the MTM. The outcome variable was the presence or absence of postoperative paresthesia after MTM extraction. The study sample comprised 179 participants who underwent MTM extraction (a total of 259 MTMs). Their mean age was 23.6 years, and 85 (47.5 percent) were male. The overall prevalence of paresthesia was 4.2 percent (11 of 259 teeth). The prevalence of paresthesia in group 3 (involving an interrupted IAC cortex) was 11.8 percent (10 of 85 cases), while for group 2 (involving an intact IAC cortex) and group 1 (involving no contact) it was 1.0 percent (1 of 98 cases) and 0.0 percent (no cases), respectively. The frequency of nerve damage increased with the number of CT image slices showing loss of cortical integrity (P=.043). The results of this study indicate that loss of IAC cortical integrity is associated with an increased risk of experiencing paresthesia after MTM extraction.

  16. Physiological basis of tingling paresthesia evoked by hydroxy-α-sanshool

    Science.gov (United States)

    Lennertz, Richard C; Tsunozaki, Makoto; Bautista, Diana M; Stucky, Cheryl L

    2010-01-01

    Hydroxy-α-sanshool, the active ingredient in plants of the prickly ash plant family, induces robust tingling paresthesia by activating a subset of somatosensory neurons. However, the subtypes and physiological function of sanshool-sensitive neurons remain unknown. Here we use the ex vivo skin-nerve preparation to examine the pattern and intensity with which the sensory terminals of cutaneous neurons respond to hydroxy-α-sanshool. We found that sanshool excites virtually all D-hair afferents, a distinct subset of ultra-sensitive light touch receptors in the skin, and targets novel populations of Aβ and C-fiber nerve afferents. Thus, sanshool provides a novel pharmacological tool for discriminating functional subtypes of cutaneous mechanoreceptors. The identification of sanshool-sensitive fibers represents an essential first step in identifying the cellular and molecular mechanisms underlying tingling paresthesia that accompanies peripheral neuropathy and injury. PMID:20335471

  17. Physiological basis of tingling paresthesia evoked by hydroxy-alpha-sanshool.

    Science.gov (United States)

    Lennertz, Richard C; Tsunozaki, Makoto; Bautista, Diana M; Stucky, Cheryl L

    2010-03-24

    Hydroxy-alpha-sanshool, the active ingredient in plants of the prickly ash plant family, induces robust tingling paresthesia by activating a subset of somatosensory neurons. However, the subtypes and physiological function of sanshool-sensitive neurons remain unknown. Here we use the ex vivo skin-nerve preparation to examine the pattern and intensity with which the sensory terminals of cutaneous neurons respond to hydroxy-alpha-sanshool. We found that sanshool excites virtually all D-hair afferents, a distinct subset of ultrasensitive light-touch receptors in the skin and targets novel populations of Abeta and C fiber nerve afferents. Thus, sanshool provides a novel pharmacological tool for discriminating functional subtypes of cutaneous mechanoreceptors. The identification of sanshool-sensitive fibers represents an essential first step in identifying the cellular and molecular mechanisms underlying tingling paresthesia that accompanies peripheral neuropathy and injury.

  18. Eagle's syndrome associated with lingual nerve paresthesia: a case report.

    Science.gov (United States)

    Dong, Zhiwei; Bao, Haihong; Zhang, Li; Hua, Zequan

    2014-05-01

    Eagle's syndrome is characterized by a variety of symptoms, including throat pain, sensation of a foreign body in the pharynx, dysphagia, referred otalgia, and neck and throat pain exacerbated by head rotation. Any styloid process longer than 25 mm should be considered elongated and will usually be responsible for Eagle's syndrome. Surgical resection of the elongated styloid is a routine treatment and can be accomplished using a transoral or an extraoral approach. We report a patient with a rare giant styloid process that was approximately 81.7 mm. He complained of a rare symptom: hemitongue paresthesia. After removal of the elongated styloid process using the extraoral approach, his symptoms, including the hemitongue paresthesia, were alleviated. We concluded that if the styloid process displays medium to severe elongation, the extraoral approach will be appropriate. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  19. Influence of upper extremity positioning on pain, paresthesia, and tolerance: advancing current practice.

    Science.gov (United States)

    Lester, Mark E; Hazelton, Jill; Dewey, William S; Casey, James C; Richard, Reginald

    2013-01-01

    Loss of upper extremity motion caused by axillary burn scar contracture is a major complication of burn injury. Positioning acutely injured patients with axillary burns in positions above 90° of shoulder abduction may improve shoulder motion and minimize scar contracture. However, these positions may increase injury risk to the nerves of the brachial plexus. This study evaluated the occurrence of paresthesias, pain, and positional intolerance in four shoulder abduction positions in healthy adults. Sixty men and women were placed in four randomly assigned shoulder abduction positions for up to 2 hours: 1) 90° with elbow extension (90 ABD); 2) 130° with elbow flexion at 110° (130 ABD); 3) 150° with elbow extension (150 ABD); and 4) 170° with elbow extension (170 ABD). Outcome measures were assessed at baseline and every 30 minutes and included the occurrence of upper extremity paresthesias, position comfort/tolerance, and pain. Transient paresthesias, lasting less than 3 minutes, occurred in all test positions in 10 to 37% of the cases. Significantly fewer subjects reported paresthesias in the 90 ABD position compared with the other positions (P < .01). Pain was reported more frequently in the 170° position (68%) compared with the other positions (P < .01). Positioning with the elbow flexed or in terminal extension is not recommended, regardless of the degree of shoulder abduction. Positioning patients in a position of 150° of shoulder abduction was shown to be safe and well tolerated. Consideration of positions above this range should be undertaken cautiously and only with strict monitoring in alert and oriented patients for short time periods.

  20. Crosslinguistic Application of English-Centric Rhythm Descriptors in Motor Speech Disorders

    Science.gov (United States)

    Liss, Julie M.; Utianski, Rene; Lansford, Kaitlin

    2014-01-01

    Background Rhythmic disturbances are a hallmark of motor speech disorders, in which motor control deficits interfere with the outward flow of speech and, by extension, with speech understanding. As the functions of rhythm are language-specific, breakdowns in rhythm should have language-specific consequences for communication. Objective The goals of this paper are to (i) review the cognitive-linguistic role of rhythm in speech perception, both in a general sense and crosslinguistically; (ii) present new results on the lexical segmentation challenges posed by different types of dysarthria in American English; and (iii) offer a framework of crosslinguistic considerations for speech rhythm disturbances in the diagnosis and treatment of communication disorders associated with motor speech disorders. Summary This review presents theoretical and empirical reasons for considering speech rhythm a critical component of communication deficits in motor speech disorders, and addresses the need for crosslinguistic research to explore language-universal versus language-specific aspects of motor speech disorders. PMID:24157596

  1. Radiation Therapy to the Plexus Brachialis in Breast Cancer Patients: Analysis of Paresthesia in Relation to Dose and Volume.

    Science.gov (United States)

    Lundstedt, Dan; Gustafsson, Magnus; Steineck, Gunnar; Sundberg, Agnetha; Wilderäng, Ulrica; Holmberg, Erik; Johansson, Karl-Axel; Karlsson, Per

    2015-06-01

    To identify volume and dose predictors of paresthesia after irradiation of the brachial plexus among women treated for breast cancer. The women had breast surgery with axillary dissection, followed by radiation therapy with (n=192) or without (n=509) irradiation of the supraclavicular lymph nodes (SCLNs). The breast area was treated to 50 Gy in 2.0-Gy fractions, and the 192 women also received 46 to 50 Gy to the SCLNs. We delineated the brachial plexus on 3-dimensional dose-planning computerized tomography. Three to eight years after radiation therapy the women answered a questionnaire. Irradiated volumes and doses were calculated and related to the occurrence of paresthesia in the hand. After axillary dissection with radiation therapy to the SCLNs, 20% of the women reported paresthesia, compared with 13% after axillary dissection without radiation therapy, a relative risk (RR) of 1.47 (95% confidence interval [CI] 1.02-2.11). Paresthesia was reported by 25% after radiation therapy to the SCLNs with a V40Gy ≥ 13.5 cm³, compared with 13% without radiation therapy, RR 1.83 (95% CI 1.13-2.95). Women with a maximum dose to the brachial plexus of ≥55.0 Gy had a 25% occurrence of paresthesia, RR 1.86 (95% CI 0.68-5.07, not significant). Our results indicate a correlation between larger irradiated volumes of the brachial plexus and an increased risk of reported paresthesia among women treated for breast cancer. Copyright © 2015 Elsevier Inc. All rights reserved.
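    The relative risks reported in this abstract are ratios of the two groups' paresthesia proportions, with a confidence interval computed on the log scale. A sketch using crude counts reconstructed from the reported percentages (the published RR of 1.47 is presumably covariate-adjusted, so this crude figure differs slightly):

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Crude relative risk of two proportions (a/n1 vs. b/n2) with a
    Wald confidence interval computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# ~20% of 192 irradiated vs. ~13% of 509 non-irradiated women:
rr, lo, hi = relative_risk(38, 192, 66, 509)
```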

  2. A case of mental nerve paresthesia due to dynamic compression of alveolar inferior nerve along an elongated styloid process

    NARCIS (Netherlands)

    Gooris, P.J.J.; Zijlmans, J.C.M.; Bergsma, J.E.; Mensink, G.

    2014-01-01

    Spontaneous paresthesia of the mental nerve is considered an ominous clinical sign. Mental nerve paresthesia has also been referred to as numb chin syndrome. Several potentially different factors have been investigated for their role in interfering with the inferior alveolar nerve (IAN) and causing

  3. Lack of body positional effects on paresthesias when stimulating the dorsal root ganglion (DRG) in the treatment of chronic pain.

    Science.gov (United States)

    Kramer, Jeffery; Liem, Liong; Russo, Marc; Smet, Iris; Van Buyten, Jean-Pierre; Huygen, Frank

    2015-01-01

    One prominent side effect of neurostimulation techniques, and in particular spinal cord stimulation (SCS), is the change in stimulation intensity when moving from an upright (vertical) to a recumbent or supine (horizontal) position and vice versa. It is well understood that the effects of gravity, combined with the highly conductive cerebrospinal fluid, provide the mechanism by which changes in body position alter the intensity of stimulation-induced paresthesias. While these effects are well established for leads placed within the more medial aspects of the spinal canal, little is known about them for leads placed in the lateral epidural space, in particular within the neural foramina near the dorsal root ganglion (DRG). We prospectively validated a newly developed paresthesia intensity rating scale and compared perceived paresthesia intensities when subjects assumed upright vs. supine body positions during neuromodulation of the DRG. On average, the correlation coefficient between stimulation intensity (pulse amplitude) and perceived paresthesia intensity was 0.83, demonstrating a strong linear relationship. No significant differences in paresthesia intensity were reported within subjects when moving from an upright (4.5 ± 0.14) to a supine (4.5 ± 0.12) position (p > 0.05). This effect persisted through 12 months following implant. Neuromodulation of the DRG produces paresthesias that remain consistent across body positions, suggesting that this paradigm may be less susceptible to positional effects than dorsal column stimulation. © 2014 International Neuromodulation Society.
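    The coefficient of 0.83 quoted above is a Pearson correlation between pulse amplitude and rated paresthesia intensity. A self-contained sketch with entirely hypothetical amplitude/rating pairs, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pulse amplitudes (mA) vs. paresthesia intensity ratings:
amps = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ratings = [1.0, 2.2, 2.8, 4.1, 4.4, 5.9]
r = pearson_r(amps, ratings)  # strongly positive for these toy data
```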

  4. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and to compare speech characteristics and symptoms with earlier survey findings in mainly English-speaking populations. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics of CAS. They rated their own assessment skills and estimated the clinical occurrence of the diagnosis. The seven speech characteristics most often reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived as typical by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for estimated clinical occurrence was 5%. The number of suspected CAS cases in the clinical caseload was approximately one new patient per SLP per year. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. These findings could possibly contribute to cross-linguistic consensus on CAS characteristics.

  5. Reversed Palmaris Longus Muscle Causing Volar Forearm Pain and Ulnar Nerve Paresthesia.

    Science.gov (United States)

    Bhashyam, Abhiram R; Harper, Carl M; Iorio, Matthew L

    2017-04-01

    A case of volar forearm pain associated with ulnar nerve paresthesia caused by a reversed palmaris longus muscle is described. The patient, an otherwise healthy 46-year-old male laborer, presented after a previous unsuccessful forearm fasciotomy with complaints of exercise-exacerbated pain affecting the volar forearm, associated with paresthesia in the ulnar nerve distribution. A second decompressive fasciotomy was performed, revealing an anomalous "reversed" palmaris longus with the muscle belly located distally. Resection of the anomalous muscle provided full relief of pain and sensory symptoms. Copyright © 2017 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  6. A Retrospective Study of Paresthesia of the Dental Alveolar Nerves

    Science.gov (United States)

    Nickel, Alfred A.

    1990-01-01

    Paresthesia is a rare clinical finding subsequent to surgery accompanied by the administration of local anesthetics. A small patient population was identified whose clinical problem may be explained by neurotoxicity due to a local anesthetic metabolite. Reasonable questions arise from these clinical observations that would benefit from prospective studies to explain sensory loss on a biochemical basis. PMID:2077986

  7. Radiation Therapy to the Plexus Brachialis in Breast Cancer Patients: Analysis of Paresthesia in Relation to Dose and Volume

    International Nuclear Information System (INIS)

    Lundstedt, Dan; Gustafsson, Magnus; Steineck, Gunnar; Sundberg, Agnetha; Wilderäng, Ulrica; Holmberg, Erik; Johansson, Karl-Axel; Karlsson, Per

    2015-01-01

    Purpose: To identify volume and dose predictors of paresthesia after irradiation of the brachial plexus among women treated for breast cancer. Methods and Materials: The women had breast surgery with axillary dissection, followed by radiation therapy with (n=192) or without (n=509) irradiation of the supraclavicular lymph nodes (SCLNs). The breast area was treated to 50 Gy in 2.0-Gy fractions, and the 192 women also received 46 to 50 Gy to the SCLNs. We delineated the brachial plexus on 3-dimensional dose-planning computerized tomography. Three to eight years after radiation therapy the women answered a questionnaire. Irradiated volumes and doses were calculated and related to the occurrence of paresthesia in the hand. Results: After axillary dissection with radiation therapy to the SCLNs, 20% of the women reported paresthesia, compared with 13% after axillary dissection without radiation therapy, a relative risk (RR) of 1.47 (95% confidence interval [CI] 1.02-2.11). Paresthesia was reported by 25% after radiation therapy to the SCLNs with a V40Gy ≥ 13.5 cm³, compared with 13% without radiation therapy, RR 1.83 (95% CI 1.13-2.95). Women with a maximum dose to the brachial plexus of ≥55.0 Gy had a 25% occurrence of paresthesia, RR 1.86 (95% CI 0.68-5.07, not significant). Conclusion: Our results indicate a correlation between larger irradiated volumes of the brachial plexus and an increased risk of reported paresthesia among women treated for breast cancer.

  8. Radiation Therapy to the Plexus Brachialis in Breast Cancer Patients: Analysis of Paresthesia in Relation to Dose and Volume

    Energy Technology Data Exchange (ETDEWEB)

    Lundstedt, Dan, E-mail: dan.lundstedt@gu.se [Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Division of Clinical Cancer Epidemiology, Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Gustafsson, Magnus [Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Division of Clinical Cancer Epidemiology, Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Department of Therapeutic Radiation Physics, Sahlgrenska University Hospital, Gothenburg (Sweden); Steineck, Gunnar [Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Division of Clinical Cancer Epidemiology, Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Division of Clinical Cancer Epidemiology, Department of Oncology-Pathology, Karolinska Institute, Stockholm (Sweden); Sundberg, Agnetha [Department of Therapeutic Radiation Physics, Sahlgrenska University Hospital, Gothenburg (Sweden); Wilderäng, Ulrica [Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Division of Clinical Cancer Epidemiology, Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden); Holmberg, Erik [Regional Cancer Center, Sahlgrenska University Hospital, Gothenburg (Sweden); Johansson, Karl-Axel [Department of Therapeutic Radiation Physics, Sahlgrenska University Hospital, Gothenburg (Sweden); Karlsson, Per [Department of Oncology, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg (Sweden)

    2015-06-01

    Purpose: To identify volume and dose predictors of paresthesia after irradiation of the brachial plexus among women treated for breast cancer. Methods and Materials: The women had breast surgery with axillary dissection, followed by radiation therapy with (n=192) or without (n=509) irradiation of the supraclavicular lymph nodes (SCLNs). The breast area was treated to 50 Gy in 2.0-Gy fractions, and 192 of the women also had 46 to 50 Gy to the SCLNs. We delineated the brachial plexus on 3-dimensional dose-planning computerized tomography. Three to eight years after radiation therapy the women answered a questionnaire. Irradiated volumes and doses were calculated and related to the occurrence of paresthesia in the hand. Results: After axillary dissection with radiation therapy to the SCLNs, 20% of the women reported paresthesia, compared with 13% after axillary dissection without radiation therapy, resulting in a relative risk (RR) of 1.47 (95% confidence interval [CI] 1.02-2.11). Paresthesia was reported by 25% after radiation therapy to the SCLNs with a V40Gy ≥ 13.5 cm³, compared with 13% without radiation therapy, RR 1.83 (95% CI 1.13-2.95). Women having a maximum dose to the brachial plexus of ≥55.0 Gy had a 25% occurrence of paresthesia, with RR 1.86 (95% CI 0.68-5.07, not significant). Conclusion: Our results indicate a correlation between larger irradiated volumes of the brachial plexus and an increased risk of reported paresthesia among women treated for breast cancer.

  9. Tramadol Overdose Induced Transient Paresthesia and Decreased Muscle Strength: A Case Series

    Directory of Open Access Journals (Sweden)

    Khosrow Ghasempouri

    2014-06-01

    Conclusion: Transient paresthesia and transient symmetrical decline in muscle strength of upper and lower limbs are potential neurologic complications following tramadol abuse and overdose. Further studies are needed to fully clarify the pathogenesis and mechanism of these complications following tramadol overdose.

  10. Neuropsychological analysis of a typewriting disturbance following cerebral damage.

    Science.gov (United States)

    Boyle, M; Canter, G J

    1987-01-01

    Following a left CVA, a skilled professional typist sustained a disturbance of typing disproportionate to her handwriting disturbance. Typing errors were predominantly of the sequencing type, with spatial errors much less frequent, suggesting that the impairment was based on a relatively early (premotor) stage of processing. Depriving the subject of visual feedback during handwriting greatly increased her error rate. Similarly, interfering with auditory feedback during speech substantially reduced her self-correction of speech errors. These findings suggested that impaired ability to utilize somesthetic information--probably caused by the subject's parietal lobe lesion--may have been the basis of the typing disorder.

  11. High-Frequency Stimulation of Dorsal Column Axons: Potential Underlying Mechanism of Paresthesia-Free Neuropathic Pain Relief.

    Science.gov (United States)

    Arle, Jeffrey E; Mei, Longzhi; Carlson, Kristen W; Shils, Jay L

    2016-06-01

    Spinal cord stimulation (SCS) treats neuropathic pain through retrograde stimulation of dorsal column axons and their inhibitory effects on wide dynamic range (WDR) neurons. Typical SCS uses frequencies from 50-100 Hz. Newer stimulation paradigms use high-frequency stimulation (HFS) up to 10 kHz and produce pain relief but without paresthesia. Our hypothesis is that HFS preferentially blocks larger diameter axons (12-15 µm) based on dynamics of ion channel gates and the electric potential gradient seen along the axon, resulting in inhibition of WDR cells without paresthesia. We input field potential values from a finite element model of SCS into an active axon model with ion channel subcomponents for fiber diameters 1-20 µm and simulated dynamics on a 0.001 msec time scale. Assuming some degree of wave rectification seen at the axon, action potential (AP) blockade occurs as hypothesized, preferentially in larger over smaller diameters, with blockade in most medium and large diameters occurring between 4.5 and 10 kHz. Simulations show both ion channel gate and virtual anode dynamics are necessary. At clinical HFS frequencies and pulse widths, HFS preferentially blocks larger-diameter fibers and concomitantly recruits medium and smaller fibers. These effects are a result of interaction between ion gate dynamics and the "activating function" (AF) deriving from current distribution over the axon. The larger fibers that cause paresthesia in low-frequency stimulation are blocked, while medium and smaller fibers are recruited, leading to paresthesia-free neuropathic pain relief by inhibiting WDR cells. © 2016 International Neuromodulation Society.
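The "activating function" mentioned above is, in the standard cable-model treatment, the second spatial difference of the extracellular potential sampled at the nodes of Ranvier; because internodal spacing scales with fiber diameter (roughly 100× the diameter), larger fibers see a larger activating function from the same field, which underlies the diameter selectivity discussed. A minimal sketch of that diameter dependence, assuming a simple point-source electrode in a homogeneous medium (not the paper's finite element model; the current, distance, and resistivity values are illustrative):

```python
import math

def extracellular_potential(x_mm, z_mm, i_ua=-1000.0, rho_ohm_m=3.0):
    """Potential (mV) at axial offset x from a point source at distance z:
    V = rho * I / (4 * pi * r), homogeneous isotropic medium."""
    r_m = math.sqrt(x_mm**2 + z_mm**2) * 1e-3
    return rho_ohm_m * (i_ua * 1e-6) / (4 * math.pi * r_m) * 1e3  # mV

def activating_function(internode_mm, z_mm=3.0):
    """Second spatial difference of V_e at the node under the electrode."""
    v = lambda x: extracellular_potential(x, z_mm)
    return v(-internode_mm) - 2 * v(0.0) + v(internode_mm)

# Internodal length ~ 100 x fiber diameter
af_small = activating_function(0.5)   # ~5 um fiber
af_large = activating_function(1.5)   # ~15 um fiber
print(af_small, af_large)
```

With a cathodic (negative) source, the activating function at the node under the electrode is positive (depolarizing) for both fibers, but several times larger for the wider internodal spacing, consistent with large fibers being preferentially driven, and at high frequencies preferentially blocked.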

  12. Effect of Cryoanalgesia on Post Midsternotomy Pain and Paresthesia Following CABG

    Directory of Open Access Journals (Sweden)

    H Hosseini

    2009-07-01

    Full Text Available Introduction: Control of post-thoracotomy pain is particularly important in preventing postoperative respiratory complications. Several methods have been proposed for control of postoperative pain. Cryoanalgesia by freezing of the intercostal nerves can provide long-term pain relief in the postoperative period, which probably results in cutaneous sensory changes. Methods: This clinical trial was done on 124 patients who underwent CABG surgery. Patients were randomly divided into two groups: a control group (group I) and a study group (group II). In the study group, cryoanalgesia was applied intraoperatively to the intercostal nerves. All of the patients received appropriate analgesia on demand in the postoperative period. Pain at the LIMA harvesting site and sternum was measured by visual analogue pain score before discharge and at one and three months following cryoanalgesia. In all of the patients, the presence of paresthesia was evaluated. The amount of administered analgesics (narcotic, opium, indomethacin) was noted daily. Data were analyzed using SPSS 11.5 software. Results: The sternal pain score in the study group was higher before discharge and lower at one and three months after operation than in the control group, which was statistically significant (P=0.01). The pain score of the LIMA region was higher before discharge, equal at one month postoperatively, and lower at three months postoperatively than in the control group (P=0.045). Use of morphine and opium was lower (P=0.017) and use of indomethacin was higher in the cryoanalgesia group, which was statistically significant (P=0.001). The incidence of paresthesia was lower in the study group (P=0.001). Conclusion: It is proposed that cryoanalgesia is a safe and effective method for reduction of pain, paresthesia, and the need for analgesics following CABG operation.

  13. Performance effects of acute β-alanine induced paresthesia in competitive cyclists.

    Science.gov (United States)

    Bellinger, Phillip M; Minahan, Clare L

    2016-01-01

    β-alanine is a common ingredient in supplements consumed by athletes. Indeed, athletes may believe that the β-alanine induced paresthesia experienced shortly after ingestion is associated with its ergogenic effect, despite no scientific mechanism supporting this notion. The present study examined changes in cycling performance under conditions of β-alanine induced paresthesia. Eight competitive cyclists (VO2max = 61.8 ± 4.2 mL·kg⁻¹·min⁻¹) performed three practice trials, one baseline trial, and four experimental trials. The experimental trials comprised a 1-km cycling time trial under four conditions varying the information (athlete informed β-alanine or placebo) and supplement content (athlete received β-alanine or placebo) delivered to the cyclist: informed β-alanine/received β-alanine, informed placebo/received β-alanine, informed β-alanine/received placebo, and informed placebo/received placebo. Questionnaires were administered exploring the cyclists' experience of the effects of the experimental conditions. A possibly likely increase in mean power was associated with conditions in which β-alanine was administered (±95% CL: 2.2% ± 4.0%), but these results were inconclusive for performance enhancement (p = 0.32, effect size = 0.18, 56% chance of benefit relative to the smallest worthwhile change). A possibly harmful effect was observed when cyclists were correctly informed that they had ingested a placebo (-1.0% ± 1.9%). Questionnaire data suggested that β-alanine ingestion resulted in evident sensory side effects, and six cyclists reported placebo effects. Acute ingestion of β-alanine is not associated with improved 1-km TT performance in competitive cyclists. These findings contrast with the athletes' "belief," as cyclists reported improved energy and the ability to sustain a higher power output under conditions of β-alanine induced paresthesia.
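The "possibly likely" and "possibly harmful" language above comes from magnitude-based inference, in which the observed change and its confidence limits are converted into a probability that the true effect exceeds a smallest worthwhile change (SWC). A sketch of that conversion under a normal approximation; the SWC value used below is back-calculated so the result lands near the reported 56% chance of benefit, since the abstract does not state the SWC itself:

```python
import math

def prob_beneficial(mean_pct, ci_half_width_pct, swc_pct, z=1.96):
    """P(true effect > SWC), assuming the effect estimate is normal with
    SE back-calculated from the reported +/- 95% confidence limits."""
    se = ci_half_width_pct / z
    zscore = (swc_pct - mean_pct) / se
    # Normal CDF via the error function
    return 1.0 - 0.5 * (1.0 + math.erf(zscore / math.sqrt(2.0)))

# Observed mean change +2.2% with 95% CL of +/- 4.0%;
# SWC = 1.9% is an assumption chosen to match the reported 56%
p = prob_beneficial(2.2, 4.0, 1.9)
print(f"chance of benefit ~ {p:.0%}")
```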

  14. Paresthesia-Free High-Density Spinal Cord Stimulation for Postlaminectomy Syndrome in a Prescreened Population: A Prospective Case Series.

    Science.gov (United States)

    Sweet, Jennifer; Badjatiya, Anish; Tan, Daniel; Miller, Jonathan

    2016-04-01

    Spinal cord stimulation (SCS) traditionally is thought to require paresthesia, but there is evidence that paresthesia-free stimulation using high-density (HD) parameters might also be effective. The purpose of this study was to evaluate the relative effectiveness of conventional, subthreshold HD, and sham stimulation on pain intensity and quality of life. Fifteen patients with a response to conventional stimulation (60 Hz/350 μsec) were screened with a one-week trial of subthreshold HD stimulation (1200 Hz/200 μsec/amplitude 90% of paresthesia threshold) and enrolled if there was at least 50% reduction on the visual analog scale (VAS) for pain. Subjects were randomized into two groups and treated with four two-week periods of conventional, subthreshold HD, and sham stimulation in a randomized crossover design. Four of 15 patients responded to subthreshold HD stimulation. Mean VAS during conventional, subthreshold HD, and sham stimulation was 5.32 ± 0.63, 2.29 ± 0.41, and 6.31 ± 1.22, respectively. There was a significant difference in pain scores during the blinded crossover comparison of subthreshold HD vs. sham stimulation. Paresthesia is not necessary for pain relief using commercially available SCS devices, and may actually increase attention to pain. Subthreshold HD SCS represents a viable alternative to conventional stimulation among patients who are confirmed to have a clinical response to it. © 2015 International Neuromodulation Society.

  15. SPEECH VISUALIZATION SYSTEM AS A BASIS FOR SPEECH TRAINING AND COMMUNICATION AIDS

    Directory of Open Access Journals (Sweden)

    Oliana KRSTEVA

    1997-09-01

    Full Text Available One receives much more information through the visual sense than through the tactile one. However, most visual aids for hearing-impaired persons are not wearable, because it is difficult to make them compact, and it is not desirable to occupy the user's vision at all times. Generally, it is difficult to obtain integrated patterns by a single mathematical transform of the signal, such as a Fourier transform. To obtain an integrated pattern, speech parameters should be carefully extracted by a separate analysis for each parameter, and a visual pattern that can be intuitively understood by anyone must be synthesized from them. Successful integration of the speech parameters does not disturb the understanding of individual features, so the system can be used for both speech training and communication.
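The abstract's point, that a single transform is not enough and that separate parameters must be extracted per analysis frame and then integrated into one display, can be sketched with a toy frame analyzer (stdlib-only, naive DFT; the specific parameters chosen here, frame energy, zero-crossing rate, and dominant frequency, are common illustrative choices, not the ones used in the paper):

```python
import math

FS = 8000  # sample rate (Hz)

def analyze_frame(frame):
    """Extract a few per-frame speech parameters for visualization."""
    n = len(frame)
    energy = sum(s * s for s in frame) / n
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    # Naive DFT magnitude over the positive-frequency bins
    mags = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(frame))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(frame))
        mags.append(math.hypot(re, im))
    dominant_hz = max(range(1, len(mags)), key=lambda k: mags[k]) * FS / n
    return energy, zcr, dominant_hz

# 50 ms frame of a 200 Hz tone (phase offset avoids zeros landing on samples)
frame = [math.sin(2 * math.pi * 200 * i / FS + 0.1) for i in range(400)]
energy, zcr, dominant_hz = analyze_frame(frame)
print(energy, zcr, dominant_hz)
```

Each frame yields one parameter vector; a visualization system of the kind described would map such vectors to an integrated visual pattern rather than displaying any single transform raw.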

  16. Clinical Analysis about Diagnosis and Treatment of 86 Hand Paresthesia Cases Using MPS Theory and Pharmacopuncture Therapy

    Directory of Open Access Journals (Sweden)

    Sung-Won Oh

    2007-12-01

    Full Text Available Objectives: Hand paresthesia is a common syndrome, and its cause is more often unknown than known. The purpose of this study was to investigate the effects of Myofascial Pain Syndrome (MPS) theory in making the diagnosis, and of pharmacopuncture in treatment, for patients with hand paresthesia. Method: This study was carried out to establish clinical criteria for hand paresthesia. Patients with a history of diabetes or of neuropathy induced by alcohol or drugs were excluded, and 86 patients who had hand paresthesia of unknown cause were selected through an interview process. The effect of pharmacopuncture was analyzed using VAS scores before and after treatment. Results and conclusions: 56.9% of the patients with unknown cause were positive on diagnosis by MPS theory. VAS scores in the positive group decreased from 62.81±14.27 to 25.28±15.97, while those in the negative group decreased from 55.88±10.92 to 48.28±14.01. Treatment was accordingly more effective in the positive group than in the negative group, so diagnosis and treatment of hand numbness patients by MPS theory appears clinically useful.

  17. Multiple locations of nerve compression: an unusual cause of persistent lower limb paresthesia.

    Science.gov (United States)

    Ang, Chia-Liang; Foo, Leon Siang Shen

    2014-01-01

    There is little appreciation that the "double crush" phenomenon can account for persistent leg symptoms even after spinal neural decompression surgery. We present an unusual case of multiple locations of nerve compression causing persistent lower limb paresthesia in a 40-year-old male patient. The patient's lower limb paresthesia persisted after an initial spinal surgery to treat the spinal lateral recess stenosis thought to be responsible for the symptoms. It was later discovered that he had peroneal muscle herniations that had caused superficial peroneal nerve entrapments at 2 separate locations. The patient obtained much symptomatic relief after decompression of the peripheral nerve. The "double crush" phenomenon and multiple levels of nerve compression should be considered when evaluating lower limb neurogenic symptoms, especially after spinal nerve root surgery. Copyright © 2014 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  18. DFT-Domain Based Single-Microphone Noise Reduction for Speech Enhancement

    DEFF Research Database (Denmark)

    C. Hendriks, Richard; Gerkmann, Timo; Jensen, Jesper

    As speech processing devices like mobile phones, voice-controlled devices, and hearing aids have increased in popularity, people expect them to work anywhere and at any time without user intervention. However, the presence of acoustical disturbances limits the use of these applications, degrades their performance, or causes the user difficulties in understanding the conversation or appreciating the device. A common way to reduce the effects of such disturbances is through the use of single-microphone noise reduction algorithms for speech enhancement. The field of single-microphone noise reduction…
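A minimal sketch of the DFT-domain idea, here plain magnitude spectral subtraction with the noisy phase retained (one classic single-microphone method; the record above surveys a broader family of estimators): the noise magnitude spectrum is learned from a noise-only stretch and subtracted per frequency bin. The toy signal, a tone plus a stationary hum, is illustrative only.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(xf):
    n = len(xf)
    return [sum(xf[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def spectral_subtraction(noisy, noise_only):
    """Subtract the noise magnitude spectrum per bin; keep the noisy phase."""
    yf, nf = dft(noisy), dft(noise_only)
    sf = [max(abs(y) - abs(w), 0.0) * cmath.exp(1j * cmath.phase(y))
          for y, w in zip(yf, nf)]
    return idft(sf)

FS, N = 1000, 200
clean = [math.sin(2 * math.pi * 100 * t / FS) for t in range(N)]     # speech stand-in
hum = [0.5 * math.sin(2 * math.pi * 50 * t / FS) for t in range(N)]  # stationary noise
noisy = [c + h for c, h in zip(clean, hum)]
enhanced = spectral_subtraction(noisy, hum)
err = max(abs(e - c) for e, c in zip(enhanced, clean))
print(err)
```

Only the noise magnitude is used, so in practice the estimate can come from a different (noise-only) time stretch; real systems also need overlap-add framing and a spectral floor to limit musical noise, which this sketch omits.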

  19. The Effects of Macroglossia on Speech: A Case Study

    Science.gov (United States)

    Mekonnen, Abebayehu Messele

    2012-01-01

    This article presents a case study of speech production in a 14-year-old Amharic-speaking boy. The boy had developed secondary macroglossia, related to a disturbance of growth hormones, following a history of normal speech development. Perceptual analysis combined with acoustic analysis and static palatography is used to investigate the specific…

  20. How fast pain, numbness, and paresthesia resolves after lumbar nerve root decompression: a retrospective study of patient's self-reported computerized pain drawing.

    Science.gov (United States)

    Huang, Peng; Sengupta, Dilip K

    2014-04-15

    A single-center retrospective study. To compare the speed of recovery of different sensory symptoms (pain, numbness, and paresthesia) after lumbar nerve root decompression. Lumbar radiculopathy is characterized by different sensory symptoms, such as pain, numbness, and paresthesia, which may resolve at different rates after surgical decompression. Eighty-five cases with predominant lumbar radiculopathy treated surgically were reviewed. Oswestry Disability Index score, 36-Item Short Form Health Survey scores (Physical Component Summary and Mental Component Summary), and pain drawings obtained preoperatively and at 6-week, 3-month, 6-month, and 1-year follow-up were reviewed. Recovery rates of the different sensory symptoms were compared in all patients, and between the short-term and long-term compression groups. Twenty-eight patients (32.9%) had all 3 components of sensory symptoms. Mean pain score improved fastest (55.3% at 6 wk); further resolution until 1 year was slow and not significant compared with each previous visit. Both numbness and paresthesia scores showed a trend of faster recovery during the initial 6-week period (20.5% and 24%, respectively); paresthesia recovery reached a plateau at 3 months postoperatively, but numbness continued a slow recovery until 1-year follow-up. Both Oswestry Disability Index and Physical Component Summary scores (54.02 ± 1.87 and 26.29 ± 0.93, respectively, at baseline) improved significantly compared with each previous visit at 6 weeks and 3 months postoperatively, but further improvement was insignificant. Mental Component Summary showed a similar trend but smaller improvement. The short-term compression group had faster recovery of pain than the long-term compression group. In lumbar radiculopathy patients after surgical decompression, pain recovers fastest, in the first 6 weeks postoperatively, followed by paresthesia, which plateaus at 3 months postoperatively. Numbness recovers at a slower pace but continues until 1 year. Level of evidence: 4.

  1. Neurologic complications of electrolyte disturbances and acid-base balance.

    Science.gov (United States)

    Espay, Alberto J

    2014-01-01

    Electrolyte and acid-base disturbances are common occurrences in daily clinical practice. Although these abnormalities can be readily ascertained from routine laboratory findings, only specific clinical correlates may attest as to their significance. Among a wide phenotypic spectrum, acute electrolyte and acid-base disturbances may affect the peripheral nervous system as arreflexic weakness (hypermagnesemia, hyperkalemia, and hypophosphatemia), the central nervous system as epileptic encephalopathies (hypomagnesemia, dysnatremias, and hypocalcemia), or both as a mixture of encephalopathy and weakness or paresthesias (hypocalcemia, alkalosis). Disabling complications may develop not only when these derangements are overlooked and left untreated (e.g., visual loss from intracranial hypertension in respiratory or metabolic acidosis; quadriplegia with respiratory insufficiency in hypermagnesemia) but also when they are inappropriately managed (e.g., central pontine myelinolysis when rapidly correcting hyponatremia; cardiac arrhythmias when aggressively correcting hypo- or hyperkalemia). Therefore prompt identification of the specific neurometabolic syndromes is critical to correct the causative electrolyte or acid-base disturbances and prevent permanent central or peripheral nervous system injury. This chapter reviews the pathophysiology, clinical investigations, clinical phenotypes, and current management strategies in disorders resulting from alterations in the plasma concentration of sodium, potassium, calcium, magnesium, and phosphorus as well as from acidemia and alkalemia. © 2014 Elsevier B.V. All rights reserved.

  2. Home Use of a Pyrethroid-Containing Pesticide and Facial Paresthesia in a Toddler: A Case Report

    Directory of Open Access Journals (Sweden)

    Alexandra Perkins

    2016-08-01

    Full Text Available Paresthesias have previously been reported among adults in occupational and non-occupational settings after dermal contact with pyrethroid insecticides. In this report, we describe a preverbal 13-month-old who presented to his primary care pediatrician with approximately 1 week of odd facial movements consistent with facial paresthesias. The symptoms coincided with a period of repeated indoor spraying at his home with a commercially available insecticide containing two active ingredients in the pyrethroid class. Consultation by the Northwest Pediatric Environmental Health Specialty Unit and follow-up by the Washington State Department of Health included urinary pyrethroid metabolite measurements during and after the symptomatic period, counseling on home cleanup, and use of safer pest control methods. The child's symptoms resolved soon after home cleanup. A diagnosis of pesticide-related illness due to pyrethroid exposure was made based on the opportunity for significant exposure (multiple applications in areas where the child spent time), supportive biomonitoring data, and the consistency and temporality of symptom findings (paresthesias). This case underscores children's vulnerability to pesticide uptake, the role of the primary care provider in taking an exposure history to recognize symptomatic illness, and the need for collaborative medical and public health efforts to reduce significant exposures in children.

  3. How Can Paresthesia After Zygomaticomaxillary Complex Fracture Be Determined After Long-Term Follow-Up? A New and Quantitative Evaluation Method Using Current Perception Threshold Testing.

    Science.gov (United States)

    Okochi, Masayuki; Ueda, Kazuki; Mochizuki, Yasushi; Okochi, Hiromi

    2015-08-01

    The aims of the present study were to analyze the effectiveness of current perception threshold (CPT) testing for determining patients' minor paresthesia of the infraorbital region after open reduction and internal fixation (ORIF) for unilateral zygomaticomaxillary bone fracture (UZF) and to clarify which nerve fiber was related to the paresthesia. We conducted a retrospective cohort study of patients who had undergone ORIF after UZF. We also performed neurosensory testing for healthy volunteers who served as the control group. The predictor variables were the period of measurement of Semmes-Weinstein monofilament (S-W) testing and CPT testing (preoperatively and 1 and 5 years postoperatively), measurement side, and disease status (UZF or control). The outcome variables were paresthesia status of the infraorbital nerve region and the results of S-W and CPT testing in both UZF and control groups. The differences in the S-W and CPT values between the affected and unaffected sides in the UZF group and between the UZF and control groups were analyzed by t test. A subset of patients reported paresthesia at 1 and 5 years postoperatively. At 5 years postoperatively, the S-W values in all patients showed normalization. From the results of CPT testing, only the A-β fiber function showed significant improvement at 5 years postoperatively. The CPT test was an effective sensory test for determining minor paresthesia that could not be detected using S-W testing. Paresthesia of the infraorbital nerve region was caused by the damaged A-δ and C fibers. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Effects of electrode positioning on perception threshold and paresthesia coverage in spinal cord stimulation

    NARCIS (Netherlands)

    Holsheimer, J.; Khan, Y.N.; Raza, S.S.; Khan, A.E.

    Objectives. This pilot study aims to validate the hypothesis that a smaller distance between SCS lead and spinal cord results in more extensive paresthesia and less energy consumption. Materials and Methods. After insertion of a percutaneous SCS lead in patients with chronic pain (condition A), a

  5. Speech comprehension difficulties in chronic tinnitus and its relation to hyperacusis

    Directory of Open Access Journals (Sweden)

    Veronika Vielsmeier

    2016-12-01

    Full Text Available Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance, little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, (2) compare subjective reports of speech comprehension difficulties with objective measurements in a standardized speech comprehension test, and (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessment (pure tone audiometry, tinnitus pitch and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments ("How would you rate your ability to understand speech?"; "How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?"). Results: Subjectively reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both in general and in noisy environments) were correlated with hearing level and with audiologically assessed speech comprehension ability. In contrast, co-morbid hyperacusis was only correlated…

  6. The effect of pulse width and contact configuration on paresthesia coverage in spinal cord stimulation

    NARCIS (Netherlands)

    Holsheimer, J.; Buitenweg, Jan R.; Das, John; de Sutter, Paul; Manola, L.; Nuttin, Bart

    Objective. To investigate the effect of stimulus pulsewidth (PW) and contact configuration (CC) on the area of paresthesia (PA), perception threshold (VPT), discomfort threshold (VDT) and usage range (UR) in spinal cord stimulation (SCS). Methods. Chronic pain patients were tested during a follow-up

  7. Characteristic findings on panoramic radiography and cone-beam CT to predict paresthesia after extraction of impacted third molar.

    Science.gov (United States)

    Harada, Nana; Beloor Vasudeva, Subash; Matsuda, Yukiko; Seki, Kenji; Kapila, Rishabh; Ishikawa, Noboru; Okano, Tomohiro; Sano, Tsukasa

    2015-01-01

    The purpose of this study was to compare findings on the relationship between impacted molar roots and the mandibular canal in panoramic and three-dimensional cone-beam CT (CBCT) images, to identify those that indicate risk of postoperative paresthesia. The relationship between impacted molars and the mandibular canal was first classified using panoramic images. Only patients in whom the molar roots were either in contact with or superimposed on the canal were evaluated using CBCT. Of 466 patients examined using both panoramic and CBCT images, 280 underwent surgical extraction of an impacted molar, and 15 of these (5%) reported postoperative paresthesia. The spatial relationship between the impacted third molar root and the mandibular canal was determined by examining para-sagittal sections (lingual, buccal, inter-radicular, inferior, and combinations) obtained from the canal to the molar root and establishing the proximity of the canal to the molar root (in contact, with or without loss of the cortical border, or separate). The results revealed that darkening of the roots with interruption of the mandibular canal on panoramic radiographs and an inter-radicular position of the canal in CBCT images were characteristic findings indicative of risk of postoperative paresthesia. These results suggest that careful surgical intervention is required in patients with these characteristics.

  8. Evaluation of pulsing magnetic field effects on paresthesia in multiple sclerosis patients, a randomized, double-blind, parallel-group clinical trial.

    Science.gov (United States)

    Afshari, Daryoush; Moradian, Nasrin; Khalili, Majid; Razazian, Nazanin; Bostani, Arash; Hoseini, Jamal; Moradian, Mohamad; Ghiasian, Masoud

    2016-10-01

    Evidence is mounting that magnet therapy could alleviate the symptoms of multiple sclerosis (MS). This study was performed to test the effects of pulsing magnetic fields on paresthesia in MS patients. The study was conducted as a randomized, double-blind, parallel-group clinical trial from April 2012 to October 2013. The subjects were selected among patients referred to the MS clinic of Imam Reza Hospital, affiliated to Kermanshah University of Medical Sciences, Iran. Sixty-three patients with MS were included in the study and randomly divided into two groups: 35 patients were exposed to a pulsing magnetic field of 4 mT intensity and 15-Hz frequency (sinusoidal wave) for 20 min per session, 2 times per week, over a period of 2 months involving 16 sessions, and 28 patients were exposed to a magnetically inactive field (placebo) on the same schedule. The severity of paresthesia was measured by the numerical rating scale (NRS) at 30 and 60 days. The study's primary end point was the NRS change between baseline and 60 days; the secondary outcome was the NRS change between baseline and 30 days. Patients exposed to the magnetic field showed significant paresthesia improvement compared with patients exposed to placebo. According to our results, pulsed magnetic therapy could alleviate paresthesia in MS patients, but trials with more patients and longer duration are needed to describe long-term effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Recurrent Spontaneous Paresthesia in the Upper Limb Could Be Due to Migraine: A Case Series.

    Science.gov (United States)

    Prakash, Sanjay; Rathore, Chaturbhuj; Makwana, Prayag; Rathod, Mitali

    2015-09-01

Transient neurologic dysfunction is a characteristic feature of migraine. About 20% of migraineurs may experience various symptoms in the absence of any headache at one time or another. Visual auras are the most common auras of migraine, and migraine is considered the most common cause of transient vision loss in young patients. Sensory auras are the second most common migrainous auras. However, the literature is silent on isolated sensory aura as a migraine equivalent. Herein we report 14 patients with recurrent episodic paresthesia in the limbs and other body parts. All patients fulfilled the diagnostic criteria for "typical aura without headache" of ICHD-3β. All patients underwent various investigations to rule out secondary causes. Ten patients received antimigraine drugs and all showed a positive response to therapy. Recurrent spontaneous paresthesia is quite common in the general population and many patients remain undiagnosed. We speculate that in a subset of these patients the paresthesia might represent migrainous sensory auras. © 2015 American Headache Society.

  10. Efficacy of gabapentin versus diclofenac in the treatment of chest pain and paresthesia in patients with sternotomy.

    Science.gov (United States)

    Biyik, Ismail; Gülcüler, Metin; Karabiga, Murat; Ergene, Oktay; Tayyar, Nezih

    2009-10-01

Chronic post-sternotomy chest pain and paresthesia (PCPP) are frequently seen and reduce quality of life. We aimed to demonstrate the efficacy and safety of gabapentin compared with diclofenac in the treatment of PCPP and to elucidate the similarities of PCPP to neuropathic pain syndromes. A prospective, randomized, open-label, blinded end-point study design was used. One hundred and ten patients with PCPP lasting three months or more were randomized to receive 800 mg/daily gabapentin (n=55) or 75 mg/daily diclofenac (n=55) for thirty days. All patients had undergone cardiac surgery and median sternotomy. The perception of pain or paresthesia was graded as 0 (normal; no pain or paresthesia), 1 (mild), 2 (moderate), or 3 (severe) at baseline and after thirty days of treatment. Recurrences were assessed after three months. Statistical analyses were performed using the independent-samples t, chi-square, continuity correction, Fisher's exact, Mann-Whitney U and Kruskal-Wallis tests. In the gabapentin group, mean pain and paresthesia scores regressed from 2.12 ± 0.76 to 0.54 ± 0.83 (pparesthesia scores regressed in the diclofenac group from 1.93 ± 0.8 to 1.0 ± 1.13 (p<0.001) and from 1.76 ± 0.74 to 1.24 ± 0.96 (p=0.002), respectively. Although both gabapentin and diclofenac were found to be effective without obvious side effects in the treatment of PCPP (p<0.001), gabapentin was found to be superior to diclofenac (p=0.001 and p<0.001, respectively). Adverse effects were seen in 7% of patients on gabapentin and 4% of patients on diclofenac. Results also showed that symptomatic relief with gabapentin lasts longer than with diclofenac (p<0.001). Both gabapentin and diclofenac are effective in the treatment of chronic PCPP, without obvious side effects. However, gabapentin is superior to diclofenac and its effects are sustained longer. The results suggest that PCPP may represent a kind of neuropathic pain.

  11. The effect of pulse width and contact configuration on paresthesia coverage in spinal cord stimulation.

    Science.gov (United States)

    Holsheimer, Jan; Buitenweg, Jan R; Das, John; de Sutter, Paul; Manola, Ljubomir; Nuttin, Bart

    2011-05-01

In spinal cord stimulation for the management of chronic, intractable pain, a satisfactory analgesic effect can be obtained only when the stimulation-induced paresthesias cover all painful body areas completely or partially. To investigate the effect of stimulus pulse width (PW) and contact configuration (CC) on the area of paresthesia (PA), perception threshold (VPT), discomfort threshold (VDT), and usage range (UR) in spinal cord stimulation. Chronic pain patients were tested during a follow-up visit. They were stimulated monopolarly and with the CC giving each patient the best analgesia. VPT, VDT, and UR were determined for PWs of 90, 210, and 450 microseconds. The paresthesia contours at VDT were drawn on a body map and digitized; PA was calculated and its anatomic composition was described. The effects of PW and CC on PA, VPT, VDT, and UR were tested statistically. Twenty-four of 31 tests with low thoracic stimulation and 8 of 9 tests with cervical stimulation gave a significant extension of PA with increasing PW. In 14 of 18 tests (low thoracic), a caudal extension was obtained (primarily in L5-S2). In cervical stimulation the extension was predominantly caudal as well. In contrast to VPT and VDT, UR did not differ significantly when stimulating with any CC. PA extends caudally with increasing PW. The proposed mechanism is that the larger and smaller dorsal column fibers have different mediolateral distributions and that smaller dorsal column fibers have a smaller UR and can be activated only when PW is sufficiently large. A similar effect of CC on PA is unlikely as long as electrodes with a large intercontact distance are applied.
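The relationship among the thresholds above can be illustrated with a small sketch. This assumes the usage range is taken as the ratio VDT/VPT (a common convention in the spinal cord stimulation literature; the paper's exact definition may differ), and the threshold voltages below are entirely invented for illustration — only the pulse-width values come from the abstract:

```python
# Hypothetical sketch: perception (VPT) and discomfort (VDT) thresholds
# at the three pulse widths tested. All voltages are illustrative.

def usage_range(vpt, vdt):
    """Usage range taken here as the ratio VDT/VPT (assumed definition)."""
    return vdt / vpt

# pulse width (microseconds) -> (VPT, VDT) in volts -- invented values
thresholds = {90: (4.2, 6.3), 210: (2.8, 4.2), 450: (2.0, 3.0)}

for pw, (vpt, vdt) in sorted(thresholds.items()):
    print(f"PW={pw} us: VPT={vpt} V, VDT={vdt} V, UR={usage_range(vpt, vdt):.2f}")
```

In this made-up data both VPT and VDT fall as PW increases, as strength-duration behaviour predicts, while their ratio stays constant — mirroring the abstract's observation that UR varies less than the individual thresholds do.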

  12. Guillain-Barré Syndrome: A Variant Consisting of Facial Diplegia and Paresthesia with Left Facial Hemiplegia Associated with Antibodies to Galactocerebroside and Phosphatidic Acid.

    Science.gov (United States)

    Nishiguchi, Sho; Branch, Joel; Tsuchiya, Tsubasa; Ito, Ryoji; Kawada, Junya

    2017-10-02

BACKGROUND A rare variant of Guillain-Barré syndrome (GBS) consists of facial diplegia and paresthesia, but an even rarer presentation includes facial hemiplegia, similar to Bell's palsy. This case report describes this rare variant of GBS associated with IgG antibodies to galactocerebroside and phosphatidic acid. CASE REPORT A 54-year-old man presented with lower left facial palsy and paresthesia of his extremities following an upper respiratory tract infection. Physical examination confirmed lower left facial palsy and paresthesia of his extremities, with hyporeflexia of his lower limbs and sensory loss in all four extremities. The differential diagnosis was between a variant of GBS and Bell's palsy. Following initial treatment with glucocorticoids followed by intravenous immunoglobulin (IVIG), his sensory abnormalities resolved. Serum IgG antibodies to galactocerebroside and phosphatidic acid were positive in this patient, but no other antibodies to glycolipids or phospholipids were found. Five months after discharge from hospital, his left facial palsy had improved. CONCLUSIONS A rare variant of GBS is presented, with facial diplegia and paresthesia and with unilateral facial palsy. This rare variant of GBS may mimic Bell's palsy. In this case, IgG antibodies to galactocerebroside and phosphatidic acid were detected.

  13. A case of mental nerve paresthesia due to dynamic compression of alveolar inferior nerve along an elongated styloid process.

    Science.gov (United States)

    Gooris, Peter J J; Zijlmans, Jan C M; Bergsma, J Eelco; Mensink, Gertjan

    2014-07-01

    Spontaneous paresthesia of the mental nerve is considered an ominous clinical sign. Mental nerve paresthesia has also been referred to as numb chin syndrome. Several potentially different factors have been investigated for their role in interfering with the inferior alveolar nerve (IAN) and causing mental nerve neuropathy. In the present case, the patient had an elongated calcified styloid process that we hypothesized had caused IAN irritation during mandibular movement. This eventually resulted in progressive loss of sensation in the mental nerve region. To our knowledge, this dynamic irritation, with complete recovery after resection of the styloid process, has not been previously reported. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  14. Marital conflict and adjustment: speech nonfluencies in intimate disclosure.

    Science.gov (United States)

    Paul, E L; White, K M; Speisman, J C; Costos, D

    1988-06-01

    Speech nonfluency in response to questions about the marital relationship was used to assess anxiety. Subjects were 31 husbands and 31 wives, all white, college educated, from middle- to lower-middle-class families, and ranging from 20 to 30 years of age. Three types of nonfluencies were coded: filled pauses, unfilled pauses, and repetitions. Speech-disturbance ratios were computed by dividing the sum of speech nonfluencies by the total words spoken. The results support the notion that some issues within marriage are more sensitive and/or problematic than others, and that, in an interview situation, gender interacts with question content in the production of nonfluencies.
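The speech-disturbance ratio described above (sum of coded nonfluencies divided by total words spoken) is simple to compute. A minimal sketch, with invented counts for illustration:

```python
# Speech-disturbance ratio: sum of coded nonfluencies (filled pauses,
# unfilled pauses, repetitions) divided by the total words spoken.

def speech_disturbance_ratio(filled, unfilled, repetitions, total_words):
    """Ratio of nonfluencies to total words spoken."""
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    return (filled + unfilled + repetitions) / total_words

# e.g. 12 filled pauses, 8 unfilled pauses, 5 repetitions in 500 words
ratio = speech_disturbance_ratio(12, 8, 5, 500)
print(round(ratio, 3))  # 0.05
```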

  15. Attitudes toward speech disorders: sampling the views of Cantonese-speaking Americans.

    Science.gov (United States)

    Bebout, L; Arthur, B

    1997-01-01

Speech-language pathologists who serve clients from cultural backgrounds that are not familiar to them may encounter culturally influenced attitudinal differences. A questionnaire with statements about 4 speech disorders (dysfluency, cleft palate, speech of the deaf, and misarticulations) was given to a focus group of Chinese Americans and a comparison group of non-Chinese Americans. The focus group was much more likely to believe that persons with speech disorders could improve their own speech by "trying hard," was somewhat more likely to say that people who use deaf speech and people with cleft palates might be "emotionally disturbed," and was generally more likely to view deaf speech as a limitation. The comparison group was more pessimistic about stuttering children's acceptance by their peers than was the focus group. The two subject groups agreed on other items, such as the likelihood that older children with articulation problems are "less intelligent" than their peers.

  16. Speech rate in Parkinson's disease: A controlled study.

    Science.gov (United States)

    Martínez-Sánchez, F; Meilán, J J G; Carro, J; Gómez Íñiguez, C; Millian-Morell, L; Pujante Valverde, I M; López-Alburquerque, T; López, D E

    2016-09-01

Speech disturbances will affect most patients with Parkinson's disease (PD) over the course of the disease. The origin and severity of these symptoms are of clinical and diagnostic interest. To evaluate the clinical pattern of speech impairment in PD patients and identify significant differences in speech rate and articulation compared to control subjects. Speech rate and articulation in a reading task were measured using an automatic analytical method. A total of 39 PD patients in the 'on' state and 45 age- and sex-matched asymptomatic controls participated in the study. None of the patients experienced dyskinesias or motor fluctuations during the test. The patients with PD displayed a significant reduction in speech and articulation rates; there were no significant correlations between the studied speech parameters and patient characteristics such as L-dopa dose, duration of the disorder, age, UPDRS III scores, or Hoehn & Yahr stage. Patients with PD show a characteristic pattern of declining speech rate. These results suggest that in PD, disfluencies are the result of the movement disorder affecting the physiology of the speech production systems. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  17. Concordant pressure paresthesia during interlaminar lumbar epidural steroid injections correlates with pain relief in patients with unilateral radicular pain.

    Science.gov (United States)

    Candido, Kenneth D; Rana, Maunak V; Sauer, Ruben; Chupatanakul, Lalida; Tharian, Antony; Vasic, Vladimir; Knezevic, Nebojsa Nick

    2013-01-01

Transforaminal and interlaminar epidural steroid injections are commonly used interventional pain management procedures in the treatment of radicular low back pain. Even though several studies have shown that transforaminal injections provide enhanced short-term outcomes in patients with radicular and low back pain, they have also been associated with a higher incidence of unintentional intravascular injection, and more often with dire consequences, than interlaminar injections. We compared 2 different approaches, midline and lateral parasagittal, of lumbar interlaminar epidural steroid injection (LESI) in patients with unilateral lumbosacral radiculopathic pain. We also tested the role of concordant pressure paresthesia occurring during LESI as a prognostic factor in determining the efficacy of LESI. Prospective, randomized, blinded study. Pain management center, part of a teaching-community hospital in a major metropolitan US city. After Institutional Review Board approval, 106 patients undergoing LESI for radicular low back pain were randomly assigned to one of 2 groups (53 patients each) based on approach: midline interlaminar (MIL) and lateral parasagittal interlaminar (PIL). Patients were asked to grade any pressure paresthesia as occurring ipsilaterally or contralaterally to their "usual and customary pain," or in a distribution atypical of their daily pain. Other variables, such as the Oswestry Disability Index questionnaire, pain scores at rest and during movement, and use of pain medications, were recorded 20 minutes before the procedure and on days 1, 7, 14, 21, 28, 60, 120, 180 and 365 after the injection. Results of this study showed statistically and clinically significant pain relief in patients undergoing LESI by both the MIL and PIL approaches. Patients receiving LESI using the lateral parasagittal approach had statistically and clinically longer pain relief than patients receiving LESI via a midline approach. They also had slightly better quality of

  18. Risk factors for unpleasant paresthesiae induced by paresthesiae-producing deep brain stimulation

    Directory of Open Access Journals (Sweden)

    Osvaldo Vilela Filho

    1996-03-01

Paresthesiae-producing deep brain stimulation (stimulation of the ventrocaudal nucleus (VC), medial lemniscus (ML), or internal capsule (IC)) is one of the few procedures currently available to treat the steady element of neural injury pain (NIP). Reviewing the first 60 patients with NIP submitted to deep brain stimulation (DBS) from 1978 to 1991 at the Division of Neurosurgery, Toronto Hospital, University of Toronto, we observed that 6 patients complained of unpleasant paresthesiae with paresthesiae-producing DBS, preventing permanent electrode implantation in all of them. Such patients accounted for 15% of the failures (6 out of 40 failures) in our series. In an attempt to improve patient selection, we reviewed our patients considering a number of parameters in order to determine risk factors for unpleasant paresthesiae elicited by paresthesiae-producing DBS. The results showed that this response happened only in patients with brain central pain complaining of evoked pain, secondary to a supratentorial lesion. Age, sex, duration of pain, quality of the steady pain, size of the causative lesion, and site (VC, ML, IC) and type (micro- or macroelectrode) of surgical exploration were not important factors. Unpleasant paresthesiae in response to dorsal column stimulation, a restricted thalamic lesion on computed tomography, and the occurrence of associated intermittent pain were considered major risk factors in this subset of patients; the presence of cold allodynia or hyperpathia in isolation and the absence of sensory loss were considered minor risk factors. It is our hope that the criteria established here will improve patient selection and thus the overall results of DBS.

  19. Does the Surgical Management of the Intercostobrachial Nerve Influence the Postoperatory Paresthesia of the Upper Limb and Life Quality in Breast Cancer Patients?

    Science.gov (United States)

    Orsolya, Hankó-Bauer; Coros, Marius Florin; Stolnicu, Simona; Naznean, Adrian; Georgescu, Rares

    2017-01-01

The aim of our study was to evaluate the extent to which preservation or section of the intercostobrachial nerve (ICBN) influences the development of postoperative paresthesia, and to assess whether the development of paresthesia may change the patient's quality of life after surgical treatment for breast carcinoma. We performed a nonrandomized retrospective study including 100 patients who underwent axillary lymph node dissection for infiltrating breast carcinoma associated with axillary lymph node metastases. Using a questionnaire, we studied the patients' general quality of life in the postoperative period. For the statistical analysis we used GraphPad Prism, Fisher's exact test and the chi-square test. Results: 100 patients were included in our study, with a mean age of 59.7 years. In 50 cases the ICBN was preserved (Group 1), while in the remaining 50 cases the ICBN was sectioned during surgery (Group 2). Significantly more patients from Group 2 complained about postoperative paresthesia (p=0.026). In our series, the management of the ICBN could not be significantly correlated with impairment of the patients' daily activities (p=0.2), sleeping cycle (p=0.2), or general quality of life after surgery (p=0.67). We conclude that the management of the ICBN has a great influence on the development of postoperative paresthesia. Although the paresthesia does not have a negative effect on the patient's quality of life in the postoperative period, in our opinion it is important to preserve the ICBN in order to prevent postoperative paresthesia.
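The group comparison above (p=0.026 for paresthesia after section vs. preservation of the ICBN) rests on Fisher's exact test. A minimal stdlib sketch of a two-sided Fisher's exact test for a 2×2 table — verified below against the classic lady-tasting-tea table, since the study's raw counts are not given in the abstract:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed one.
    """
    n, row1, col1 = a + b + c + d, a + b, a + c

    def hypergeom(x):  # P(first cell == x) given fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = hypergeom(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(p for p in (hypergeom(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Classic check: table [[3, 1], [1, 3]] gives p = 34/70
print(round(fisher_exact_two_sided(3, 1, 1, 3), 3))  # 0.486
```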

  20. Sensory disturbance, CT, and somatosensory evoked potentials in thalamic hemorrhages

    International Nuclear Information System (INIS)

    Koga, Hisanobu; Miyazaki, Takayoshi; Miyazaki, Hisaya

    1985-01-01

Thalamic hemorrhages often lead to sensory disturbances. However, no effective method for the evaluation of their prognoses has yet been clinically utilized. The somatosensory evoked potential (SEP) has been reported as an effective method, but it remains controversial. A CT scan is eminently suitable for determining the size and position of the hemorrhage. However, the correlation between the localization of the hematoma on the CT scan and the sensory disturbance has not been investigated fully. The authors selected 20 cases in the chronic stage of a thalamic hemorrhage. Each was clinically evaluated as to sensory disturbance and then classified into one of the following five groups: Group 1, no sensory deficit (3 cases); Group 2, complete recovery from initial deficit (2 cases); Group 3, mild hypesthesia (5 cases); Group 4, severe hypesthesia (5 cases); and Group 5, paresthesia or dysesthesia (5 cases). The CT scan was also investigated with regard to the localization of the hematoma, together with the SEP, and a characteristic pattern was found in each group. The results may be summarized as follows. 1. A correlation between the degree of sensory disturbance and the size and extension of the hematoma was clearly detected. In particular, the most severe sensory disturbance was found with hematomas extending to the lateral nuclear and ventral nuclear regions. 2. In Groups 1 and 2, each SEP component (N1, N2, N3) was shown to be normal. In Group 3, SEP components could be detected, but not completely. In Group 4, no components at all could be found. 3. In Group 5, all cases had small hematomas localized in the lateral nuclear region of the thalamus, while the N3 components were prolonged in the SEP findings. The authors demonstrate the results and discuss the correlation between the sensory disturbance and the CT or SEP findings. (author)

  1. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...-Speech Services for Individuals with Hearing and Speech Disabilities, Report and Order (Order), document...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  2. Surgical improvement of speech disorder caused by amyotrophic lateral sclerosis.

    Science.gov (United States)

    Saigusa, Hideto; Yamaguchi, Satoshi; Nakamura, Tsuyoshi; Komachi, Taro; Kadosono, Osamu; Ito, Hiroyuki; Saigusa, Makoto; Niimi, Seiji

    2012-12-01

Amyotrophic lateral sclerosis (ALS) is a progressive, debilitating neurological disease. ALS disturbs quality of life by affecting speech, swallowing and free mobility of the arms, without affecting intellectual function. It is therefore important to improve the intelligibility and quality of speech, especially for ALS patients with slowly progressive courses. Currently, however, there is no effective or established approach to improve speech disorders caused by ALS. We investigated a surgical procedure to improve speech in patients with neuromuscular diseases with velopharyngeal closure incompetence. In this study, we performed the procedure in two patients suffering from severe speech disorder caused by slowly progressing ALS. The patients had suffered from speech disorder with hypernasality and imprecise, weak articulation over a 6-year course (patient 1) and a 3-year course (patient 2) of slowly progressing ALS. We narrowed the bilateral lateral palatopharyngeal wall at the velopharyngeal port, and performed this surgery under general anesthesia without muscle relaxant in both patients. Postoperatively, the intelligibility and quality of their speech improved greatly within one month without any speech therapy. The patients were also able to produce longer speech phrases after the surgery. Importantly, there were no serious complications during or after the surgery. In summary, we performed bilateral narrowing of the lateral palatopharyngeal wall as speech surgery for two patients with severe speech disorder associated with ALS. With this technique, improved intelligibility and quality of speech can be maintained for a longer duration in patients with slowly progressing ALS.

  3. Paresthesia: A Review of Its Definition, Etiology and Treatments in View of the Traditional Medicine.

    Science.gov (United States)

    Emami, Seyed Ahmad; Sahebkar, Amirhossein; Javadi, Behjat

    2016-01-01

To search major Islamic Traditional Medicine (ITM) textbooks for the definition, etiology and medicinal plants used to manage 'khadar' or 'paresthesia', a common sensory symptom of multiple sclerosis (MS) and peripheral neuropathies, and to discuss the conformity of the efficacy of ITM-suggested plants with findings from modern pharmacological research on MS. Data on the medicinal plants used to treat 'khadar' were obtained from major ITM texts. A detailed search of the PubMed, ScienceDirect, Scopus and Google Scholar databases was performed to confirm the effects of the ITM-mentioned medicinal plants on MS in view of identified pharmacological actions. Moringa oleifera Lam., Aloe vera (L.) Burm.f., Euphorbia species, Citrullus colocynthis (L.) Schrad., and Costus speciosus (Koen ex. Retz) Sm. are among the most effective ITM plants for the management of 'khadar'. Recent experimental evidence confirms the effectiveness of these plants in ameliorating MS symptoms. Moreover, according to ITM, prolonged exposure to cold and consuming foodstuffs with a cold temperament might be involved in the etiopathogenesis of MS. The use of traditional knowledge can help identify neglected risk factors as well as effective and safe therapeutic approaches, phytomedicines and dietary habits for the management of paresthesia and related disorders such as MS.

  4. Comparison between Steroid Injection and Stretching Exercise on the Scalene of Patients with Upper Extremity Paresthesia: Randomized Cross-Over Study.

    Science.gov (United States)

    Kim, Yong Wook; Yoon, Seo Yeon; Park, Yongbum; Chang, Won Hyuk; Lee, Sang Chul

    2016-03-01

To compare the therapeutic effects on upper extremity paresthesia of intramuscular steroid injection into the scalene muscle with those of stretching exercise only. Twenty patients with upper extremity paresthesia who met the criteria were recruited to participate in this single-blind, crossover study. Fourteen of the 20 patients were female. The average age was 45.0 ± 10.5 years and symptom duration was 12.2 ± 8.7 months. Each participant completed one injection and a daily exercise program for 2 weeks. After randomization, half of the patients received ultrasound-guided injection of the scalene muscles before exercise, while the order was reversed for the other patients. After two weeks, there was a significant decrease in the visual analog scale score of treatment effect compared with baseline in both groups (6.90 to 2.85 after injection and 5.65 to 4.05 after stretching exercise, p50% reduction in post-treatment visual analog scale, was 18 of 20 (90.0%) after injection, compared to 5 of 20 (25.0%) after stretching exercise. There were no cases of unintended brachial plexus block after injection. Ultrasound-guided steroid injection or stretching exercise of the scalene muscles led to reduced upper extremity paresthesia in patients presenting with localized tenderness in the scalene muscle without electrodiagnostic test abnormalities, although injection treatment resulted in greater improvement. The results suggest that symptom relief might result from injection into the muscle alone, not from blockade of the brachial plexus.
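The response criterion above (a greater than 50% reduction in post-treatment visual analog scale score) is simple arithmetic. A small sketch; the group mean scores from the abstract are used only to illustrate the calculation, since the criterion was applied per patient rather than to means:

```python
def is_responder(baseline_vas, post_vas):
    """True if the VAS score fell by more than 50% of baseline."""
    if baseline_vas <= 0:
        raise ValueError("baseline VAS must be positive")
    return (baseline_vas - post_vas) / baseline_vas > 0.5

# Group mean scores from the abstract, for illustration only:
print(is_responder(6.90, 2.85))  # True  (~58.7% reduction after injection)
print(is_responder(5.65, 4.05))  # False (~28.3% reduction after stretching)
```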

  5. Speech-to-Speech Relay Service

    Science.gov (United States)

Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  6. Asthma, hay fever, and food allergy are associated with caregiver-reported speech disorders in US children.

    Science.gov (United States)

    Strom, Mark A; Silverberg, Jonathan I

    2016-09-01

    Children with asthma, hay fever, and food allergy may have several factors that increase their risk of speech disorder, including allergic inflammation, ADD/ADHD, and sleep disturbance. However, few studies have examined a relationship between asthma, allergic disease, and speech disorder. We sought to determine whether asthma, hay fever, and food allergy are associated with speech disorder in children and whether disease severity, sleep disturbance, or ADD/ADHD modified such associations. We analyzed cross-sectional data on 337,285 children aged 2-17 years from 19 US population-based studies, including the 1997-2013 National Health Interview Survey and the 2003/4 and 2007/8 National Survey of Children's Health. In multivariate models, controlling for age, demographic factors, healthcare utilization, and history of eczema, lifetime history of asthma (odds ratio [95% confidence interval]: 1.18 [1.04-1.34], p = 0.01), and one-year history of hay fever (1.44 [1.28-1.62], p speech disorder. Children with current (1.37 [1.15-1.59] p = 0.0003) but not past (p = 0.06) asthma had increased risk of speech disorder. In one study that assessed caregiver-reported asthma severity, mild (1.58 [1.20-2.08], p = 0.001) and moderate (2.99 [1.54-3.41], p speech disorder; however, severe asthma was associated with the highest odds of speech disorder (5.70 [2.36-13.78], p = 0.0001). Childhood asthma, hay fever, and food allergy are associated with increased risk of speech disorder. Future prospective studies are needed to characterize the associations. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
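The effect sizes above are odds ratios with 95% confidence intervals. A small sketch of how such an interval is computed from a 2×2 table via the Woolf (log) method, using invented counts rather than the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for the 2x2 table [[a, b], [c, d]]
    (exposed/unexposed vs. outcome/no outcome). Assumes non-zero cells."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Invented counts: 10/30 exposed vs 5/45 unexposed with the outcome
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # OR = 4.00, ...
```

An interval that excludes 1.0, as in the asthma and hay fever estimates above, is what makes the association statistically significant at the 5% level.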

  7. The Particularities of the Monologue Speech Type Manifestations in Stuttering Schoolchildren with Tatar-Russian Bilingualism Compared to the Normality

    Science.gov (United States)

    Osipovskaya, Marina P.; Sharifzyanova, Kadriya Sh.; Zamaletdinova, Zalfira I.

    2016-01-01

The relevance of studying the specific manifestations of the monologue speech type in bilingual schoolchildren who stutter stems from the need to develop a coherent concept of the central mechanisms underlying this kind of communication disorder and of the nature of the disturbances of speech formation mechanisms in the…

  8. Auditory-perceptual speech analysis in children with cerebellar tumours: a long-term follow-up study.

    Science.gov (United States)

    De Smet, Hyo Jung; Catsman-Berrevoets, Coriene; Aarsen, Femke; Verhoeven, Jo; Mariën, Peter; Paquier, Philippe F

    2012-09-01

    Mutism and Subsequent Dysarthria (MSD) and the Posterior Fossa Syndrome (PFS) have become well-recognized clinical entities which may develop after resection of cerebellar tumours. However, speech characteristics following a period of mutism have not been documented in much detail. This study carried out a perceptual speech analysis in 24 children and adolescents (of whom 12 became mute in the immediate postoperative phase) 1-12.2 years after cerebellar tumour resection. The most prominent speech deficits in this study were distorted vowels, slow rate, voice tremor, and monopitch. Factors influencing long-term speech disturbances are presence or absence of postoperative PFS, the localisation of the surgical lesion and the type of adjuvant treatment. Long-term speech deficits may be present up to 12 years post-surgery. The speech deficits found in children and adolescents with cerebellar lesions following cerebellar tumour surgery do not necessarily resemble adult speech characteristics of ataxic dysarthria. Copyright © 2012 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  9. Neurophysiological evidence of methylmercury neurotoxicity

    DEFF Research Database (Denmark)

    Murata, Katsuyuki; Grandjean, Philippe; Dakeishi, Miwako

    2007-01-01

…disease (MD; methylmercury poisoning). In recent years, some of these methods have been used for the risk assessment of low-level methylmercury exposure in asymptomatic children. The objectives of this article were to present an overview of neurophysiological findings involved in methylmercury neurotoxicity and to examine the usefulness of those measures. METHODS: Reports addressing both neurophysiological measures and methylmercury exposure in humans were identified and evaluated. RESULTS: The neurological signs and symptoms of MD included paresthesias, constriction of visual fields, impairment of hearing and speech, mental disturbances, excessive sweating, and hypersalivation. Neuropathological lesions involved visual, auditory, and post- and pre-central cortex areas. Neurophysiological changes involved in methylmercury, as assessed by EPs and HRV, were found to be in accordance with both clinical…

  10. Clinical Outcomes of 1 kHz Subperception Spinal Cord Stimulation in Implanted Patients With Failed Paresthesia-Based Stimulation: Results of a Prospective Randomized Controlled Trial.

    Science.gov (United States)

    North, James M; Hong, Kyung-Soo Jason; Cho, Philip Young

    2016-10-01

    Pain relief via spinal cord stimulation (SCS) has historically revolved around producing paresthesia to replace pain, with success measured by the extent of paresthesia-pain overlap. In a recent murine study, Shechter et al. showed the superior efficacy of high-frequency SCS (1 kHz and 10 kHz) at inhibiting the effects of mechanical hypersensitivity compared to sham or 50 Hz stimulation. In the same study, the authors report no difference in efficacy between 1 kHz and 10 kHz delivered at subperception stimulation strength (80% of motor threshold). We therefore designed a randomized, 2 × 2 crossover study of low-frequency supra-perception SCS vs. subperception SCS at 1 kHz to test whether subperception stimulation at 1 kHz was sufficient to provide effective pain relief in human subjects. Twenty-two subjects with SCS and inadequate pain relief based on numeric pain rating scale (NPRS) scores (>5) were enrolled and observed for a total of seven weeks (three weeks of treatment, one week washout, and another three weeks of treatment). Subjects were asked to rate their pain on the NPRS as the primary efficacy variable, and to complete the Oswestry Disability Index (ODI) and Patient's Global Impression of Change (PGIC) as secondary outcome measures. Of the 22 subjects who completed the study, 21 (95%) reported improvements in average, best, and worst pain NPRS scores. All NPRS scores were significantly lower with subperception stimulation than with paresthesia-based stimulation, which was likewise outperformed on ODI scores (p = 3.9737 × 10^-5) and PGIC scores (p = 3.0396 × 10^-5). © 2016 International Neuromodulation Society.

  11. Apraxia of speech associated with an infarct in the precentral gyrus of the insula

    International Nuclear Information System (INIS)

    Nagao, M.; Komori, T.; Isozaki, E.; Hirai, S.; Takeda, K.

    1999-01-01

    It has been postulated that the precentral gyrus in the left insula is responsible for co-ordination of speech. We report a patient with this disturbance who showed an acute infarct limited to this region. (orig.)

  12. 49 CFR 391.43 - Medical examination; certificate of physical examination.

    Science.gov (United States)

    2010-10-01

    .... Neurological. Note impaired equilibrium, coordination, or speech pattern; paresthesia; asymmetric deep tendon..., determine whether prehension and power grasp are sufficient to enable the driver to maintain steering wheel...

  13. The impact of cochlear implantation on speech understanding, subjective hearing performance, and tinnitus perception in patients with unilateral severe to profound hearing loss.

    Science.gov (United States)

    Távora-Vieira, Dayse; Marino, Roberta; Acharya, Aanand; Rajan, Gunesh P

    2015-03-01

    This study aimed to determine the impact of cochlear implantation on speech understanding in noise, subjective perception of hearing, and tinnitus perception of adult patients with unilateral severe to profound hearing loss and to investigate whether duration of deafness and age at implantation would influence the outcomes. In addition, this article describes the auditory training protocol used for unilaterally deaf patients. This is a prospective study of subjects undergoing cochlear implantation for unilateral deafness with or without associated tinnitus. Speech perception in noise was tested using the Bamford-Kowal-Bench speech-in-noise test presented at 65 dB SPL. The Speech, Spatial, and Qualities of Hearing Scale and the Abbreviated Profile of Hearing Aid Benefit were used to evaluate the subjective perception of hearing with a cochlear implant and quality of life. Tinnitus disturbance was measured using the Tinnitus Reaction Questionnaire. Data were collected before cochlear implantation and 3, 6, 12, and 24 months after implantation. Twenty-eight postlingual unilaterally deaf adults with or without tinnitus were implanted. There was a significant improvement in speech perception in noise across time in all spatial configurations. There was an overall significant improvement on the subjective perception of hearing and quality of life. Tinnitus disturbance reduced significantly across time. Age at implantation and duration of deafness did not influence the outcomes significantly. Cochlear implantation provided significant improvement in speech understanding in challenging situations, subjective perception of hearing performance, and quality of life. Cochlear implantation also resulted in reduced tinnitus disturbance. Age at implantation and duration of deafness did not seem to influence the outcomes.

  14. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  15. Apraxia of speech associated with an infarct in the precentral gyrus of the insula

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M.; Komori, T.; Isozaki, E.; Hirai, S. [Department of Neurology, Tokyo Metropolitan Neurological Hospital, Tokyo (Japan); Takeda, K. [Department of Neuropsychology, Tokyo Metropolitan Institute for Neuroscience, Tokyo (Japan)

    1999-05-01

    It has been postulated that the precentral gyrus in the left insula is responsible for co-ordination of speech. We report a patient with this disturbance who showed an acute infarct limited to this region. (orig.) With 1 fig., 3 refs.

  16. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  17. Speech and orthodontic appliances: a systematic literature review.

    Science.gov (United States)

    Chen, Junyu; Wan, Jia; You, Lun

    2018-01-23

    Various types of orthodontic appliances can lead to speech difficulties. However, speech difficulties caused by orthodontic appliances have not been sufficiently investigated by an evidence-based method. The aim of this study is to outline the scientific evidence and mechanism of the speech difficulties caused by orthodontic appliances. Randomized-controlled clinical trials (RCT), controlled clinical trials, and cohort studies focusing on the effect of orthodontic appliances on speech were included. A systematic search was conducted by an electronic search in PubMed, EMBASE, and the Cochrane Library databases, complemented by a manual search. The types of orthodontic appliances, the affected sounds, and duration period of the speech disturbances were extracted. The ROBINS-I tool was applied to evaluate the quality of non-randomized studies, and the bias of RCT was assessed based on the Cochrane Handbook for Systematic Reviews of Interventions. No meta-analyses could be performed due to the heterogeneity in the study designs and treatment modalities. Among 448 screened articles, 13 studies were included (n = 297 patients). Different types of orthodontic appliances such as fixed appliances, orthodontic retainers and palatal expanders could influence the clarity of speech. The /i/, /a/, and /e/ vowels as well as /s/, /z/, /l/, /t/, /d/, /r/, and /ʃ/ consonants could be distorted by appliances. Although most speech impairments could return to normal within weeks, speech distortion of the /s/ sound might last for more than 3 months. The low evidence level grading and heterogeneity were the two main limitations in this systematic review. Lingual fixed appliances, palatal expanders, and Hawley retainers have an evident influence on speech production. The /i/, /s/, /t/, and /d/ sounds are the primarily affected ones. 
The results of this systematic review should be interpreted with caution, and more high-quality RCTs with larger sample sizes and longer follow-up periods are needed.

  18. Structural analysis of a speech disorder of children with a mild mental retardation

    Directory of Open Access Journals (Sweden)

    Franc Smole

    2004-05-01

    Full Text Available The aim of this research was to define the structure of speech disorder in children with a mild mental retardation. 100 subjects were chosen among pupils from the 1st to the 4th grade of elementary school who were under logopaedic treatment. To determine speech comprehension, Reynell's developmental scales were used, and for evaluation of speech articulation, the Three-position test for articulation evaluation. With the Bender test we determined a child's mental age and identified signs of psychological dysfunction of organic nature. For the field of phonological consciousness, a Test of reading and writing disturbances was applied. Speech fluency was evaluated by the Riley test. Evaluation scales were adapted for determining speech-language levels and the motor skills of speech organs and hands. Data on psychological test results and on the family were summarized from the diagnostic treatment guidance documents. Social behaviour in school was evaluated by the children's teachers. Six factors which hierarchically define the structure of speech disorder were determined by factor analysis. We found that signs of a child's brain lesion are the factor with the most influence on a child's mental age. The results of this research might help logopaedists in determining logopaedic treatment for children with a mild mental retardation.

  19. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95), whereas correlations between inner speech and the language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech given perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  20. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    Science.gov (United States)

    Mota, Natalia B; Vasconcelos, Nivaldo A P; Lemos, Nathalia; Pieretti, Ana C; Kinouchi, Osame; Cecchi, Guillermo A; Copelli, Mauro; Ribeiro, Sidarta

    2012-01-01

    Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% of sensitivity and specificity. The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.
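The graph construction described in this record (nodes as words, edges linking successive words) can be sketched in a few lines; the transcript and the particular measures below are illustrative choices, not the study's actual graph attributes or data:

```python
from collections import defaultdict

def speech_graph(transcript):
    """Directed word graph: nodes are word types, edges link consecutive words."""
    words = transcript.lower().split()
    edge_counts = defaultdict(int)
    for a, b in zip(words, words[1:]):
        edge_counts[(a, b)] += 1
    n, e = len(set(words)), len(edge_counts)
    return {
        "nodes": n,
        "edges": e,
        # density: fraction of all possible directed edges actually used
        "density": e / (n * (n - 1)) if n > 1 else 0.0,
        # recurrence: transitions spoken more than once, i.e. loops in the flow of speech
        "repeated_edges": sum(1 for c in edge_counts.values() if c > 1),
    }

m = speech_graph("the dog chased the cat and the cat ran")
```

Measures of this kind (node and edge counts, density, recurrence) are what allow logorrhea and flight of thoughts to be quantified independently of sheer verbosity.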

  1. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    Directory of Open Access Journals (Sweden)

    Natalia B Mota

    Full Text Available BACKGROUND: Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. METHODOLOGY/PRINCIPAL FINDINGS: To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS reached only 62.5% of sensitivity and specificity. CONCLUSIONS/SIGNIFICANCE: The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.

  2. MMSE Estimator for Children’s Speech with Car and Weather Noise

    Science.gov (United States)

    Sayuthi, V.

    2018-04-01

    Previous research notes that most people, now and in the future, need and use vehicles for various purposes as a means of traveling. Many things can be done in a vehicle besides traveling, such as enjoying entertainment and doing work. In this study, we examine the speech of a child (a girl) in a vehicle as affected by noise from two sound sources: car noise and the surrounding weather, in this case rainy weather. Vehicle sounds may come from the car engine or the car air conditioner. The minimum mean square error (MMSE) estimator is used to recover the child's clear speech, with the simulation representing each signal as a random process characterized by the autocorrelations of both the child's voice and the disturbing noise. This MMSE estimator can be considered a Wiener filter, as the clear sound is reconstructed. We expect the results of this study to serve as a basis for developing entertainment and communication technology for vehicle passengers, particularly using MMSE estimators.
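The Wiener interpretation of the MMSE estimator mentioned in this abstract can be sketched per frequency bin as H = Ps/(Ps + Pn); the sinusoidal "speech" stand-in and the flat noise spectrum below are assumptions for illustration, not the paper's signals or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
clean = np.sin(2 * np.pi * 64 * t / n)     # stand-in for the child's speech
noise = 0.8 * rng.standard_normal(n)       # stand-in for car/rain noise
noisy = clean + noise

# Linear MMSE (Wiener) gain per frequency bin: H = Ps / (Ps + Pn).
# Here both power spectra are assumed known; in practice they are estimated,
# e.g. the noise spectrum from speech-free segments.
Ps = np.abs(np.fft.rfft(clean)) ** 2
Pn = np.full_like(Ps, (0.8 ** 2) * n)      # flat spectrum of white noise
H = Ps / (Ps + Pn)
estimate = np.fft.irfft(H * np.fft.rfft(noisy), n)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((estimate - clean) ** 2)
```

With the signal energy confined to one bin, the Wiener gain passes that bin nearly unchanged and suppresses the broadband noise everywhere else, so mse_after falls well below mse_before.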

  3. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  4. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper provides an interface between machine translation and speech synthesis for converting English speech to Tamil text in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for integrating speech recognition and machine translation have been proposed, but the speech synthesis component has received little attention. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis component, the machine translation component, and their integration. We implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative, syllable-based speech synthesis technique. To retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.

  5. Speech impairment in Down syndrome: a review.

    Science.gov (United States)

    Kent, Ray D; Vorperian, Houri K

    2013-02-01

    This review summarizes research on disorders of speech production in Down syndrome (DS) for the purposes of informing clinical services and guiding future research. Review of the literature was based on searches using MEDLINE, Google Scholar, PsycINFO, and HighWire Press, as well as consideration of reference lists in retrieved documents (including online sources). Search terms emphasized functions related to voice, articulation, phonology, prosody, fluency, and intelligibility. The following conclusions pertain to four major areas of review: voice, speech sounds, fluency and prosody, and intelligibility. The first major area is voice. Although a number of studies have reported on vocal abnormalities in DS, major questions remain about the nature and frequency of the phonatory disorder. Results of perceptual and acoustic studies have been mixed, making it difficult to draw firm conclusions or even to identify sensitive measures for future study. The second major area is speech sounds. Articulatory and phonological studies show that speech patterns in DS are a combination of delayed development and errors not seen in typical development. Delayed (i.e., developmental) and disordered (i.e., nondevelopmental) patterns are evident by the age of about 3 years, although DS-related abnormalities possibly appear earlier, even in infant babbling. The third major area is fluency and prosody. Stuttering and/or cluttering occur in DS at rates of 10%-45%, compared with about 1% in the general population. Research also points to significant disturbances in prosody. The fourth major area is intelligibility. Studies consistently show marked limitations in this area, but only recently has the research gone beyond simple rating scales.

  6. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known apriori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden-Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detection certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
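HMM-based frame labeling of the kind described here can be sketched with a two-state Viterbi decoder over a single energy-like feature; the states, Gaussian emission parameters, and transition probabilities below are assumed for illustration and are not the authors' actual feature set or model:

```python
import numpy as np

def viterbi(obs, log_trans, log_init, means, stds):
    """Most likely state sequence for a 1-D Gaussian-emission HMM (log domain)."""
    def loglik(x):  # per-state Gaussian log-likelihood of one observation
        return -0.5 * ((x - means) / stds) ** 2 - np.log(stds * np.sqrt(2 * np.pi))
    T, S = len(obs), len(means)
    delta = log_init + loglik(obs[0])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # scores[i, j]: state i -> state j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + loglik(obs[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# State 0: language speech (high frame energy); state 1: NLSS such as breaths (low energy).
means, stds = np.array([5.0, 1.0]), np.array([1.0, 1.0])
log_trans = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
log_init = np.log(np.array([0.5, 0.5]))
frames = np.array([5.1, 4.8, 5.2, 1.1, 0.9, 1.2, 5.0])
labels = viterbi(frames, log_trans, log_init, means, stds)
```

The sticky transition probabilities discourage single-frame label flips, which is why an HMM is preferred over classifying each frame independently.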

  7. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  8. Apraxia of Speech

    Science.gov (United States)

    What is apraxia of speech? Apraxia of speech (AOS), also known as acquired apraxia of speech, ...

  9. The effects of acute tryptophan depletion on speech and behavioural mimicry in individuals at familial risk for depression

    NARCIS (Netherlands)

    Hogenelst, Koen; Sarampalis, Anastasios; Leander, N. Pontus; Müller, Barbara C.N.; Schoevers, Robert A.; aan het Rot, Marije

    Major depressive disorder (MDD) has been associated with abnormalities in speech and behavioural mimicry. These abnormalities may contribute to the impairments in interpersonal functioning that are often seen in MDD patients. MDD has also been associated with disturbances in the brain serotonin

  10. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech function in regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere--to support the production of vocal tract gestures that are not limited to speech processing.

  11. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
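The linear prediction model underlying the codecs surveyed in this record can be illustrated with the classic autocorrelation method plus Levinson-Durbin recursion; the synthetic AR(2) "speech" signal below is an assumption used to check that the recursion recovers known coefficients:

```python
import numpy as np

def lpc(signal, order):
    """Linear-prediction coefficients via the autocorrelation method and Levinson-Durbin."""
    x = np.asarray(signal, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i-1:0:-1])) / err   # reflection coefficient
        a[1:i] = a[1:i] + k * a[i-1:0:-1]                 # update previous coefficients
        a[i] = k
        err *= (1 - k * k)                                # shrink prediction error
    return a, err

# An AR(2) process standing in for voiced speech: x[n] = 1.3 x[n-1] - 0.4 x[n-2] + e[n]
rng = np.random.default_rng(1)
x = np.zeros(5000)
for n in range(2, len(x)):
    x[n] = 1.3 * x[n-1] - 0.4 * x[n-2] + rng.standard_normal()
a, err = lpc(x, 2)
```

For this process the recovered polynomial approximates [1, -1.3, 0.4], i.e. the predictor x̂[n] = 1.3 x[n-1] - 0.4 x[n-2]; a codec transmits such coefficients plus a residual instead of the raw waveform.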

  12. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  13. Clinical Analysis about Treatment of Myofascial Pain Syndrome (MPS) with Sweet Bee Venom on Hand Paresthesia based on Thoracic Outlet Syndrome

    Directory of Open Access Journals (Sweden)

    Sung-Won Oh

    2010-06-01

    Full Text Available Objectives: The objective of this study was to compare the effects of Sweet Bee Venom (Sweet BV) therapy between hand paresthesia patients with and without osteoporosis. Methods: This study was carried out to establish clinical criteria for hand paresthesia. Patients with a past history of diabetes or of neuropathy induced by alcohol or drugs, and those positive under Myofascial Pain Syndrome theory, were excluded. 32 patients with hand paresthesia of unknown cause were selected through the interview process. The effects of treatment were analyzed using VAS scores before treatment, after treatment, after 1 month, and after 3 months. Results and conclusion: After treatment, VAS scores decreased from 64.81±17.81 to 27.21±17.32 in the osteoporosis group and from 58.76±11.43 to 24.74±13.81 in the non-osteoporosis group. After 3 months, scores increased from 27.21±17.32 to 54.96±19.40 in the osteoporosis group but only from 24.74±13.81 to 32.43±15.57 in the non-osteoporosis group. Treatment was accordingly more effective in the non-osteoporosis group at 3 months: Sweet BV therapy for hand numbness was more effective in patients without osteoporosis than in those with it.

  14. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.
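The Friedman test named in this record's analysis compares the ranks of each participant's scores across the reading conditions; a minimal implementation of the statistic, with hypothetical stuttering-frequency data (not the study's), is:

```python
import numpy as np

def friedman_chi2(data):
    """Friedman test statistic for repeated measures.

    data: array of shape (n_subjects, k_conditions); returns the chi-square statistic.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    ranks = np.empty_like(data)
    for i, row in enumerate(data):
        order = row.argsort()
        r = np.empty(k)
        r[order] = np.arange(1, k + 1)      # rank within this subject's row
        for v in np.unique(row):            # average the ranks of any ties
            mask = row == v
            r[mask] = r[mask].mean()
        ranks[i] = r
    R = ranks.sum(axis=0)                   # rank sum per condition
    return 12.0 / (n * k * (k + 1)) * np.sum(R ** 2) - 3.0 * n * (k + 1)

# Hypothetical stuttering frequencies (% syllables) for 4 subjects x 4 conditions
# (control, then bouncing on 5%, 10%, and 15% of syllables):
data = [[8, 5, 4, 3],
        [9, 6, 5, 4],
        [7, 5, 4, 3],
        [10, 7, 5, 4]]
chi2 = friedman_chi2(data)
```

A large statistic (compared against a chi-square with k-1 degrees of freedom) justifies the follow-up pairwise Wilcoxon signed-rank tests the study describes.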

  15. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech sylla...

  17. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the safety upgrading programme of the Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  18. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating...

  19. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 

  20. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  1. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
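
    The waveform-coding branch described above can be illustrated with μ-law companding, the logarithmic amplitude compression used in ITU-T G.711 telephony. The sketch below is a simplified illustration of the continuous μ-law curve only (not the full G.711 segmented 8-bit encoding): small amplitudes are expanded before quantization and restored on decode.

```python
import math

MU = 255  # mu-law parameter used in North American / Japanese telephony

def mulaw_encode(x: float) -> float:
    """Compress a sample in [-1, 1] with the continuous mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_decode(y: float) -> float:
    """Invert the compression, recovering the original amplitude."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Quiet samples occupy a far larger share of the companded range, which is
# what protects them from coarse quantization: a 0.01 input maps to a value
# more than 20 times larger.
assert mulaw_encode(0.01) > 20 * 0.01
assert abs(mulaw_decode(mulaw_encode(0.5)) - 0.5) < 1e-9
```

    In a real codec the companded value would then be quantized to 8 bits before transmission; the logarithmic curve keeps the quantization noise roughly proportional to the signal level.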

  2. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    The theory of speech acts, which clarifies what people do when they speak, is not about individual words or sentences that form the basic elements of human communication, but rather about particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking; many things can be done by speaking. Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches from El-Sadat and El-Sisi, belonging to different periods, were analyzed to find out whether there were differences within this genre in the same culture or not. The study, which analyzed both speeches according to Searle's theory of speech acts, showed that there was a very small difference between them. In El-Sadat's speech, commissives occupied the first place; in El-Sisi's speech, assertives occupied the first place. Within the speeches of one culture, the differences depended on the circumstances surrounding the election of each President at the time. Speech acts were tools the Presidents used to convey what they wanted and to obtain support from their audiences.

  3. Speech Problems

    Science.gov (United States)

    KidsHealth / For Teens / Speech Problems: ... a person's ability to speak clearly. Some Common Speech and Language Disorders: Stuttering is a problem that ...

  4. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French- and English-speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenation algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. Improvements in the Perceptual Evaluation of Speech Quality (PESQ) value of 5% and of more than 20% are achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  5. A Danish open-set speech corpus for competing-speech studies

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten; Neher, Tobias

    2014-01-01

    Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed...

  6. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  7. Non-invasive mapping of bilateral motor speech areas using navigated transcranial magnetic stimulation and functional magnetic resonance imaging.

    Science.gov (United States)

    Könönen, Mervi; Tamsi, Niko; Säisänen, Laura; Kemppainen, Samuli; Määttä, Sara; Julkunen, Petro; Jutila, Leena; Äikiä, Marja; Kälviäinen, Reetta; Niskanen, Eini; Vanninen, Ritva; Karjalainen, Pasi; Mervaala, Esa

    2015-06-15

    Navigated transcranial magnetic stimulation (nTMS) is a modern, precise method to activate and study cortical functions noninvasively. We hypothesized that a combination of nTMS and functional magnetic resonance imaging (fMRI) could clarify the localization of functional areas involved in motor control and production of speech. Navigated repetitive TMS (rTMS) with short bursts was used to map speech areas on both hemispheres by inducing speech disruption during number recitation tasks in healthy volunteers. Two experienced video reviewers, blinded to the stimulated area, graded each trial offline according to possible speech disruption. The locations of speech-disrupting nTMS trials were overlaid with fMRI activations from a word generation task. Speech disruptions were produced on both hemispheres by nTMS, though there were more disruptive stimulation sites on the left hemisphere. The grade of the disruptions varied from a subjective sensation, to mild but objectively recognizable disruption, up to total speech arrest. The distribution of locations in which speech disruptions could be elicited varied among individuals. On the left hemisphere, the locations of disturbing rTMS bursts with reviewers' verification followed the areas of fMRI activation. A similar pattern was not observed on the right hemisphere. The reviewer-verified speech disruptions induced by nTMS provided clinically relevant information, and fMRI might further explain the function of the cortical area. nTMS and fMRI complement each other, and their combination should be advocated when assessing individual localization of the speech network. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  9. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  10. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based on this model. The basic model used in this thesis is the harmonic model, which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...

  11. Office noise: Can headphones and masking sound attenuate distraction by background speech?

    Science.gov (United States)

    Jahncke, Helena; Björkeholm, Patrik; Marsh, John E; Odelius, Johan; Sörqvist, Patrik

    2016-11-22

    Background speech is one of the most disturbing noise sources at shared workplaces in terms of both annoyance and performance-related disruption. Therefore, it is important to identify techniques that can efficiently protect performance against distraction. It is also important that the techniques are perceived as satisfactory and are subjectively evaluated as effective in their capacity to reduce distraction. The aim of the current study was to compare three methods of attenuating distraction from background speech: masking a background voice with nature sound through headphones, masking a background voice with other voices through headphones, and merely wearing headphones (without masking) as a way to attenuate the background sound. Quiet was deployed as a baseline condition. Thirty students participated in an experiment employing a repeated measures design. Performance (serial short-term memory) was impaired by background speech (1 voice), but this impairment was attenuated when the speech was masked - and in particular when it was masked by nature sound. Furthermore, perceived workload was lowest in the quiet condition and significantly higher in all other sound conditions. Notably, the headphones tested as a sound-attenuating device (i.e. without masking) did not protect against the effects of background speech on performance and subjective workload. Nature sound was the only masking condition that worked as a protector of performance, at least in the context of the serial recall task. However, despite the attenuation of distraction by nature sound, perceived workload was still high - suggesting that it is difficult to find a masker that is both effective and perceived as satisfactory.

  12. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    Science.gov (United States)

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
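
    The stimulus manipulation described, discarding target components whose local SNR falls below a criterion, can be sketched as a simple time-frequency mask. The sketch below is a minimal illustration operating on magnitude spectrograms stored as NumPy arrays; the function name, array shapes, and the `criterion_db` parameter are assumptions for illustration, not the authors' actual processing chain.

```python
import numpy as np

def discard_low_snr_bins(target_spec, interferer_spec, criterion_db=0.0, eps=1e-12):
    """Zero the target's time-frequency bins whose local SNR (in dB) is below
    `criterion_db`; bins at or above the criterion pass through unchanged."""
    snr_db = 20.0 * np.log10((np.abs(target_spec) + eps) /
                             (np.abs(interferer_spec) + eps))
    return np.where(snr_db >= criterion_db, target_spec, 0.0)

# A 0 dB criterion removes every bin where the interferer is at least as
# strong as the target:
target = np.array([[1.0, 0.1],
                   [0.5, 2.0]])
interferer = np.array([[0.5, 1.0],
                       [0.5, 1.0]])
masked = discard_low_snr_bins(target, interferer, criterion_db=0.0)
```

    Raising `criterion_db` above 0 dB also strips positive-SNR components, which is where the study found intelligibility for spatially separated speakers begins to degrade.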

  13. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  14. Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting.

    Science.gov (United States)

    Klatte, Maria; Lachmann, Thomas; Meis, Markus

    2010-01-01

    The effects of classroom noise and background speech on speech perception, measured by word-to-picture matching, and listening comprehension, measured by execution of oral instructions, were assessed in first- and third-grade children and adults in a classroom-like setting. For speech perception, in addition to noise, reverberation time (RT) was varied by conducting the experiment in two virtual classrooms with mean RT = 0.47 versus RT = 1.1 s. Children were more impaired than adults by background sounds in both speech perception and listening comprehension. Classroom noise evoked a reliable disruption in children's speech perception even under conditions of short reverberation. RT had no effect on speech perception in silence, but evoked a severe increase in the impairments due to background sounds in all age groups. For listening comprehension, impairments due to background sounds were found in the children, stronger for first- than for third-graders, whereas adults were unaffected. Compared to classroom noise, background speech had a smaller effect on speech perception, but a stronger effect on listening comprehension, remaining significant when speech perception was controlled. This indicates that background speech affects higher-order cognitive processes involved in children's comprehension. Children's ratings of the sound-induced disturbance were low overall and uncorrelated to the actual disruption, indicating that the children did not consciously realize the detrimental effects. The present results confirm earlier findings on the substantial impact of noise and reverberation on children's speech perception, and extend these to classroom-like environmental settings and listening demands closely resembling those faced by children at school.

  15. [Transcortical aphasia and echolalia; problems of speech initiative].

    Science.gov (United States)

    Környey, E

    1975-05-01

    Transcortical aphasia accompanied by echolalia occurs with malacias involving the postero-median part of the frontal lobe, which includes the supplementary motor field of Penfield and is nourished by the anterior cerebral artery. The syndrome manifests itself in such cases, even in fine details, in the same form as it does in Pick's atrophy. The same also holds true for cases in which a tumour involves the region mentioned. Sentences or fragments of sentences are echolalised; the tendency to perseveration is very marked. It is hardly, if at all, possible to evaluate the verbal understanding of these patients. Analysis of their behaviour supports the assumption that they have not lost the adaptation to some situations. Echolalia is often associated with forced grasping and other compulsory phenomena. Therefore, it may be interpreted as a sign of disinhibition of the acousticomotor reflex present during the development of speech. Competition between intentionality and the appearance of compulsory phenomena greatly depends on the general condition of the patient, particularly on the clarity of consciousness. The integrity of the postero-median part of the frontal lobe is indispensable for a normal reaction by speech to stimuli received from the sensory areas. The influence of the supplementary motor field on speech intention seems to be linked to the dominant hemisphere. When lesions of the territory of the anterior cerebral artery and of the cortico-bulbar neuron system coexist in the dominant hemisphere, the speech disturbance shifts to complete motor aphasia. In such cases the pathomechanism is analogous to that of the syndrome of Liepmann, i.e., right-sided hemiparesis with left-sided apraxia. So-called transcortical motor aphasia without echolalia can be caused by loss of stimuli from the sensory fields.

  16. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the Up movie. Central to the analysis is the form of dialogue in the Up movie that contains speech functions and speech roles, i.e., statement, offer, question, command, giving, and demanding. 269 dialogues delivered by the actors were interpreted, showing how speech function and speech role are used.

  17. Cleaning and decompression of inferior alveolar canal to treat dysesthesia and paresthesia following endodontic treatment of a third molar

    Directory of Open Access Journals (Sweden)

    Rudy Scala

    2014-01-01

    Endodontic overfilling involving the mandibular canal may cause an injury of the inferior alveolar nerve (IAN). We report a case of disabling dysesthesia and paresthesia of a 70-year-old man after endodontic treatment of his mandibular left third molar that caused leakage of root canal filling material into the mandibular canal. After radiographic evaluation, extraction of the third molar and distal osteotomy, a surgical exploration was performed and followed by removal of the material and decompression of the IAN. The patient reported an improvement in sensation and immediate disappearance of dysesthesia already from the first postoperative day.

  18. Cleaning and decompression of inferior alveolar canal to treat dysesthesia and paresthesia following endodontic treatment of a third molar.

    Science.gov (United States)

    Scala, Rudy; Cucchi, Alessandro; Cappellina, Luca; Ghensi, Paolo

    2014-01-01

    Endodontic overfilling involving the mandibular canal may cause an injury of the inferior alveolar nerve (IAN). We report a case of disabling dysesthesia and paresthesia of a 70-year-old man after endodontic treatment of his mandibular left third molar that caused leakage of root canal filling material into the mandibular canal. After radiographic evaluation, extraction of the third molar and distal osteotomy, a surgical exploration was performed and followed by removal of the material and decompression of the IAN. The patient reported an improvement in sensation and immediate disappearance of dysesthesia already from the first postoperative day.

  19. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
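    The calculation scheme the abstract describes — an apparent speech-to-noise ratio per frequency band, clipped and summed with a weighting — can be sketched as follows. This is a minimal illustration, assuming seven octave bands and made-up importance weights; it does not reproduce the standardized ANSI S3.5 (SII) or IEC 60268-16 (STI) values.

    ```python
    # Illustrative sketch of an STI/SII-style index: per-band apparent SNR is
    # clipped to +/-15 dB, mapped to a 0..1 transmission index, and combined
    # using band-importance weights. The weights below are placeholders for
    # illustration only, not the standardized ANSI S3.5 / IEC 60268-16 values.
    OCTAVE_BANDS_HZ = [125, 250, 500, 1000, 2000, 4000, 8000]
    BAND_WEIGHTS = [0.05, 0.10, 0.15, 0.25, 0.25, 0.15, 0.05]  # sum to 1.0

    def transmission_index(snr_db: float) -> float:
        """Clip the apparent SNR to the +/-15 dB range and normalize to [0, 1]."""
        snr = max(-15.0, min(15.0, snr_db))
        return (snr + 15.0) / 30.0

    def weighted_index(band_snrs_db: list[float]) -> float:
        """Weighted sum of per-band transmission indices (0 = unintelligible, 1 = perfect)."""
        return sum(w * transmission_index(s) for w, s in zip(BAND_WEIGHTS, band_snrs_db))
    ```

    Under this scheme a uniform 0 dB SNR in every band yields an index of 0.5, and +15 dB everywhere yields 1.0, which is why the methods compared in the study all reduce to a band-weighted SNR summary.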

  20. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  1. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  2. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering ... evaluation tools that can help identify and diagnose speech disorders: Denver Articulation Screening Examination Goldman-Fristoe Test of ...

  3. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  4. A glimpsing account of the role of temporal fine structure information in speech recognition.

    Science.gov (United States)

    Apoux, Frédéric; Healy, Eric W

    2013-01-01

    Many behavioral studies have reported a significant decrease in intelligibility when the temporal fine structure (TFS) of a sound mixture is replaced with noise or tones (i.e., vocoder processing). This finding has led to the conclusion that TFS information is critical for speech recognition in noise. How the normal auditory system takes advantage of the original TFS, however, remains unclear. Three experiments on the role of TFS in noise are described. All three experiments measured speech recognition in various backgrounds while manipulating the envelope, TFS, or both. One experiment tested the hypothesis that vocoder processing may artificially increase the apparent importance of TFS cues. Another experiment evaluated the relative contribution of the target and masker TFS by disturbing only the TFS of the target or that of the masker. Finally, a last experiment evaluated the relative contribution of envelope and TFS information. In contrast to previous studies, however, the original envelope and TFS were both preserved - to some extent - in all conditions. Overall, the experiments indicate a limited influence of TFS and suggest that little speech information is extracted from the TFS. Concomitantly, these experiments confirm that most speech information is carried by the temporal envelope in real-world conditions. When interpreted within the framework of the glimpsing model, the results of these experiments suggest that TFS is primarily used as a grouping cue to select the time-frequency regions corresponding to the target speech signal.

  5. Speech recognition in natural background noise.

    Directory of Open Access Journals (Sweden)

    Julien Meyer

    Full Text Available In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility performance increases with the separation of the listener from the speaker. The present behavioral experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, basically changing the signal-to-noise ratio (SNR). Therefore, our study draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), reference at 1 meter) at distances between 11 and 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (-8.8 dB to -18.4 dB). Our results showed that in such conditions, the identity of vowels is mostly preserved, with the striking peculiarity of the absence of confusion in vowels. The results also confirmed the functional role of consonants during lexical identification. The extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether these analyses allowed us to extract a resistance scale from consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on the position in the words (onset vs. coda). Finally our data suggested that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.

  6. Speech recognition in natural background noise.

    Science.gov (United States)

    Meyer, Julien; Dentel, Laure; Meunier, Fanny

    2013-01-01

    In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility performance increases with the separation of the listener from the speaker. The present behavioral experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, basically changing the signal-to-noise ratio (SNR). Therefore, our study draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), reference at 1 meter) at distances between 11 and 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (-8.8 dB to -18.4 dB). Our results showed that in such conditions, the identity of vowels is mostly preserved, with the striking peculiarity of the absence of confusion in vowels. The results also confirmed the functional role of consonants during lexical identification. The extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether these analyses allowed us to extract a resistance scale from consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on the position in the words (onset vs. coda). Finally our data suggested that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.
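    The spherical-spreading arithmetic behind the reported SNR range can be reproduced in a small sketch. The roughly 53.3 dB(A) noise floor used below is an assumption back-derived from the figures stated in the abstract (65.3 dB(A) at the 1 m reference and -8.8 dB SNR at 11 m); it is not a value the study reports directly.

    ```python
    import math

    REF_LEVEL_DBA = 65.3    # speech level at the 1 m reference (from the abstract)
    NOISE_FLOOR_DBA = 53.3  # assumed constant natural-noise level, back-derived

    def speech_level_dba(distance_m: float) -> float:
        """Free-field spherical spreading: -20*log10(d) dB relative to 1 m,
        i.e. a 6 dB drop per doubling of distance."""
        return REF_LEVEL_DBA - 20.0 * math.log10(distance_m)

    def snr_db(distance_m: float) -> float:
        """SNR against a distance-independent noise floor."""
        return speech_level_dba(distance_m) - NOISE_FLOOR_DBA

    # Tripling the distance (11 m -> 33 m) costs 20*log10(3), about 9.5 dB of SNR,
    # which matches the reported span from -8.8 dB down to -18.4 dB.
    ```

    With these assumptions, snr_db(11) comes out near -8.8 dB and snr_db(33) near -18.4 dB, showing that the tested distances and SNRs are two views of the same spherical-attenuation model.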

  7. Noise disturbance in open-plan study environments: a field study on noise sources, student tasks and room acoustic parameters.

    Science.gov (United States)

    Braat-Eggen, P Ella; van Heijst, Anne; Hornikx, Maarten; Kohlrausch, Armin

    2017-09-01

    The aim of this study is to gain more insight into the assessment of noise in open-plan study environments and to reveal correlations between the noise disturbance experienced by students and the noise sources they perceive, the tasks they perform and the acoustic parameters of the open-plan study environment they work in. Data were collected in five open-plan study environments at universities in the Netherlands. A questionnaire was used to investigate student tasks, perceived sound sources and their perceived disturbance, and sound measurements were performed to determine the room acoustic parameters. This study shows that 38% of the surveyed students are disturbed by background noise in an open-plan study environment. Students are mostly disturbed by speech when performing complex cognitive tasks like studying for an exam, reading and writing. Significant but weak correlations were found between the room acoustic parameters and noise disturbance of students. Practitioner Summary: A field study was conducted to gain more insight into the assessment of noise in open-plan study environments at universities in the Netherlands. More than one third of the students were disturbed by noise. An interaction effect was found for task type, source type and room acoustic parameters.

  8. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  9. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  10. Maxillary distraction versus orthognathic surgery in cleft lip and palate patients: effects on speech and velopharyngeal function.

    Science.gov (United States)

    Chua, H D P; Whitehill, T L; Samman, N; Cheung, L K

    2010-07-01

    This clinical randomized controlled trial was performed to compare the effects of distraction osteogenesis (DO) and conventional orthognathic surgery (CO) on velopharyngeal function and speech outcomes in cleft lip and palate (CLP) patients. Twenty-one CLP patients who required maxillary advancement ranging from 4 to 10 mm were recruited and randomly assigned to either CO or DO. Evaluation of resonance and nasal emission, nasoendoscopic velopharyngeal assessment and nasometry were performed preoperatively and at a minimum of two postoperative times: 3-8 months (mean 4 months) and 12-29 months (mean 17 months). Results showed no significant differences in speech and velopharyngeal function changes between the two groups. No correlation was found between the amount of advancement and the outcome measures. It was concluded that DO has no advantage over CO for the purpose of preventing velopharyngeal incompetence and speech disturbance in moderate cleft maxillary advancement. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  11. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for the speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transfers human speech into transcript text by matching it with the system's library. This is particularly useful in speech rehabilitation therapies as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR is dependent on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as our depth of knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.

  12. Speech and Language Delay

    Science.gov (United States)

    ... OTC Relief for Diarrhea Home Diseases and Conditions Speech and Language Delay Condition Speech and Language Delay Share Print Table of Contents1. ... Treatment6. Everyday Life7. Questions8. Resources What is a speech and language delay? A speech and language delay ...

  13. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects' received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  14. The role of intraoperative positioning of the inferior alveolar nerve on postoperative paresthesia after bilateral sagittal split osteotomy of the mandible: prospective clinical study

    Czech Academy of Sciences Publication Activity Database

    Hanzelka, T.; Foltán, R.; Pavlíková, G.; Horká, E.; Šedý, Jiří

    2011-01-01

    Roč. 40, č. 9 (2011), s. 901-906 ISSN 0901-5027 R&D Projects: GA MŠk(CZ) LC554; GA ČR GAP304/10/0320 Grant - others:GA MŠk(CZ) 1M0538 Program:1M Institutional research plan: CEZ:AV0Z50390703 Keywords : orthognathic surgery * paresthesia * bilateral sagittal split osteotomy Subject RIV: FJ - Surgery incl. Transplants; FH - Neurology (UEM-P) Impact factor: 1.506, year: 2011

  15. The linguistics of schizophrenia: thought disturbance as language pathology across positive symptoms.

    Science.gov (United States)

    Hinzen, Wolfram; Rosselló, Joana

    2015-01-01

    We hypothesize that linguistic (dis-)organization in the schizophrenic brain plays a more central role in the pathogenesis of this disease than commonly supposed. Against the standard view, that schizophrenia is a disturbance of thought or selfhood, we argue that the origins of the relevant forms of thought and selfhood at least partially depend on language. The view that they do not is premised by a theoretical conception of language that we here identify as 'Cartesian' and contrast with a recent 'un-Cartesian' model. This linguistic model empirically argues for both (i) a one-to-one correlation between human-specific thought or meaning and forms of grammatical organization, and (ii) an integrative and co-dependent view of linguistic cognition and its sensory-motor dimensions. Core dimensions of meaning mediated by grammar on this model specifically concern forms of referential and propositional meaning. A breakdown of these is virtually definitional of core symptoms. Within this model the three main positive symptoms of schizophrenia fall into place as failures in language-mediated forms of meaning, manifest either as a disorder of speech perception (Auditory Verbal Hallucinations), abnormal speech production running without feedback control (Formal Thought Disorder), or production of abnormal linguistic content (Delusions). Our hypothesis makes testable predictions for the language profile of schizophrenia across symptoms; it simplifies the cognitive neuropsychology of schizophrenia while not being inconsistent with a pattern of neurocognitive deficits and their correlations with symptoms; and it predicts persistent findings on disturbances of language-related circuitry in the schizophrenic brain.

  16. The linguistics of schizophrenia: thought disturbance as language pathology across positive symptoms

    Directory of Open Access Journals (Sweden)

    Wolfram eHinzen

    2015-07-01

    Full Text Available We hypothesize that linguistic (dis-)organization in the schizophrenic brain plays a much more central role in the pathogenesis of this disease than commonly supposed. Against the standard view, that schizophrenia is a disturbance of thought or selfhood, we argue that the origins of the relevant forms of thought and selfhood at least partially depend on language. The view that they do not is premised by a theoretical conception of language that we here identify as ‘Cartesian’ and contrast with a recent ‘un-Cartesian’ model. This linguistic model empirically argues for both (i) a one-to-one correlation between human-specific thought or meaning and forms of grammatical organization, and (ii) an integrative and co-dependent view of linguistic cognition and its sensory-motor dimensions. Core dimensions of meaning mediated by grammar on this model specifically concern forms of referential and propositional meaning. A breakdown of these is virtually definitional of core symptoms. Within this model the three main positive symptoms of schizophrenia fall into place as failures in language-mediated forms of meaning, manifest either as a disorder of speech perception (Auditory Verbal Hallucinations, AVHs), abnormal speech production running without feedback control (Formal Thought Disorder, FTD), or production of abnormal linguistic content (Delusions). Our hypothesis makes testable predictions for the language profile of schizophrenia across symptoms; it simplifies the cognitive neuropsychology of schizophrenia while not being inconsistent with a pattern of neurocognitive deficits and their correlations with symptoms; and it predicts persistent findings on disturbances of language-related circuitry in the schizophrenic brain.

  17. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate … of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase … visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli only showed a McGurk effect when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding…

  18. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    ... Staying Safe Videos for Educators Search English Español Speech-Language Therapy KidsHealth / For Parents / Speech-Language Therapy ... most kids with speech and/or language disorders. Speech Disorders, Language Disorders, and Feeding Disorders A speech ...

  19. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  20. Developmental apraxia of speech in children. Quantitative assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  1. Disturbed neural circuits in a subtype of chronic catatonic schizophrenia demonstrated by F-18-FDG-PET and F-18-DOPA-PET

    International Nuclear Information System (INIS)

    Lauer, M.; Beckmann, H.; Stoeber, G.; Schirrmeister, H.; Gerhard, A.; Ellitok, E.; Reske, S.N.

    2001-01-01

    Permanent verbal, visual scenic and coenaesthetic hallucinations are the most prominent psychopathological symptoms aside from psychomotor disorders in speech-sluggish catatonia, a subtype of chronic catatonic schizophrenia according to Karl Leonhard. These continuous hallucinations serve as an excellent paradigm for the investigation of the assumed functional disturbances of cortical circuits in schizophrenia. Data from positron emission tomography (F-18-FDG-PET and F-18-DOPA-PET) from three patients with this rare phenotype were available (two cases of simple speech-sluggish catatonia, one case of a combined speech-prompt/speech-sluggish subtype) and were compared with a control collective. During their permanent hallucinations, all catatonic patients showed a clear bitemporal hypometabolism in the F-18-FDG-PET. Both patients with the simple speech-sluggish catatonia showed an additional bilateral thalamic hypermetabolism and an additional bilateral hypometabolism of the frontal cortex, especially on the left side. In contrast, the patient with the combined speech-prompt/speech-sluggish catatonia showed a bilateral thalamic hypometabolism combined with a bifrontal cortical hypermetabolism. However, the left/right ratio of the frontal cortex also showed a lateralization effect with a clear relative hypometabolism of the left frontal cortex. The F-18-DOPA-PET of both schizophrenic patients with simple speech-sluggish catatonia showed normal F-18-DOPA storage in the striatum, whereas in the patient with the combined form a higher right/left ratio of F-18-DOPA storage was discernible in the right putamen, indicating an additional lateralized influence of the dopaminergic system in this subtype of chronic catatonic schizophrenia. (author)

  2. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.

  3. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    Science.gov (United States)

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  4. Aberrant connectivity of areas for decoding degraded speech in patients with auditory verbal hallucinations.

    Science.gov (United States)

    Clos, Mareike; Diederen, Kelly M J; Meijering, Anne Lotte; Sommer, Iris E; Eickhoff, Simon B

    2014-03-01

    Auditory verbal hallucinations (AVH) are a hallmark of psychotic experience. Various mechanisms including misattribution of inner speech and imbalance between bottom-up and top-down factors in auditory perception potentially due to aberrant connectivity between frontal and temporo-parietal areas have been suggested to underlie AVH. Experimental evidence for disturbed connectivity of networks sustaining auditory-verbal processing is, however, sparse. We compared functional resting-state connectivity in 49 psychotic patients with frequent AVH and 49 matched controls. The analysis was seeded from the left middle temporal gyrus (MTG), thalamus, angular gyrus (AG) and inferior frontal gyrus (IFG) as these regions are implicated in extracting meaning from impoverished speech-like sounds. Aberrant connectivity was found for all seeds. Decreased connectivity was observed between the left MTG and its right homotope, between the left AG and the surrounding inferior parietal cortex (IPC) and the left inferior temporal gyrus, between the left thalamus and the right cerebellum, as well as between the left IFG and left IPC, and dorsolateral and ventrolateral prefrontal cortex (DLPFC/VLPFC). Increased connectivity was observed between the left IFG and the supplementary motor area (SMA) and the left insula and between the left thalamus and the left fusiform gyrus/hippocampus. The predisposition to experience AVH might result from decoupling between the speech production system (IFG, insula and SMA) and the self-monitoring system (DLPFC, VLPFC, IPC) leading to misattribution of inner speech. Furthermore, decreased connectivity between nodes involved in speech processing (AG, MTG) and other regions implicated in auditory processing might reflect aberrant top-down influences in AVH.

  5. The speech perception skills of children with and without speech sound disorder.

    Science.gov (United States)

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  7. Eczema Is Associated with Childhood Speech Disorder: A Retrospective Analysis from the National Survey of Children's Health and the National Health Interview Survey.

    Science.gov (United States)

    Strom, Mark A; Silverberg, Jonathan I

    2016-01-01

    To determine if eczema is associated with an increased risk of a speech disorder. We analyzed data on 354,416 children and adolescents from 19 US population-based cohorts: the 2003-2004 and 2007-2008 National Survey of Children's Health and the 1997-2013 National Health Interview Survey, each a prospective, questionnaire-based cohort. In multivariate survey logistic regression models adjusting for sociodemographics and comorbid allergic disease, eczema was significantly associated with higher odds of speech disorder in 12 of 19 cohorts. The prevalence of speech disorder in children with eczema was 4.7% (95% CI 4.5%-5.0%) compared with 2.2% (95% CI 2.2%-2.3%) in children without eczema. In pooled multivariate analysis, eczema was associated with increased odds of speech disorder (aOR [95% CI] 1.81 [1.57-2.05]). History of eczema was also associated with moderate (2.35 [1.34-4.10], P = .003) and severe (2.28 [1.11-4.72], P = .03) speech disorder. Finally, significant interactions were found, such that children with both eczema and attention deficit disorder with or without hyperactivity, or sleep disturbance, had a far higher risk of speech disorders than with either condition alone. Pediatric eczema may be associated with increased risk of speech disorder. Further prospective studies are needed to characterize the exact nature of this association. Copyright © 2016 Elsevier Inc. All rights reserved.
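    The pooled estimate above comes from a multivariate survey logistic regression, but an unadjusted odds ratio can be recovered directly from the two prevalences the abstract reports. A minimal sketch (the 1.81 aOR additionally adjusts for sociodemographics and comorbid allergic disease, so it is lower than this crude figure):

```python
def odds_ratio(p_exposed, p_unexposed):
    """Unadjusted odds ratio from two prevalences (proportions in [0, 1))."""
    odds_e = p_exposed / (1 - p_exposed)
    odds_u = p_unexposed / (1 - p_unexposed)
    return odds_e / odds_u

# Prevalence of speech disorder: 4.7% with eczema vs 2.2% without (from the abstract).
or_unadjusted = odds_ratio(0.047, 0.022)
print(f"unadjusted OR = {or_unadjusted:.2f}")  # ~2.19; the adjusted model reports 1.81
```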

  8. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    Full Text Available The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory of “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance. The aim of the article is to contribute to a more thorough understanding of the nature of hate speech by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech; it is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience. The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions.

  9. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  10. Clear Speech - Mere Speech? How segmental and prosodic speech reduction shape the impression that speakers create on listeners

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2017-01-01

    ...... whether variation in the degree of reduction also has a systematic effect on the attributes we ascribe to the speaker who produces the speech signal. A perception experiment was carried out for German in which 46 listeners judged whether or not speakers showing 3 different combinations of segmental and prosodic reduction levels (unreduced, moderately reduced, strongly reduced) are appropriately described by 13 physical, social, and cognitive attributes. The experiment shows that clear speech is not mere speech, and less clear speech is not just reduced either. Rather, results revealed a complex interplay of reduction levels and perceived speaker attributes in which moderate reduction can make a better impression on listeners than no reduction. In addition to its relevance in reduction models and theories, this interplay is instructive for various fields of speech application from social robotics to charisma......

  11. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  12. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  13. PRACTICING SPEECH THERAPY INTERVENTION FOR SOCIAL INTEGRATION OF CHILDREN WITH SPEECH DISORDERS

    Directory of Open Access Journals (Sweden)

    Martin Ofelia POPESCU

    2016-11-01

    Full Text Available The article presents a concise speech correction intervention program for dyslalia, in conjunction with the development of intrapersonal and interpersonal capacities and the social integration of children with speech disorders. The program's main objectives are: to increase the potential for individual social integration by correcting speech disorders in conjunction with intra- and interpersonal capacities, and to increase the potential of children and community groups for social integration by optimizing the socio-relational context of children with speech disorders. The program included 60 children/students with dyslalia speech disorders (monomorphic and polymorphic dyslalia) from 11 educational institutions - 6 kindergartens and 5 schools/secondary schools - affiliated with the inter-school logopedic centre (CLI) of Targu Jiu city and areas of Gorj district. The program was implemented under the assumption that therapeutic-formative intervention to correct speech disorders and facilitate social integration would, in combination with the correction of pronunciation disorders, optimize the social integration of children with speech disorders. The results confirm the hypothesis and attest to the efficiency of the intervention program.

  14. Schizophrenia alters intra-network functional connectivity in the caudate for detecting speech under informational speech masking conditions.

    Science.gov (United States)

    Zheng, Yingjun; Wu, Chao; Li, Juanhua; Li, Ruikeng; Peng, Hongjun; She, Shenglin; Ning, Yuping; Li, Liang

    2018-04-04

    Speech recognition under noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant-signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits of various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants. Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with the speech-detection performance under the speech-masking conditions. Compared to controls, patients showed an altered spatial activity pattern and decreased intra-network FC in the caudate. In people with schizophrenia, the declined speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its functions of suppressing masking-speech signals.

  15. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Speech and language therapy has moved from a medical focus towards a preventive focus. However, difficulties are evident in the development of this preventive work, because more space is devoted to the correction of language disorders. Since speech disorders are the most frequently occurring dysfunction, the preventive work carried out to avoid their appearance acquires special importance. Speech education from early childhood makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  16. Speech and Speech-Related Quality of Life After Late Palate Repair: A Patient's Perspective.

    Science.gov (United States)

    Schönmeyr, Björn; Wendby, Lisa; Sharma, Mitali; Jacobson, Lia; Restrepo, Carolina; Campbell, Alex

    2015-07-01

    Many patients with cleft palate deformities worldwide receive treatment at a later age than is recommended for normal speech to develop. The outcomes after late palate repairs in terms of speech and quality of life (QOL) still remain largely unstudied. In the current study, questionnaires were used to assess the patients' perception of speech and QOL before and after primary palate repair. All of the patients were operated on at a cleft center in northeast India and had a cleft palate with a normal lip or with a cleft lip that had been previously repaired. A total of 134 patients (7-35 years) were interviewed preoperatively and 46 patients (7-32 years) were assessed in the postoperative survey. The survey showed that scores based on the speech handicap index, concerning speech and speech-related QOL, did not improve postoperatively. In fact, the questionnaires indicated that the speech became more unpredictable after surgery. Nevertheless, a majority of patients reported that their self-confidence had improved after the operation. Thus, the majority of interviewed patients who underwent late primary palate repair were satisfied with the surgery. At the same time, speech and speech-related QOL did not improve according to the speech handicap index-based survey. Speech predictability may even become worse and nasal regurgitation may increase after late palate repair, according to these results.

  17. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  18. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  19. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  1. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.
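    The "normal phonemic categorical boundary" described above is typically estimated by fitting a sigmoid psychometric function to identification or discrimination proportions along a continuum such as ADA-AGA, and reading off the 50% crossover. A minimal grid-search sketch with invented response proportions (not the patient's data):

```python
import math

def logistic(x, x0, k):
    """Psychometric function: probability of a 'AGA' response at continuum step x."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def fit_boundary(steps, p_aga):
    """Grid-search the boundary x0 (50% point) and slope k minimizing squared error."""
    best = (None, None, float("inf"))
    for x0 in [i / 10 for i in range(10, 71)]:    # candidate boundaries 1.0..7.0
        for k in [j / 10 for j in range(5, 51)]:  # candidate slopes 0.5..5.0
            err = sum((logistic(x, x0, k) - p) ** 2 for x, p in zip(steps, p_aga))
            if err < best[2]:
                best = (x0, k, err)
    return best

# Hypothetical identification data along a 7-step ADA-AGA continuum:
steps = [1, 2, 3, 4, 5, 6, 7]
p_aga = [0.02, 0.05, 0.10, 0.50, 0.90, 0.95, 0.98]

x0, k, err = fit_boundary(steps, p_aga)
print(f"category boundary near step {x0:.1f}")  # the 50% crossover of the sigmoid
```

    A sharp sigmoid (large k, low error) indicates categorical perception; a flat fit would suggest the boundary is degraded.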

  2. Disturbance hydrology: Preparing for an increasingly disturbed future

    Science.gov (United States)

    Mirus, Benjamin B.; Ebel, Brian A.; Mohr, Christian H.; Zegre, Nicolas

    2017-01-01

    This special issue is the result of several fruitful conference sessions on disturbance hydrology, which started at the 2013 AGU Fall Meeting in San Francisco and have continued every year since. The stimulating presentations and discussions surrounding those sessions have focused on understanding both the disruption of hydrologic functioning following discrete disturbances, as well as the subsequent recovery or change within the affected watershed system. Whereas some hydrologic disturbances are directly linked to anthropogenic activities, such as resource extraction, the contributions to this special issue focus primarily on those with indirect or less pronounced human involvement, such as bark-beetle infestation, wildfire, and other natural hazards. However, human activities are enhancing the severity and frequency of these seemingly natural disturbances, thereby contributing to acute hydrologic problems and hazards. Major research challenges for our increasingly disturbed planet include the lack of continuous pre- and post-disturbance monitoring, hydrologic impacts that vary spatially and temporally based on environmental and hydroclimatic conditions, and the preponderance of overlapping or compounding disturbance sequences. In addition, a conceptual framework for characterizing commonalities and differences among hydrologic disturbances is still in its infancy. In this introduction to the special issue, we advance the fusion of concepts and terminology from ecology and hydrology to begin filling this gap. We briefly explore some preliminary approaches for comparing different disturbances and their hydrologic impacts, which provides a starting point for further dialogue and research progress.

  3. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  4. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    Science.gov (United States)

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  5. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  6. Disturbing forest disturbances

    Energy Technology Data Exchange (ETDEWEB)

    Volney, W.J.A.; Hirsch, K.G. [Natural Resources Canada, Canadian Forest Service, Northern Forestry Centre, Edmonton, AB (Canada)

    2005-10-01

    This paper described the role that disturbances play in maintaining the ecological integrity of Canadian boreal forests. Potential adaptation options to address the challenges that these disturbances present were also examined. Many forest ecosystems need fire for regeneration, while other forests rely on a cool, wet disintegration process driven by insects and commensal fungi feeding on trees to effect renewal. While there are characteristic natural, temporal and spatial patterns to these disturbances, recent work has demonstrated that the disturbances are being perturbed by climatic change that has been compounded by anthropogenic disturbances in forests. Fire influences species composition and age structure, regulates forest insects and diseases, affects nutrient cycling and energy fluxes, and maintains the productivity of different habitats. Longer fire seasons as a result of climatic change will lead to higher-intensity fires that may more easily evade initial attack and become problematic. Fire regimes elevated beyond the range of natural variation will have a dramatic effect on the regional distribution and functioning of forest ecosystems and pose a threat to the safety and prosperity of people. While it was acknowledged that if insect outbreaks were to be controlled on the entire forest estate, the productivity represented by dead wood would be lost, it was suggested that insects such as the forest tent caterpillar and the spruce budworm may also pose a greater threat as the climate gets warmer and drier. Together with fungal associates, saproxylic arthropods are active in nutrient cycling and ultimately determine the fertility of forest sites. It was suggested that the production of an age-class structure and forest mosaic would render the forest landscape less vulnerable to the more negative aspects of climate change on vegetation response. It was concluded that novel management design paradigms are needed to successfully reduce the risk from these threats.

  7. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR applications

  8. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  9. An analysis of the masking of speech by competing speech using self-report data.

    Science.gov (United States)

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  10. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) the visual perception of speech relies on visual pathway representations of speech qua speech; (2) a proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS); (3) given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  11. Part-of-speech effects on text-to-speech synthesis

    CSIR Research Space (South Africa)

    Schlunz, GI

    2010-11-01

    Full Text Available One of the goals of text-to-speech (TTS) systems is to produce natural-sounding synthesised speech. Towards this end various natural language processing (NLP) tasks are performed to model the prosodic aspects of the TTS voice. One of the fundamental...

  12. 75 FR 26701 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-05-12

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... proposed compensation rates for Interstate TRS, Speech-to-Speech Services (STS), Captioned Telephone... costs reported in the data submitted to NECA by VRS providers. In this regard, document DA 10-761 also...

  13. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility
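    The kind of relationship this paper examines can be sketched as an ordinary least-squares fit from an instrumental quality score (e.g. a MOS-like value) to ASR word accuracy over a set of channel conditions. The score/accuracy pairs below are invented for illustration, not the paper's measurements:

```python
def least_squares(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical (quality score, word accuracy %) pairs per channel condition:
quality = [1.5, 2.0, 2.8, 3.3, 4.0, 4.5]
accuracy = [42.0, 55.0, 63.0, 71.0, 82.0, 88.0]

slope, intercept = least_squares(quality, accuracy)
predicted = slope * 3.0 + intercept  # predicted accuracy for an unseen score of 3.0
print(f"accuracy ~ {slope:.1f} * score + {intercept:.1f}; at score 3.0: {predicted:.1f}%")
```

    Once fitted on known channels, such a model lets one predict recognition performance on a new channel from its instrumental quality score alone.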

  14. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  15. 75 FR 54040 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-09-03

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...; speech-to-speech (STS); pay-per-call (900) calls; types of calls; and equal access to interexchange... of a report, due April 16, 2011, addressing whether it is necessary for the waivers to remain in...

  16. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the spatial and temporal statistics of the signal (i.e., speech) and the noise. As a result, automatic speech recognition (ASR) accuracy can be improved to a level at which crewmembers would find the speech interface useful. The developed speech human/machine interface supports both crewmember usability and operational efficiency: it offers fast data/text entry, a small overall size, and light weight, and it frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (hidden Markov model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, performance improves and the phoneme recognition accuracy rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and by using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed; they can help real-time ASR system designers select appropriate tasks when facing constraints in computational resources.
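The multichannel front end described in this record can be illustrated with a minimal delay-and-sum beamformer, the simplest form of spatial noise reduction. This is a sketch under strong simplifying assumptions (integer sample delays, known look direction), not the flight system's actual algorithm:

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Minimal delay-and-sum beamformer.

    signals: (n_channels, n_samples) array of microphone recordings.
    delays:  per-channel integer sample delays toward the look direction.
    Channels are time-aligned by undoing their delays and then averaged,
    so the speech adds coherently while spatially diffuse noise does not.
    """
    n_ch, _ = signals.shape
    aligned = [np.roll(signals[ch], -delays[ch]) for ch in range(n_ch)]
    return np.mean(aligned, axis=0)
```

Real systems estimate fractional delays and apply per-channel filters, but the principle (align, then average to trade spatial diversity for SNR) is the same.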

  17. Disturbance Hydrology: Preparing for an Increasingly Disturbed Future

    Science.gov (United States)

    Mirus, Benjamin B.; Ebel, Brian A.; Mohr, Christian H.; Zegre, Nicolas

    2017-12-01

This special issue is the result of several fruitful conference sessions on disturbance hydrology, which started at the 2013 AGU Fall Meeting in San Francisco and have continued every year since. The stimulating presentations and discussions surrounding those sessions have focused on understanding both the disruption of hydrologic functioning following discrete disturbances, as well as the subsequent recovery or change within the affected watershed system. Whereas some hydrologic disturbances are directly linked to anthropogenic activities, such as resource extraction, the contributions to this special issue focus primarily on those with indirect or less pronounced human involvement, such as bark-beetle infestation, wildfire, and other natural hazards. However, human activities are enhancing the severity and frequency of these seemingly natural disturbances, thereby contributing to acute hydrologic problems and hazards. Major research challenges for our increasingly disturbed planet include the lack of continuous pre- and post-disturbance monitoring, hydrologic impacts that vary spatially and temporally based on environmental and hydroclimatic conditions, and the preponderance of overlapping or compounding disturbance sequences. In addition, a conceptual framework for characterizing commonalities and differences among hydrologic disturbances is still in its infancy. In this introduction to the special issue, we advance the fusion of concepts and terminology from ecology and hydrology to begin filling this gap. We briefly explore some preliminary approaches for comparing different disturbances and their hydrologic impacts, which provides a starting point for further dialogue and research progress.

  18. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  19. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    Science.gov (United States)

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, electrodermal (skin conductance) responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase in listening effort when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  20. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  1. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  2. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
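The suppression step this record describes amounts to applying a per-bin gain to the noisy spectral amplitudes. The paper's MAP gain under a super-Gaussian prior is more involved; as a hedged, simplified stand-in, the sketch below uses a Wiener-style gain driven by a maximum-likelihood a priori SNR estimate (function name and spectral floor are illustrative, not from the paper):

```python
import numpy as np

def spectral_gain(noisy_power, noise_power, floor=0.1):
    """Per-frequency-bin suppression gain (Wiener-style stand-in,
    not the paper's super-Gaussian MAP estimator).

    noisy_power: |noisy DFT coefficient|^2 per bin.
    noise_power: estimated noise power per bin.
    Returns a gain in [floor, 1] that attenuates noise-dominated bins;
    the floor limits musical-noise artifacts from over-suppression.
    """
    snr = np.maximum(noisy_power / noise_power - 1.0, 0.0)  # ML a priori SNR
    gain = snr / (1.0 + snr)
    return np.maximum(gain, floor)
```

The enhanced spectrum is then `gain * noisy_spectrum`, followed by an inverse DFT with overlap-add.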

  3. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
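The isochronous retiming manipulation described above can be sketched as mapping the chosen anchor points (syllable onsets or envelope peaks) onto an equal-interval grid; the piecewise-linear warping of the waveform between anchors is omitted, and the function name is illustrative:

```python
def isochronous_grid(anchors):
    """Map anchor times (e.g., syllable onsets) to an isochronous grid.

    Keeps the first and last anchors fixed and spaces the rest equally,
    so overall duration is preserved while local timing becomes periodic.
    """
    n = len(anchors)
    start, end = anchors[0], anchors[-1]
    step = (end - start) / (n - 1)
    return [start + i * step for i in range(n)]
```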

  4. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for a noisy channel. For transmission over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with a split vector quantizer. We then apply the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 kbit/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmission over a noisy channel.
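The vector-quantization core of such an encoder can be illustrated with plain k-means (Lloyd) codebook training, a simplified stand-in for the stochastic, channel-optimized split VQ the paper actually proposes; names and parameters here are illustrative:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Train a k-entry VQ codebook with Lloyd iterations (k-means)."""
    vectors = np.asarray(vectors, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize codewords from randomly chosen training vectors
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        # nearest-codeword assignment (squared Euclidean distance)
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # centroid update; empty cells keep their previous codeword
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels
```

Quantizing a vector then means transmitting the index of its nearest codeword, at a cost of log2(k) bits; a channel-optimized VQ additionally shapes the index assignment so that likely bit errors map to nearby codewords.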

  5. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

An introduction is given to the anatomy and function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  6. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

Hwee Ling Lee

    2014-08-01

Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  7. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study; this validated test comprises six syllables with inter-stimulus intervals of 20-300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a manifestation of inconsistency in auditory perception, caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  9. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  10. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  11. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    Science.gov (United States)

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  12. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that improve the quality and intelligibility of degraded speech. They present powerful optimization methods for speech enhancement that can help solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, and how speech enhancement algorithms are implemented using optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey of the topic.

  13. Systematic Studies of Modified Vocalization: The Effect of Speech Rate on Speech Production Measures during Metronome-Paced Speech in Persons Who Stutter

    Science.gov (United States)

    Davidow, Jason H.

    2014-01-01

    Background: Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control…

  14. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    Science.gov (United States)

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discrete retainer. The device connects to a computer, tablet or smartphone via a Bluetooth connection. The developed TTS application converts text typed by the tongue into audible speech. Our studies concluded that an 8-contact-point configuration between the tongue and the TTS device yields the best user precision and speed performance. On average, typing a phrase with the TTS device inside the oral cavity takes 2.5 times longer than typing the same phrase with the index finger on a T9 (text on 9 keys) keyboard. In conclusion, we have developed a discrete noninvasive wearable device that allows vocally impaired individuals to communicate in real time.

  15. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  17. Free Speech Yearbook 1978.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The 17 articles in this collection deal with theoretical and practical freedom of speech issues. The topics include: freedom of speech in Marquette Park, Illinois; Nazis in Skokie, Illinois; freedom of expression in the Confederate States of America; Robert M. LaFollette's arguments for free speech and the rights of Congress; the United States…

  18. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  19. Multisensory integration of speech sounds with letters vs. visual speech : only visual speech induces the mismatch negativity

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Keetels, M.N.; Vroomen, J.H.M.

    2018-01-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect.

  20. Speech Research

    Science.gov (United States)

Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.

  1. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study...... on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice...... of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  2. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
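The Bark comparisons discussed above can be made concrete with one common approximation of the critical-band rate, Traunmüller's (1990) formula; the 3.5-Bark limit is the hypothesized integration bandwidth from the abstract, and the function names are illustrative:

```python
def hz_to_bark(f_hz):
    """Critical-band rate (Bark) via Traunmueller's (1990) approximation."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def within_integration_band(f1_hz, f2_hz, limit_bark=3.5):
    """True if two spectral components fall within the hypothesized
    3.5-Bark spectral-integration bandwidth."""
    return abs(hz_to_bark(f1_hz) - hz_to_bark(f2_hz)) <= limit_bark
```

Because the Hz-to-Bark map is compressive, a fixed 3.5-Bark window spans a much wider Hz interval at high frequencies than at low ones.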

  3. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). 1: CLCG, University of Groningen (NL); 2: HU University of Applied Sciences Utrecht (NL); 3: St. Marienhospital - Vechta, Geriatric Clinic Vechta

  4. Speech and language adverse effects after thalamotomy and deep brain stimulation in patients with movement disorders: A meta-analysis.

    Science.gov (United States)

    Alomar, Soha; King, Nicolas K K; Tam, Joseph; Bari, Ausaf A; Hamani, Clement; Lozano, Andres M

    2017-01-01

The thalamus has been a surgical target for the treatment of various movement disorders. Commonly used therapeutic modalities include ablative and nonablative procedures. A major clinical side effect of thalamic surgery is the appearance of speech problems. This review summarizes the data on the development of speech problems after thalamic surgery. A systematic review and meta-analysis was performed using nine databases, including Medline, Web of Science, and Cochrane Library. We also checked for articles by searching citing and cited articles. We retrieved studies published between 1960 and September 2014. Of a total of 2,320 patients, 19.8% (confidence interval: 14.8-25.9) had speech difficulty after thalamotomy. Speech difficulty occurred in 15% (confidence interval: 9.8-22.2) of those treated unilaterally and 40.6% (confidence interval: 29.5-52.8) of those treated bilaterally. Speech impairment was noticed 2- to 3-fold more commonly after left-sided procedures (40.7% vs. 15.2%). Of the 572 patients who underwent DBS, 19.4% (confidence interval: 13.1-27.8) experienced speech difficulty. Subgroup analysis revealed that this complication occurs in 10.2% (confidence interval: 7.4-13.9) of patients treated unilaterally and 34.6% (confidence interval: 21.6-50.4) of those treated bilaterally. After thalamotomy, the risk was higher in Parkinson's patients than in patients with essential tremor: 19.8% versus 4.5% in the unilateral group and 42.5% versus 13.9% in the bilateral group. After DBS, this rate was higher in essential tremor patients. Both lesioning and stimulation thalamic surgery produce adverse effects on speech. Left-sided and bilateral procedures are approximately 3-fold more likely to cause speech difficulty. This effect was higher after thalamotomy than after DBS. In the thalamotomy group, the risk was higher in Parkinson's patients, whereas in the DBS group it was higher in patients with essential tremor. Understanding the pathophysiology of speech
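The percentage-with-confidence-interval figures quoted in this record can be illustrated with a Wilson score interval for a single proportion. This is only a per-study sketch; an actual meta-analysis pools study-level estimates (e.g., with a random-effects model), and the function name is illustrative:

```python
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score confidence interval for a proportion events/n."""
    p = events / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] and behaves sensibly for small counts, which matters for rare complications.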

  5. Transience after disturbance: Obligate species recovery dynamics depend on disturbance duration.

    Science.gov (United States)

    Singer, Alexander; Johst, Karin

    2017-06-01

    After a disturbance event, population recovery becomes an important species response that drives ecosystem dynamics. Yet, it is unclear how interspecific interactions impact species recovery from a disturbance and which role the disturbance duration (pulse or press) plays. Here, we analytically derive conditions that govern the transient recovery dynamics from disturbance of a host and its obligately dependent partner in a two-species metapopulation model. We find that, after disturbance, species recovery dynamics depend on the species' role (i.e. host or obligately dependent species) as well as the duration of disturbance. Host recovery starts immediately after the disturbance. In contrast, for obligate species, recovery depends on disturbance duration. After press disturbance, which allows dynamics to equilibrate during disturbance, obligate species immediately start to recover. Yet, after pulse disturbance, obligate species continue declining although their hosts have already begun to increase. Effectively, obligate species recovery is delayed until a necessary host threshold occupancy is reached. Obligates' delayed recovery arises solely from interspecific interactions independent of dispersal limitations, which contests previous explanations. Delayed recovery exerts a two-fold negative effect, because populations continue declining to even smaller population sizes and the phase of increased risk from demographic stochastic extinction in small populations is prolonged. We argue that delayed recovery and its determinants, species interactions and disturbance duration, have to be considered in biodiversity management. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.
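The reliability figures quoted here (inter-rater k > .58, test-retest k > .68) are Cohen's kappa values, which correct raw agreement for agreement expected by chance. A minimal sketch of the computation, using hypothetical ratings on the scale's levels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Viking Speech Scale levels I-IV assigned by two hypothetical raters
a = ["I", "II", "II", "III", "IV", "I", "II", "III"]
b = ["I", "II", "III", "III", "IV", "I", "II", "II"]
kappa = cohens_kappa(a, b)
```

By the usual Landis-Koch labels, values above .60 are "substantial" agreement, matching the abstract's wording.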

  7. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    Science.gov (United States)

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…

  8. Quality of synthetic speech perceptual dimensions, influencing factors, and instrumental assessment

    CERN Document Server

    Hinterleitner, Florian

    2017-01-01

    This book reviews research towards perceptual quality dimensions of synthetic speech, compares these findings with the state of the art, and derives a set of five universal perceptual quality dimensions for TTS signals. They are: (i) naturalness of voice, (ii) prosodic quality, (iii) fluency and intelligibility, (iv) absence of disturbances, and (v) calmness. Moreover, a test protocol for the efficient identification of those dimensions in a listening test is introduced. Furthermore, several factors influencing these dimensions are examined. In addition, different techniques for the instrumental quality assessment of TTS signals are introduced, reviewed and tested. Finally, the requirements for the integration of an instrumental quality measure into a concatenative TTS system are examined.

  9. Speech enhancement using emotion dependent codebooks

    NARCIS (Netherlands)

    Naidu, D.H.R.; Srinivasan, S.

    2012-01-01

    Several speech enhancement approaches utilize trained models of clean speech data, such as codebooks, Gaussian mixtures, and hidden Markov models. These models are typically trained on neutral clean speech data, without any emotion. However, in practical scenarios, emotional speech is a common

  10. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener’s knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  11. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  12. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    Science.gov (United States)

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
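The model's core tenet, using word hypotheses to update probabilities linking the speaker's sounds to native phonemes, can be caricatured as a count-based conditional-probability update. Everything below (class and method names, the 1:1 sound-to-phoneme alignment) is our simplification for illustration, not the authors' model:

```python
from collections import defaultdict

class AccentAdapter:
    """Toy sketch: accepted word hypotheses update P(native phoneme | heard sound),
    so later words by the same accented speaker are recognized more reliably."""

    def __init__(self):
        # counts[sound][phoneme] = times this sound was hypothesized as this phoneme
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe_word(self, heard_sounds, hypothesized_phonemes):
        # Assume a 1:1 alignment for simplicity; the real model
        # would align sounds to phonemes probabilistically.
        for sound, phoneme in zip(heard_sounds, hypothesized_phonemes):
            self.counts[sound][phoneme] += 1

    def p(self, phoneme, sound):
        """Estimated P(phoneme | sound) from accumulated counts."""
        total = sum(self.counts[sound].values())
        return self.counts[sound][phoneme] / total if total else 0.0

adapter = AccentAdapter()
# A speaker's accented [z] repeatedly hypothesized as the native phoneme /s/
adapter.observe_word(["z", "i"], ["s", "i"])
adapter.observe_word(["z", "o"], ["s", "o"])
p = adapter.p("s", "z")
```

After two consistent observations, the adapter maps the accented [z] to /s/ with probability 1, which is the "improves the recognition of later words" effect in miniature.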

  13. Advocate: A Distributed Architecture for Speech-to-Speech Translation

    Science.gov (United States)

    2009-01-01

    tecture, are either wrapped natural-language processing (NLP) components or objects developed from scratch using the architecture’s API. GATE is… framework, we put together a demonstration Arabic-to-English speech translation system using both internally developed (Arabic speech recognition and MT… conditions of our Arabic S2S demonstration system described earlier. Once again, the data size was varied and eighty identical requests were…

  14. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions
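For context, the standard STI recipe converts each band's modulation transfer value m into a transmission index via an apparent signal-to-noise ratio clipped to ±15 dB; the full index is then a weighted average of these indices over octave bands and modulation frequencies (weights omitted here). A sketch of the per-band step:

```python
import math

def transmission_index(m):
    """Transmission index from one modulation transfer value m (0 < m < 1),
    following the standard STI recipe: apparent SNR = 10*log10(m/(1-m)),
    clipped to +/-15 dB, then mapped linearly onto [0, 1]."""
    snr = 10 * math.log10(m / (1 - m))
    snr = max(-15.0, min(15.0, snr))
    return (snr + 15.0) / 30.0

# Well-preserved modulations (m near 1) saturate at TI = 1; m = 0.5 gives TI = 0.5
ti_mid = transmission_index(0.5)
```

The STI then weights and averages the per-band indices; how those weights should change for non-native talkers or listeners is exactly the question this record raises.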

  15. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  16. Predicting speech intelligibility in adverse conditions: evaluation of the speech-based envelope power spectrum model

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

    The speech-based envelope power spectrum model (sEPSM) [Jørgensen and Dau (2011). J. Acoust. Soc. Am. 130 (3), 1475–1487] estimates the envelope signal-to-noise ratio (SNRenv) of distorted speech and accurately describes the speech recognition thresholds (SRT) for normal-hearing listeners. Here it is evaluated in adverse conditions by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426] where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility observed for the different interferers. None of the standardized models successfully describe these data.
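The sEPSM's decision metric, the envelope SNR, can be illustrated in a drastically simplified single-band form. The published model applies a modulation filterbank and further stages that are omitted here; this only shows the core idea of comparing envelope fluctuation power of speech against that of the interferer:

```python
import math

def envelope_power(env):
    """AC power of an amplitude envelope, normalized by its DC (mean) value,
    a common normalization in envelope-power models."""
    mean = sum(env) / len(env)
    return sum((e - mean) ** 2 for e in env) / len(env) / mean**2

def snr_env(speech_env, noise_env):
    """Envelope SNR in dB (simplified: no modulation filterbank, one band)."""
    return 10 * math.log10(envelope_power(speech_env) / envelope_power(noise_env))

# Strongly modulated speech envelope vs. a much flatter noise envelope
speech = [1.0, 0.2, 1.0, 0.2, 1.0, 0.2]
noise = [0.6, 0.5, 0.6, 0.5, 0.6, 0.5]
snr = snr_env(speech, noise)
```

Intuitively, interferers that flatten the speech envelope (or carry strong envelopes of their own, as competing speech does) reduce SNRenv and thus raise the predicted recognition threshold.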

  17. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  18. Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis

    Science.gov (United States)

    Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.

    2009-01-01

    Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…

  19. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  20. Preschool speech intelligibility and vocabulary skills predict long-term speech and language outcomes following cochlear implantation in early childhood.

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G; Beer, Jessica; Henning, Shirley C; Colson, Bethany G; Pisoni, David B

    2014-07-01

    Speech and language measures during grade school predict adolescent speech-language outcomes in children who receive cochlear implants (CIs), but no research has examined whether speech and language functioning at even younger ages is predictive of long-term outcomes in this population. The purpose of this study was to examine whether early preschool measures of speech and language performance predict speech-language functioning in long-term users of CIs. Early measures of speech intelligibility and receptive vocabulary (obtained during preschool ages of 3-6 years) in a sample of 35 prelingually deaf, early-implanted children predicted speech perception, language, and verbal working memory skills up to 18 years later. Age of onset of deafness and age at implantation added additional variance to preschool speech intelligibility in predicting some long-term outcome scores, but the relationship between preschool speech-language skills and later speech-language outcomes was not significantly attenuated by the addition of these hearing history variables. These findings suggest that speech and language development during the preschool years is predictive of long-term speech and language functioning in early-implanted, prelingually deaf children. As a result, measures of speech-language functioning at preschool ages can be used to identify and adjust interventions for very young CI users who may be at long-term risk for suboptimal speech and language outcomes.

  1. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    Science.gov (United States)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

    It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech using available standard assessment methods based on human perception. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. With the consideration of two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce consistent speech signals for a certain word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before actual exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a speech recognition rate predictor is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square of difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
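The abstract describes Ψ as a distance-based combination of speech consistency and speech distinction but does not give the formula. A hypothetical score in that spirit (our construction, not the authors'; the feature vectors, distance measure, and ratio are all illustrative assumptions) could look like:

```python
import math

def clarity_score(word_to_feats):
    """Hypothetical distance-based clarity score: low intra-word spread
    (consistency) and high inter-word separation (distinction).
    The authors' actual formula for the clarity index may differ."""
    words = list(word_to_feats)
    intra, inter = [], []
    for w in words:
        xs = word_to_feats[w]
        intra += [math.dist(a, b) for i, a in enumerate(xs) for b in xs[i + 1:]]
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            inter += [math.dist(a, b)
                      for a in word_to_feats[w1] for b in word_to_feats[w2]]
    consistency = sum(intra) / len(intra)   # smaller = more consistent
    distinction = sum(inter) / len(inter)   # larger = more distinct
    return distinction / consistency        # higher = clearer speech

# Toy 2-D "acoustic features": three repetitions of each of two words
data = {"yes": [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)],
        "no":  [(1.0, 1.0), (1.1, 1.0), (1.0, 1.1)]}
score = clarity_score(data)
```

A speaker whose repetitions of the same word cluster tightly while different words stay well separated scores high, which is the property the index is meant to capture.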

  2. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  3. Simultaneous natural speech and AAC interventions for children with childhood apraxia of speech: lessons from a speech-language pathologist focus group.

    Science.gov (United States)

    Oommen, Elizabeth R; McCarthy, John W

    2015-03-01

    In childhood apraxia of speech (CAS), children exhibit varying levels of speech intelligibility depending on the nature of errors in articulation and prosody. Augmentative and alternative communication (AAC) strategies are beneficial, and commonly adopted with children with CAS. This study focused on the decision-making process and strategies adopted by speech-language pathologists (SLPs) when simultaneously implementing interventions that focused on natural speech and AAC. Eight SLPs, with significant clinical experience in CAS and AAC interventions, participated in an online focus group. Thematic analysis revealed eight themes: key decision-making factors; treatment history and rationale; benefits; challenges; therapy strategies and activities; collaboration with team members; recommendations; and other comments. Results are discussed along with clinical implications and directions for future research.

  4. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within…

  5. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.

  6. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus Christiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer’s sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory short-term memory.

  7. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  8. Freedom of Speech Newsletter, September, 1975.

    Science.gov (United States)

    Allen, Winfred G., Jr., Ed.

    The Freedom of Speech Newsletter is the communication medium for the Freedom of Speech Interest Group of the Western Speech Communication Association. The newsletter contains such features as a statement of concern by the National Ad Hoc Committee Against Censorship; Reticence and Free Speech, an article by James F. Vickrey discussing the subtle…

  9. Automatic speech recognition used for evaluation of text-to-speech systems

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Nouza, J.; Vondra, Martin

    -, č. 5042 (2008), s. 136-148 ISSN 0302-9743 R&D Projects: GA AV ČR 1ET301710509; GA AV ČR 1QS108040569 Institutional research plan: CEZ:AV0Z20670512 Keywords : speech recognition * speech processing Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  10. SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support

    Directory of Open Access Journals (Sweden)

    Giampiero Salvi

    2009-01-01

    Full Text Available This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. Firstly, we describe the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser. Secondly, we report on speech intelligibility experiments with focus on multilinguality and robustness to audio quality. The system, already available for Swedish, English, and Flemish, was optimised for German and for Swedish wide-band speech quality available in TV, radio, and Internet communication. Lastly, the paper covers experiments with nonverbal motions driven from the speech signal. It is shown that turn-taking gestures can be used to affect the flow of human-human dialogues. We have focused specifically on two categories of cues that may be extracted from the acoustic signal: prominence/emphasis and interactional cues (turn-taking/back-channelling).

  11. The Effect of English Verbal Songs on Connected Speech Aspects of Adult English Learners’ Speech Production

    Directory of Open Access Journals (Sweden)

    Farshid Tayari Ashtiani

    2015-02-01

    Full Text Available The present study was an attempt to investigate the impact of English verbal songs on connected speech aspects of adult English learners’ speech production. 40 participants were selected based on their performance on a piloted and validated version of the NELSON test given to 60 intermediate English learners at a language institute in Tehran. They were then equally distributed into control and experimental groups and received a validated pretest of reading aloud and speaking in English. Afterward, the treatment was performed over 18 sessions by singing preselected songs chosen on criteria such as popularity, familiarity, and the amount and speed of speech delivery. At the end, posttests of reading aloud and speaking in English were administered. The results revealed that the treatment had statistically significant positive effects on the connected speech aspects of English learners’ speech production at the .05 level of significance. Meanwhile, there was no significant difference between the experimental group’s mean scores on the posttests of reading aloud and speaking. It was thus concluded that providing EFL learners with English verbal songs could positively affect connected speech aspects of both modes of speech production, reading aloud and speaking. The findings of this study have pedagogical implications for language teachers to be more aware and knowledgeable of the benefits of verbal songs to promote speech production of language learners in terms of naturalness and fluency. Keywords: English Verbal Songs, Connected Speech, Speech Production, Reading Aloud, Speaking

  12. A Measure of the Auditory-perceptual Quality of Strain from Electroglottographic Analysis of Continuous Dysphonic Speech: Application to Adductor Spasmodic Dysphonia.

    Science.gov (United States)

    Somanath, Keerthan; Mau, Ted

    2016-11-01

    (1) To develop an automated algorithm to analyze electroglottographic (EGG) signal in continuous dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Software development with application in a prospective controlled study. EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient, pulse width, a new parameter peak skew, and various contact closing slope quotient and contact opening slope quotient measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual subjects with ADSD. The standard deviations, but not the means, of contact quotient, EGGW50, peak skew, and SO7525 were different between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multidimensional assessment may lead to improved characterization of the voice disturbance in ADSD. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
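Several of the EGG parameters mentioned are simple cycle-level ratios. For instance, the contact quotient is the fraction of each glottal cycle during which the vocal folds are judged to be in contact; a criterion-level sketch is below (the 25% threshold ratio is a common convention in the EGG literature, not necessarily the one used by this study's algorithm):

```python
def contact_quotient(cycle, threshold_ratio=0.25):
    """Contact quotient of one EGG cycle: the fraction of samples above a
    threshold set at threshold_ratio of the cycle's peak-to-peak amplitude.
    Criterion-level definitions like this vary across studies."""
    lo, hi = min(cycle), max(cycle)
    threshold = lo + threshold_ratio * (hi - lo)
    return sum(s > threshold for s in cycle) / len(cycle)

# Synthetic EGG cycle: vocal-fold contact (high signal) for 4 of 10 samples
cycle = [0.9, 1.0, 0.9, 0.8, 0.1, 0.0, 0.0, 0.1, 0.0, 0.1]
cq = contact_quotient(cycle)
```

A full pipeline for continuous speech, as the study's algorithm implements, additionally has to segment the EGG signal into cycles and reject unvoiced or irregular stretches before computing per-cycle statistics such as the means and standard deviations reported here.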

  13. An analysis of the masking of speech by competing speech using self-report data (L)

    OpenAIRE

    Agus, Trevor R.; Akeroyd, Michael A.; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the “Speech, Spatial, and Qualities of Hearing” scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85–99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively ...

  14. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  15. Speech Entrainment Compensates for Broca's Area Damage

    Science.gov (United States)

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
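
    The speech output measure described above, number of different words per minute, is straightforward to compute from a transcript and its duration, with fluency improvement taken as the SE rate minus the spontaneous rate. A minimal sketch (the tokenization rule and the toy transcripts are illustrative assumptions, not the study's materials):

```python
import re

def different_words_per_minute(transcript, duration_seconds):
    """Count distinct words in a transcript, scaled to a per-minute rate."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return len(set(words)) / (duration_seconds / 60.0)

# Toy example: fluency improvement = SE rate minus spontaneous rate
spontaneous = "uh the the man uh man walks"           # 4 distinct words
entrained = "the man walks his dog down the quiet street"  # 8 distinct words
improvement = (different_words_per_minute(entrained, 10.0)
               - different_words_per_minute(spontaneous, 10.0))
```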

  16. Lingual nerve injury following surgical removal of mandibular third molar

    Directory of Open Access Journals (Sweden)

    Abduljaleel Azad Samad

    2017-12-01

    Full Text Available Background and objective: The close proximity of the lingual nerve to the lingual cortical bone in the posterior mandibular third molar region is clinically important because the lingual nerve may be subjected to trauma during surgical removal of an impacted lower third molar. This prospective study aimed to evaluate the incidence of lingual nerve paresthesia following surgical removal of mandibular third molars in the College of Dentistry, Hawler Medical University. Methods: A total of 116 third molar surgeries were carried out under local anesthesia in 116 patients for removal of lower third molars. Terence Ward's incision was used in all cases; after the buccal flap was reflected, the lingual tissues were retracted with a periosteal elevator during bone removal. Sensory disturbance was evaluated on the 7th postoperative day by asking the patients a standard question: “Do you have any unusual feeling in your tongue, lingual gingiva or the mucosa of the floor of the mouth?" Results: One patient experienced sensory disturbance, giving a lingual nerve paresthesia incidence of 0.9%; the disturbance was transient, and no patient had a permanent sensory disturbance. Conclusion: The incidence of injury to the lingual nerve can be minimized by careful clinical evaluation, the surgeon’s experience, an appropriate surgical approach, and knowledge of the anatomical landmarks during surgical removal of an impacted lower third molar tooth.

  17. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate.

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-06-01

    Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.

  18. A NOVEL APPROACH TO STUTTERED SPEECH CORRECTION

    Directory of Open Access Journals (Sweden)

    Alim Sabur Ajibola

    2016-06-01

    Full Text Available Stuttered speech is dysfluency-rich speech, more prevalent in males than females. It has been associated with insufficient air pressure or poor articulation, even though the root causes are more complex. The primary features include prolonged speech and repetitive speech, while some of its secondary features include anxiety, fear, and shame. This study used LPC analysis and synthesis algorithms to reconstruct the stuttered speech. The results were evaluated using cepstral distance, Itakura-Saito distance, mean square error, and likelihood ratio. These measures implied perfect speech reconstruction quality. ASR was used for further testing, and the results showed that all the reconstructed speech samples were perfectly recognized, while only three samples of the original speech were perfectly recognized.
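
    The LPC analysis/synthesis pipeline this abstract refers to can be sketched in a few lines: estimate predictor coefficients by the autocorrelation method, inverse-filter the signal to obtain the residual, and drive the all-pole synthesis filter with that residual. This is a generic textbook sketch under simplifying assumptions (one frame, no windowing or pre-emphasis, single-loop filtering), not the study's implementation:

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients via the autocorrelation (normal-equations) method."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

def analyze_resynthesize(x, a):
    """Inverse-filter to get the prediction residual, then resynthesize
    through the all-pole filter; exact residual excitation recovers x."""
    p = len(a)
    e = np.zeros_like(x)
    y = np.zeros_like(x)
    for n in range(len(x)):  # analysis: e[n] = x[n] - prediction from past x
        pred = sum(a[k] * x[n - 1 - k] for k in range(p) if n - 1 - k >= 0)
        e[n] = x[n] - pred
    for n in range(len(x)):  # synthesis: all-pole filter driven by e
        y[n] = e[n] + sum(a[k] * y[n - 1 - k] for k in range(p) if n - 1 - k >= 0)
    return e, y

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.05 * np.arange(200)) + 0.01 * rng.standard_normal(200)
a = lpc(x, 8)
e, y = analyze_resynthesize(x, a)
mse = np.mean((x - y) ** 2)  # near zero: exact-residual resynthesis is lossless
```

In the actual correction setting, the residual would be modified (e.g., smoothed or re-timed) before resynthesis; distances such as the Itakura-Saito measure then compare original and reconstructed spectra.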

  19. Prisoner Fasting as Symbolic Speech: The Ultimate Speech-Action Test.

    Science.gov (United States)

    Sneed, Don; Stonecipher, Harry W.

    The ultimate test of the speech-action dichotomy, as it relates to symbolic speech to be considered by the courts, may be the fasting of prison inmates who use hunger strikes to protest the conditions of their confinement or to make political statements. While hunger strikes have been utilized by prisoners for years as a means of protest, it was…

  20. Childhood apraxia of speech and multiple phonological disorders in Cairo-Egyptian Arabic speaking children: language, speech, and oro-motor differences.

    Science.gov (United States)

    Aziz, Azza Adel; Shohdi, Sahar; Osman, Dalia Mostafa; Habib, Emad Iskander

    2010-06-01

    Childhood apraxia of speech is a neurological childhood speech-sound disorder in which the precision and consistency of movements underlying speech are impaired in the absence of neuromuscular deficits. Children with childhood apraxia of speech and those with multiple phonological disorder share some common phonological errors that can be misleading in diagnosis. This study asked whether there is a significant difference in language, speech and non-speech oral performance between children with childhood apraxia of speech, children with multiple phonological disorder, and typically developing children that can be used for differential diagnosis. Thirty preschool children between the ages of 4 and 6 years served as participants. Each child belonged to one of three subject groups: Group 1: multiple phonological disorder; Group 2: suspected cases of childhood apraxia of speech; Group 3: control group with no communication disorder. Assessment procedures included parent interviews, testing of non-speech oral motor skills and testing of speech skills. Data showed that children with suspected childhood apraxia of speech had significantly lower language scores only in their expressive abilities. Non-speech tasks did not identify significant differences between the childhood apraxia of speech and multiple phonological disorder groups except for those which required two sequential motor performances. In speech tasks, both consonant and vowel accuracy were significantly lower and more inconsistent in the childhood apraxia of speech group than in the multiple phonological disorder group. Syllable number, shape and sequence accuracy differed significantly between the childhood apraxia of speech group and the other two groups. In addition, children with childhood apraxia of speech showed greater difficulty in processing prosodic features, indicating a clear need to address these variables in the differential diagnosis and treatment of children with childhood apraxia of speech.

  1. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is the unexplained individual differences in speech perception performance even for individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due to either examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: the first is to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions. The second aim is to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics both across tasks and across sessions; and finally, to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing impaired listeners.

  2. Collective speech acts

    NARCIS (Netherlands)

    Meijers, A.W.M.; Tsohatzidis, S.L.

    2007-01-01

    From its early development in the 1960s, speech act theory always had an individualistic orientation. It focused exclusively on speech acts performed by individual agents. Paradigmatic examples are ‘I promise that p’, ‘I order that p’, and ‘I declare that p’. There is a single speaker and a single

  3. Commencement Speech as a Hybrid Polydiscursive Practice

    Directory of Open Access Journals (Sweden)

    Светлана Викторовна Иванова

    2017-12-01

    Full Text Available Discourse and media communication researchers have noted that popular discursive and communicative practices tend toward hybridization and convergence. Discourse, understood as language in use, is flexible; consequently, one and the same text can represent several types of discourse. A vivid example of this tendency is the American commencement speech / commencement address / graduation speech. A commencement speech is a speech addressed to university graduates which, in line with the modern trend, is delivered by outstanding media personalities (politicians, athletes, actors, etc.). The objective of this study is to define the specificity of the realization of polydiscursive practices within commencement speech. The research involves discursive, contextual, stylistic and definitive analyses. Methodologically, the study is based on discourse analysis theory; in particular, the notion of a discursive practice as a verbalized social practice makes up the conceptual basis of the research. The research draws upon a hundred commencement speeches delivered by prominent representatives of American society from the 1980s until now. In brief, commencement speech belongs to the institutional discourse that public speech embodies. Its institutional parameters are well represented in speeches delivered by people in power, such as American and university presidents. Nevertheless, as the results of the research indicate, the institutional character of commencement speech is not its only feature. Conceptual information analysis makes it possible to refer commencement speech to didactic discourse, as it is aimed at teaching university graduates how to deal with the challenges life is rich in. Discursive practices of personal discourse are also actively integrated into commencement speech discourse. More than that, existential discursive practices also find their way into the discourse under study. Commencement

  4. The effectiveness of Speech-Music Therapy for Aphasia (SMTA) in five speakers with Apraxia of Speech and aphasia

    NARCIS (Netherlands)

    Hurkmans, Joost; Jonkers, Roel; de Bruijn, Madeleen; Boonstra, Anne M.; Hartman, Paul P.; Arendzen, Hans; Reinders - Messelink, Heelen

    2015-01-01

    Background: Several studies using musical elements in the treatment of neurological language and speech disorders have reported improvement of speech production. One such programme, Speech-Music Therapy for Aphasia (SMTA), integrates speech therapy and music therapy (MT) to treat the individual with

  5. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    2016-08-26

    ; speech-to-speech translation; language identification. ... interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers.

  6. Do long-term tongue piercings affect speech quality?

    Science.gov (United States)

    Heinen, Esther; Birkholz, Peter; Willmes, Klaus; Neuschaefer-Rube, Christiane

    2017-10-01

    To explore possible effects of tongue piercing on perceived speech quality. Using a quasi-experimental design, we analyzed the effect of tongue piercing on speech in a perception experiment. Samples of spontaneous speech and read speech were recorded from 20 long-term pierced and 20 non-pierced individuals (10 males, 10 females each). The individuals having a tongue piercing were recorded with attached and removed piercing. The audio samples were blindly rated by 26 female and 20 male laypersons and by 5 female speech-language pathologists with regard to perceived speech quality along 5 dimensions: speech clarity, speech rate, prosody, rhythm and fluency. We found no statistically significant differences for any of the speech quality dimensions between the pierced and non-pierced individuals, neither for the read nor for the spontaneous speech. In addition, neither length nor position of piercing had a significant effect on speech quality. The removal of tongue piercings had no effects on speech performance either. Rating differences between laypersons and speech-language pathologists were not dependent on the presence of a tongue piercing. People are able to perfectly adapt their articulation to long-term tongue piercings such that their speech quality is not perceptually affected.

  7. Patterns of Post-Stroke Brain Damage that Predict Speech Production Errors in Apraxia of Speech and Aphasia Dissociate

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-01-01

    Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457

  8. Progressive apraxia of speech as a window into the study of speech planning processes.

    Science.gov (United States)

    Laganaro, Marina; Croisier, Michèle; Bagou, Odile; Assal, Frédéric

    2012-09-01

    We present a 3-year follow-up study of a patient with progressive apraxia of speech (PAoS), aimed at investigating whether the theoretical organization of phonetic encoding is reflected in the progressive disruption of speech. As decreased speech rate was the most striking pattern of disruption during the first 2 years, durational analyses were carried out longitudinally on syllables excised from spontaneous, repetition and reading speech samples. The crucial result of the present study is the demonstration of an effect of syllable frequency on duration: the progressive disruption of articulation rate did not affect all syllables in the same way, but followed a gradient that was function of the frequency of use of syllable-sized motor programs. The combination of data from this case of PAoS with previous psycholinguistic and neurolinguistic data, points to a frequency organization of syllable-sized speech-motor plans. In this study we also illustrate how studying PAoS can be exploited in theoretical and clinical investigations of phonetic encoding as it represents a unique opportunity to investigate speech while it progressively disrupts. Copyright © 2011 Elsevier Srl. All rights reserved.

  9. Musicians do not benefit from differences in fundamental frequency when listening to speech in competing speech backgrounds

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; Whiteford, Kelly L.; Oxenham, Andrew J.

    2017-01-01

    Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. However, it has been suggested that musicians may be able to use differences in fundamental frequency (F0) to better understand target speech in the presence of interfering talkers....... Here we studied a relatively large (N=60) cohort of young adults, equally divided between nonmusicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech...... were presented with either their natural F0 contours or on a monotone F0, and the F0 difference between the target and masker was systematically varied. As expected, speech intelligibility improved with increasing F0 difference between the target and the two-talker masker for both natural and monotone...

  10. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech Recognition describes approaches to improving automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first-ranked Arabic dialect in terms of number of speakers, and a high-quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...

  11. Speech and Communication Disorders

    Science.gov (United States)

    ... to being completely unable to speak or understand speech. Causes include Hearing disorders and deafness Voice problems, ... or those caused by cleft lip or palate Speech problems like stuttering Developmental disabilities Learning disorders Autism ...

  12. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  13. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: Introduction

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

  14. A speech production model including the nasal Cavity: A novel approach to articulatory analysis of speech signals

    DEFF Research Database (Denmark)

    Olesen, Morten

    In order to obtain an articulatory analysis of speech production, the model is improved. The standard model, as used in LPC analysis, to a large extent only models the acoustic properties of the speech signal, as opposed to articulatory modelling of speech production. In spite of this, the LPC model...... is by far the most widely used model in speech technology....

  15. Successful and rapid response of speech bulb reduction program combined with speech therapy in velopharyngeal dysfunction: a case report.

    Science.gov (United States)

    Shin, Yu-Jeong; Ko, Seung-O

    2015-12-01

    Velopharyngeal dysfunction in cleft palate patients following primary palate repair may result in nasal air emission, hypernasality, articulation disorder and poor intelligibility of speech. Among conservative treatment methods, a speech aid prosthesis combined with speech therapy is widely used. However, because the treatment takes a long time, more than a year, and has low predictability, some clinicians prefer a surgical intervention. Thus, the purpose of this report was to draw attention to the effectiveness of speech aid prostheses by introducing a case that was successfully treated. In this clinical report, a speech bulb reduction program with intensive speech therapy was applied for a patient with velopharyngeal dysfunction, and treatment was completed in 5 months, an unusually short period for speech aid therapy. Furthermore, the advantages of preoperative speech aid therapy are discussed.

  16. Speech Intelligibility Evaluation for Mobile Phones

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Cubick, Jens; Dau, Torsten

    2015-01-01

    In the development process of modern telecommunication systems, such as mobile phones, it is common practice to use computer models to objectively evaluate the transmission quality of the system, instead of time-consuming perceptual listening tests. Such models have typically focused on the quality...... of the transmitted speech, while little or no attention has been provided to speech intelligibility. The present study investigated to what extent three state-of-the art speech intelligibility models could predict the intelligibility of noisy speech transmitted through mobile phones. Sentences from the Danish...... Dantale II speech material were mixed with three different kinds of background noise, transmitted through three different mobile phones, and recorded at the receiver via a local network simulator. The speech intelligibility of the transmitted sentences was assessed by six normal-hearing listeners...

  17. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  18. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  19. Radiological evaluation of esophageal speech on total laryngectomee

    International Nuclear Information System (INIS)

    Chung, Tae Sub; Suh, Jung Ho; Kim, Dong Ik; Kim, Gwi Eon; Hong, Won Phy; Lee, Won Sang

    1988-01-01

    A total laryngectomee requires some form of alaryngeal speech for communication. Generally, esophageal speech is regarded as the most available and comfortable technique for alaryngeal speech. But esophageal speech is difficult to train, so many patients are unable to attain esophageal speech for communication. To understand the mechanism of esophageal speech in the total laryngectomee, evaluation of anatomical changes at the pharyngoesophageal segment is very important. We used video fluoroscopy for evaluation of the pharyngoesophageal segment during esophageal speech. Eighteen total laryngectomees were evaluated with video fluoroscopy from Dec. 1986 to May 1987 at Y.U.M.C. Our results were as follows: 1. The pseudoglottis is the most important factor for esophageal speech; it was visualized in 7 of the 8 cases in the excellent esophageal speech group. 2. The two cases with a longer A-P diameter at the pseudoglottis had better quality of esophageal speech than the others. 3. Two cases with mucosal vibration at the pharyngoesophageal segment could produce excellent esophageal speech. 4. The causes of failed esophageal speech were poor aerophagia in 6 cases, absence of a pseudoglottis in 4 cases and poor air ejection in 3 cases. 5. Aerophagia synchronized with diaphragmatic motion in 8 cases of excellent esophageal speech.

  20. Automatic Speech Recognition Systems for the Evaluation of Voice and Speech Disorders in Head and Neck Cancer

    Directory of Open Access Journals (Sweden)

    Andreas Maier

    2010-01-01

    Full Text Available In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appropriate for objective and quick evaluation of intelligibility. In this study we investigate the applicability of the method to speech disorders caused by head and neck cancer. Intelligibility was quantified by speech recognition on recordings of a standard text read by 41 German laryngectomized patients with cancer of the larynx or hypopharynx and 49 German patients who had suffered from oral cancer. The speech recognizer reports the percentage of correctly recognized words in a sequence, that is, the word recognition rate. Automatic evaluation was compared to perceptual ratings by a panel of experts and to an age-matched control group. Both patient groups showed significantly lower word recognition rates than the control group. Word recognition rates from automatic speech recognition agreed significantly with the experts' evaluation of intelligibility. Automatic speech recognition thus provides a low-effort means to objectify and quantify the most important aspect of pathologic speech: its intelligibility. The system was successfully applied to voice and speech disorders.
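    The word recognition rate described above is straightforward to compute. A minimal sketch follows; it uses a standard Levenshtein alignment to count correctly recognized words, which is an illustrative assumption rather than the cited system's actual implementation:

```python
def word_recognition_rate(reference, hypothesis):
    """Percentage of reference words correctly recognized (illustrative
    sketch: Levenshtein alignment between word sequences, then a count
    of exact matches along the optimal alignment path)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to align ref[:i] with hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    # backtrack to count exactly matched words
    i, j, matches = len(ref), len(hyp), 0
    while i > 0 and j > 0:
        if ref[i - 1] == hyp[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            matches += 1
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j - 1] + 1:
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return 100.0 * matches / len(ref)
```

    For example, a recognizer that substitutes one of three reference words yields a word recognition rate of about 66.7%.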

  1. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia...... and inaccuracies in the anaesthesia record. Supplementing the electronic anaesthesia record interface with speech input facilities is proposed as one possible solution to a part of the problem. The testing of the various hypotheses has involved the development of a prototype of an electronic anaesthesia record...... interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition...

  2. 78 FR 63152 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2013-10-23

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... for telecommunications relay services (TRS) by eliminating standards for Internet-based relay services... comments, identified by CG Docket No. 03-123, by any of the following methods: Electronic Filers: Comments...

  3. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles, Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high stress, high workload environment when responding to the alert. Furthermore, crews receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth, where ground operators can assist as needed. On long duration missions, however, crews will need to handle off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09) funded by the Human Research Program included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts with and without a speech component, either at the beginning or at the end of the tone, can be identified. Even though crew were familiar with the tone alerts from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA). In the laboratory study, the alerts were presented by software and participants were

  4. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  5. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, through investigation of the tower of Babel, the archetypal phonemes, and a study of the reasons for the uses of language, is undertaken in order to create an artistic work investigating the nature of speech. ... The artwork is presented at the Re:New festival in May 2008.

  6. A Clinician Survey of Speech and Non-Speech Characteristics of Neurogenic Stuttering

    Science.gov (United States)

    Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F.

    2008-01-01

    This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for…

  7. Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders

    CERN Document Server

    Baghai-Ravary, Ladan

    2013-01-01

    Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders provides a survey of methods designed to aid clinicians in the diagnosis and monitoring of speech disorders such as dysarthria and dyspraxia, with an emphasis on the signal processing techniques, statistical validity of the results presented in the literature, and the appropriateness of methods that do not require specialized equipment, rigorously controlled recording procedures or highly skilled personnel to interpret results. Such techniques offer the promise of a simple and cost-effective, yet objective, assessment of a range of medical conditions, which would be of great value to clinicians. The ideal scenario would begin with the collection of examples of the clients’ speech, either over the phone or using portable recording devices operated by non-specialist nursing staff. The recordings could then be analyzed initially to aid diagnosis of conditions, and subsequently to monitor the clients’ progress and res...

  8. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
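    A temporal modulation spectrum of the kind discussed above can be approximated with a short sketch. The envelope extraction here (RMS over 10 ms frames) and the DC removal are illustrative assumptions, not the paper's exact analysis pipeline:

```python
import numpy as np

def modulation_spectrum(signal, fs, frame_ms=10):
    """Sketch of a temporal modulation spectrum: compute a sound-intensity
    envelope (RMS per short frame), then the power spectrum of the
    mean-removed envelope. Frame length and normalization are illustrative."""
    frame = max(1, int(fs * frame_ms / 1000))
    n = len(signal) // frame
    # intensity envelope: one RMS value per frame (envelope rate = 1000/frame_ms Hz)
    env = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    env = env - env.mean()  # remove DC so the 0 Hz bin does not dominate
    spec = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(n, d=frame / fs)  # modulation frequencies in Hz
    return freqs, spec
```

    Applied to a tone amplitude-modulated at 4 Hz, the spectrum peaks near 4 Hz, which is the sense in which speech (peak near 5 Hz) and music (peak near 2 Hz) separate in this representation.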

  9. The effects of Thalamic Deep Brain Stimulation on speech dynamics in patients with Essential Tremor: An articulographic study.

    Directory of Open Access Journals (Sweden)

    Doris Mücke

    Full Text Available Acoustic studies have revealed that patients with Essential Tremor treated with thalamic Deep Brain Stimulation (DBS) may suffer from speech deterioration in terms of imprecise oral articulation and reduced voicing control. Based on the acoustic signal alone, however, one cannot infer whether this deterioration is due to a general slowing down of the speech motor system (e.g., a target undershoot of a desired articulatory goal resulting from being too slow) or to disturbed coordination (e.g., a target undershoot caused by problems with the relative phasing of articulatory movements). To elucidate this issue further, we investigated both the acoustics and the articulatory patterns of the labial and lingual system using Electromagnetic Articulography (EMA) in twelve Essential Tremor patients treated with thalamic DBS and twelve age- and sex-matched controls. By comparing patients with activated (DBS-ON) and inactivated stimulation (DBS-OFF) with control speakers, we show that critical changes in speech dynamics occur on two levels: with inactivated stimulation (DBS-OFF), patients showed coordination problems of the labial and lingual system in terms of articulatory imprecision and slowness. These effects of articulatory discoordination worsened under activated stimulation, accompanied by an additional overall slowing down of the speech motor system. This leads to a poor performance of syllables on the acoustic surface, reflecting an aggravation of pre-existing cerebellar deficits and/or involvement of the upper motor fibers of the internal capsule.

  10. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  11. Dysfluencies in the speech of adults with intellectual disabilities and reported speech difficulties.

    Science.gov (United States)

    Coppens-Hofman, Marjolein C; Terband, Hayo R; Maassen, Ben A M; van Schrojenstein Lantman-De Valk, Henny M J; van Zaalen-op't Hof, Yvonne; Snik, Ad F M

    2013-01-01

    In individuals with an intellectual disability, speech dysfluencies are more common than in the general population. In clinical practice, these fluency disorders are generally diagnosed and treated as stuttering rather than cluttering. The aim was to characterise the type of dysfluencies in adults with intellectual disabilities and reported speech difficulties, with an emphasis on manifestations of stuttering and cluttering, a distinction intended to help optimise treatment aimed at improving fluency and intelligibility. The dysfluencies in the spontaneous speech of 28 adults (18-40 years; 16 men) with mild and moderate intellectual disabilities (IQs 40-70), who were characterised as poorly intelligible by their caregivers, were analysed using the speech norms for typically developing adults and children. The speakers were subsequently assigned to different diagnostic categories by relating their resulting dysfluency profiles to mean articulatory rate and articulatory rate variability. Twenty-two (75%) of the participants showed clinically significant dysfluencies, of which 21% were classified as cluttering, 29% as cluttering-stuttering and 25% as clear cluttering at normal articulatory rate. The characteristic pattern of stuttering did not occur. The dysfluencies in the speech of adults with intellectual disabilities and poor intelligibility show patterns that are specific for this population. Together, the results suggest that in this specific group of dysfluent speakers interventions should be aimed at cluttering rather than stuttering. The reader will be able to (1) describe patterns of dysfluencies in the speech of adults with intellectual disabilities that are specific for this group of people, (2) explain that a high rate of dysfluencies in speech is potentially a major determiner of poor intelligibility in adults with ID and (3) describe suggestions for intervention focusing on cluttering rather than stuttering in dysfluent speakers with ID. Copyright © 2013 Elsevier Inc.
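    The record above relates dysfluency profiles to mean articulatory rate and its variability. A minimal sketch of such measures follows; the per-utterance syllable counting and the use of standard deviation as the variability measure are illustrative assumptions, not the study's exact procedure:

```python
def articulatory_rate_stats(syllable_counts, durations_s):
    """Mean articulatory rate (syllables per second) across utterances,
    plus its standard deviation as a simple variability measure.
    Inputs: per-utterance syllable counts and speaking durations in seconds."""
    rates = [s / d for s, d in zip(syllable_counts, durations_s)]
    mean = sum(rates) / len(rates)
    variance = sum((r - mean) ** 2 for r in rates) / len(rates)
    return mean, variance ** 0.5
```

    For instance, utterances of 10 and 12 syllables spoken in 2 s each give rates of 5 and 6 syll/s, so a mean of 5.5 and a standard deviation of 0.5.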

  12. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: III. Theoretical Coherence of the Pause Marker with Speech Processing Deficits in Childhood Apraxia of Speech

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method: PM and other…

  13. Speech and language support: How physicians can identify and treat speech and language delays in the office setting.

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise and paediatric collaboration, key content for an office-based tool was developed to support: early and accurate identification of speech and language delays, as well as of children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society's Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children's speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation.

  14. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  15. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red
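    As one illustration of the single-microphone noise-reduction methods such a book surveys, here is a minimal magnitude spectral-subtraction sketch. This is a classic textbook technique, not presented as the book's own method, and every parameter (FFT size, hop, spectral floor) is illustrative:

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, n_fft=512, hop=256, floor=0.01):
    """Classic magnitude spectral subtraction (illustrative sketch):
    estimate the average noise magnitude spectrum from a noise-only
    segment, subtract it frame by frame from the noisy magnitude,
    keep the noisy phase, and overlap-add the result."""
    win = np.hanning(n_fft)
    # average noise magnitude spectrum over windowed noise-only frames
    noise_frames = [np.abs(np.fft.rfft(win * noise_only[i:i + n_fft]))
                    for i in range(0, len(noise_only) - n_fft + 1, hop)]
    noise_mag = np.mean(noise_frames, axis=0)
    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - n_fft + 1, hop):
        spec = np.fft.rfft(win * noisy[i:i + n_fft])
        # subtract the noise estimate, flooring to avoid negative magnitudes
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        out[i:i + n_fft] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n_fft)
    return out
```

    Fed a noise-only input together with a matching noise estimate, the method removes most of the signal energy; its well-known drawback, "musical noise" from the residual spectral fluctuations, is one motivation for the more advanced estimators the book covers.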

  16. Effect of speech rate variation on acoustic phone stability in Afrikaans speech recognition

    CSIR Research Space (South Africa)

    Badenhorst, JAC

    2007-11-01

    Full Text Available The authors analyse the effect of speech rate variation on Afrikaans phone stability from an acoustic perspective. Specifically they introduce two techniques for the acoustic analysis of speech rate variation, apply these techniques to an Afrikaans...

  17. Speech, "Inner Speech," and the Development of Short-Term Memory: Effects of Picture-Labeling on Recall.

    Science.gov (United States)

    Hitch, Graham J.; And Others

    1991-01-01

    Reports on experiments to determine effects of overt speech on children's use of inner speech in short-term memory. Word length and phonemic similarity had greater effects on older children and when pictures were labeled at presentation. Suggests that speaking or listening to speech activates an internal articulatory loop. (Author/GH)

  18. Phonetic recalibration of speech by text

    NARCIS (Netherlands)

    Keetels, M.N.; Schakel, L.; de Bonte, M.; Vroomen, J.

    2016-01-01

    Listeners adjust their phonetic categories to cope with variations in the speech signal (phonetic recalibration). Previous studies have shown that lipread speech (and word knowledge) can adjust the perception of ambiguous speech and can induce phonetic adjustments (Bertelson, Vroomen, & de Gelder in

  19. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    on speech production characteristics, but also helps in accurate analysis of speech. .... include time delay estimation, speech enhancement from single and multi- ...... log( E[k] / ∑_{l=0}^{K−1} E[l] ), (7) where K is the number of samples in the ...
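    The truncated record above contains what appears to be a normalized log-energy expression, log(E[k] / Σ_{l=0}^{K−1} E[l]). A minimal sketch of that computation follows; the definition of E[k] (e.g., per-sample or per-frame energy) and any windowing are assumptions, since the record is cut off:

```python
import math

def normalized_log_energy(E):
    """Log of each energy value relative to the total energy, i.e.
    log(E[k] / sum(E)). E is assumed to be a sequence of positive
    energy values; how they are obtained from the signal is not
    specified in the truncated record."""
    total = sum(E)
    return [math.log(e / total) for e in E]
```

    Because the values are normalized by the total before the log, the result is invariant to overall signal gain, which is the usual motivation for this form of measure.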

  20. Automatic Speech Recognition Systems for the Evaluation of Voice and Speech Disorders in Head and Neck Cancer

    OpenAIRE

    Andreas Maier; Tino Haderlein; Florian Stelzle; Elmar Nöth; Emeka Nkenke; Frank Rosanowski; Anne Schützenberger; Maria Schuster

    2010-01-01

    In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appropriate for objective and quick evaluation of intelligibility. In this study we investigate the applicability of the method to speech disorders caused by head and neck cancer. Intelligibility was quantified by speech recognition on recordings of a standard text read by 41 German laryngect...

  1. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300-some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  2. Speech-Language Dissociations, Distractibility, and Childhood Stuttering

    Science.gov (United States)

    Conture, Edward G.; Walden, Tedra A.; Lambert, Warren E.

    2015-01-01

    Purpose This study investigated the relation among speech-language dissociations, attentional distractibility, and childhood stuttering. Method Participants were 82 preschool-age children who stutter (CWS) and 120 who do not stutter (CWNS). Correlation-based statistics (Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003) identified dissociations across 5 norm-based speech-language subtests. The Behavioral Style Questionnaire Distractibility subscale measured attentional distractibility. Analyses addressed (a) between-groups differences in the number of children exhibiting speech-language dissociations; (b) between-groups distractibility differences; (c) the relation between distractibility and speech-language dissociations; and (d) whether interactions between distractibility and dissociations predicted the frequency of total, stuttered, and nonstuttered disfluencies. Results More preschool-age CWS exhibited speech-language dissociations compared with CWNS, and more boys exhibited dissociations compared with girls. In addition, male CWS were less distractible than female CWS and female CWNS. For CWS, but not CWNS, less distractibility (i.e., greater attention) was associated with more speech-language dissociations. Last, interactions between distractibility and dissociations did not predict speech disfluencies in CWS or CWNS. Conclusions The present findings suggest that for preschool-age CWS, attentional processes are associated with speech-language dissociations. Future investigations are warranted to better understand the directionality of effect of this association (e.g., inefficient attentional processes → speech-language dissociations vs. inefficient attentional processes ← speech-language dissociations). PMID:26126203

  3. International aspirations for speech-language pathologists' practice with multilingual children with speech sound disorders: development of a position paper.

    Science.gov (United States)

    McLeod, Sharynne; Verdon, Sarah; Bowen, Caroline

    2013-01-01

    A major challenge for the speech-language pathology profession in many cultures is to address the mismatch between the "linguistic homogeneity of the speech-language pathology profession and the linguistic diversity of its clientele" (Caesar & Kohler, 2007, p. 198). This paper outlines the development of the Multilingual Children with Speech Sound Disorders: Position Paper created to guide speech-language pathologists' (SLPs') facilitation of multilingual children's speech. An international expert panel was assembled comprising 57 researchers (SLPs, linguists, phoneticians, and speech scientists) with knowledge about multilingual children's speech, or children with speech sound disorders. Combined, they had worked in 33 countries and used 26 languages in professional practice. Fourteen panel members met for a one-day workshop to identify key points for inclusion in the position paper. Subsequently, 42 additional panel members participated online to contribute to drafts of the position paper. A thematic analysis was undertaken of the major areas of discussion using two data sources: (a) face-to-face workshop transcript (133 pages) and (b) online discussion artifacts (104 pages). Finally, a moderator with international expertise in working with children with speech sound disorders facilitated the incorporation of the panel's recommendations. The following themes were identified: definitions, scope, framework, evidence, challenges, practices, and consideration of a multilingual audience. The resulting position paper contains guidelines for providing services to multilingual children with speech sound disorders (http://www.csu.edu.au/research/multilingual-speech/position-paper). The paper is structured using the International Classification of Functioning, Disability and Health: Children and Youth Version (World Health Organization, 2007) and incorporates recommendations for (a) children and families, (b) SLPs' assessment and intervention, (c) SLPs' professional

  4. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schorr Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tom Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds…

  5. APPRECIATING SPEECH THROUGH GAMING

    Directory of Open Access Journals (Sweden)

    Mario T Carreon

    2014-06-01

    Full Text Available This paper discusses the Speech and Phoneme Recognition as an Educational Aid for the Deaf and Hearing Impaired (SPREAD application and the ongoing research on its deployment as a tool for motivating deaf and hearing impaired students to learn and appreciate speech. This application uses the Sphinx-4 voice recognition system to analyze the vocalization of the student and provide prompt feedback on their pronunciation. The packaging of the application as an interactive game aims to provide additional, visual motivation for deaf and hearing impaired students to learn and appreciate speech.

  6. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    ... , as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal...... egalitarian reasons for free speech - reasons from overall welfare, from autonomy and from respect for the equality of citizens - it is argued that these reasons outweigh the proposed reasons for curbing culturally offensive speech. Currently controversial cases such as that of the Danish Cartoon Controversy...

  7. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  8. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    Science.gov (United States)

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

    In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions, the triangular (IFGTr) and opercular (IFGOp) portions of the left inferior frontal gyrus and the left posterior middle temporal gyrus (MTGp), responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforcing it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  9. Freedom of racist speech: Ego and expressive threats.

    Science.gov (United States)

    White, Mark H; Crandall, Christian S

    2017-09-01

    Do claims of "free speech" provide cover for prejudice? We investigate whether this defense of racist or hate speech serves as a justification for prejudice. In a series of 8 studies (N = 1,624), we found that explicit racial prejudice is a reliable predictor of the "free speech defense" of racist expression. Participants endorsed free speech values for singing racist songs or posting racist comments on social media; people high in prejudice endorsed free speech more than people low in prejudice (meta-analytic r = .43). This endorsement was not principled: high levels of prejudice did not predict endorsement of free speech values when identical speech was directed at coworkers or the police. Participants low in explicit racial prejudice actively avoided endorsing free speech values in racialized conditions compared to nonracial conditions, but participants high in racial prejudice increased their endorsement of free speech values in racialized conditions. Three experiments failed to find evidence that defense of racist speech by the highly prejudiced was based in self-relevant or self-protective motives. Two experiments found evidence that the free speech argument protected participants' own freedom to express their attitudes; the defense of others' racist speech seems motivated more by threats to autonomy than threats to self-regard. These studies serve as an elaboration of the Justification-Suppression Model (Crandall & Eshleman, 2003) of prejudice expression. The justification of racist speech by endorsing fundamental political values can serve to buffer racial and hate speech from normative disapproval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    International Nuclear Information System (INIS)

    Holzrichter, J.F.; Ng, L.C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs
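
    The per-frame feature-vector idea underlying this coding approach can be sketched without the EM sensor. Below is a minimal, hypothetical Python illustration that frames a signal and computes a toy (energy, zero-crossing-rate) vector per frame; the paper's actual method derives much richer per-pitch-period features from combined EM and acoustic measurements.

```python
import math

def frame_features(signal, frame_len=160, hop=80):
    """Split a signal into overlapping frames and compute a simple
    (energy, zero-crossing-rate) feature vector per frame.
    Illustrative only: real speech coders use far richer features."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Mean-square energy of the frame.
        energy = sum(x * x for x in frame) / frame_len
        # Fraction of adjacent sample pairs whose signs differ.
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

# A 100 Hz sine sampled at 8 kHz: steady energy, low zero-crossing rate.
sig = [math.sin(2 * math.pi * 100 * n / 8000) for n in range(800)]
features = frame_features(sig)
```

    With 800 samples, frames of 160 and a hop of 80, this yields 9 feature vectors, one per frame.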

  11. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  12. Speech and language support: How physicians can identify and treat speech and language delays in the office setting

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society’s Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children’s speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation. PMID:24627648

  13. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread use of wavelet analysis in different applications of speech processing. The author examines developments and research across these applications and summarizes the state-of-the-art research on wavelets in speech processing.
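
    As a minimal taste of the wavelet analysis the book surveys, the sketch below implements a single level of the Haar discrete wavelet transform in plain Python. This is illustrative only; practical speech work uses deeper decompositions and other wavelet families (e.g. via a library such as PyWavelets).

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients; assumes an
    even-length input sequence."""
    s = 1 / math.sqrt(2)
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]  # local averages (low-pass)
    detail = [(a - b) * s for a, b in pairs]  # local differences (high-pass)
    return approx, detail

approx, detail = haar_dwt([4.0, 2.0, 6.0, 6.0])
```

    The transform is orthonormal, so the total energy of the coefficients equals the energy of the input signal.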

  14. Weightlessness - A case history. [for Skylab 2 crewmen

    Science.gov (United States)

    Kerwin, J. P.

    1975-01-01

    A review of overall bodily system functioning aboard Skylab II after 20 days of weightlessness is presented. Condition of eyes, ears, nose and throat, gastrointestinal tract, vestibular organs, cardiovascular system, musculoskeletal system, sleep, general appearance, skin, abdomen, and extremities is summarized. The general health of the crewmen is good, although there are some slight anomalies, such as weight loss, dry skin, nasal speech, and paresthesia of the soles of the feet.

  15. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing that reach beyond nonlinear techniques alone, exploiting heuristic and psychological models of human interaction in order to succeed in implementing socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease, attempts to improve the effectiveness and performa...

  16. Speech and non-speech processing in children with phonological disorders: an electrophysiological study

    Directory of Open Access Journals (Sweden)

    Isabela Crivellaro Gonçalves

    2011-01-01

    Full Text Available OBJECTIVE: To determine whether neurophysiological auditory brainstem responses to clicks and repeated speech stimuli differ between typically developing children and children with phonological disorders. INTRODUCTION: Phonological disorders are language impairments resulting from inadequate use of adult phonological language rules and are among the most common speech and language disorders in children (prevalence: 8 - 9%. Our hypothesis is that children with phonological disorders have basic differences in the way that their brains encode acoustic signals at brainstem level when compared to normal counterparts. METHODS: We recorded click and speech evoked auditory brainstem responses in 18 typically developing children (control group and in 18 children who were clinically diagnosed with phonological disorders (research group. The age range of the children was from 7-11 years. RESULTS: The research group exhibited significantly longer latency responses to click stimuli (waves I, III and V and speech stimuli (waves V and A when compared to the control group. DISCUSSION: These results suggest that the abnormal encoding of speech sounds may be a biological marker of phonological disorders. However, these results cannot define the biological origins of phonological problems. We also observed that speech-evoked auditory brainstem responses had a higher specificity/sensitivity for identifying phonological disorders than click-evoked auditory brainstem responses. CONCLUSIONS: Early stages of the auditory pathway processing of an acoustic stimulus are not similar in typically developing children and those with phonological disorders. These findings suggest that there are brainstem auditory pathway abnormalities in children with phonological disorders.

  17. Conflict monitoring in speech processing : An fMRI study of error detection in speech production and perception

    NARCIS (Netherlands)

    Gauvin, Hanna; De Baene, W.; Brass, Marcel; Hartsuiker, Robert

    2016-01-01

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated

  18. Religious Speech in the Military: Freedoms and Limitations

    Science.gov (United States)

    2011-01-01

    abridging the freedom of speech.” Speech is construed broadly and includes both oral and written speech, as well as expressive conduct and displays when … intended to convey a message that is likely to be understood. Religious speech is certainly included. As a bedrock constitutional right, freedom of speech has … to good order and discipline or of a nature to bring discredit upon the armed forces), the First Amendment’s freedom of speech will not provide them

  19. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test and received speech. Several tests have been made on a test speech sample of a single speaker with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the transmission channel used, and their correlation has been observed.
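
    The dynamic-time-warping comparison at the heart of this method can be sketched as follows: a generic textbook DTW with an absolute-difference local cost, not the authors' exact configuration. DTW's appeal for degraded-speech comparison is that it aligns sequences that differ in timing.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D
    sequences, using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A time-stretched copy of a sequence aligns at zero cost, whereas a
# sample-by-sample comparison would report a spurious difference.
ref = [0.0, 1.0, 2.0, 1.0, 0.0]
stretched = [0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0]
score = dtw_distance(ref, stretched)
```

    Here `score` is 0.0 because the warping path absorbs the duplicated samples.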

  20. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  1. Detection of target phonemes in spontaneous and read speech.

    Science.gov (United States)

    Mehta, G; Cutler, A

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalise to the recognition of spontaneous speech. In the present study listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognising speech.

  2. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has presented an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...
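
    As a toy stand-in for the decision rule such a VAD formulates, the sketch below flags frames as speech or non-speech from frame energy alone; the statistical VADs the chapter surveys use much richer discriminative features and decision rules.

```python
def energy_vad(frames, threshold_ratio=0.1):
    """Label each frame as speech (True) or non-speech (False) by
    comparing its mean-square energy to a fraction of the peak frame
    energy. A deliberately simple illustration of a VAD decision rule."""
    energies = [sum(x * x for x in f) / len(f) for f in frames]
    threshold = threshold_ratio * max(energies)
    return [e >= threshold for e in energies]

# Two loud frames surrounded by near-silence.
frames = [[0.01] * 4,
          [0.9, -0.8, 0.7, -0.9],
          [0.8, -0.9, 0.9, -0.7],
          [0.02] * 4]
decisions = energy_vad(frames)
```

    A decision rule like this breaks down in noise, which is exactly why the chapter focuses on robust statistical VADs.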

  3. Religion, hate speech, and non-domination

    OpenAIRE

    Bonotti, Matteo

    2017-01-01

    In this paper I argue that one way of explaining what is wrong with hate speech is by critically assessing what kind of freedom free speech involves and, relatedly, what kind of freedom hate speech undermines. More specifically, I argue that the main arguments for freedom of speech (e.g. from truth, from autonomy, and from democracy) rely on a “positive” conception of freedom intended as autonomy and self-mastery (Berlin, 2006), and can only partially help us to understand what is wrong with ...

  4. Modelling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    Jørgensen and Dau (J Acoust Soc Am 130:1475-1487, 2011) proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech...... subjected to phase jitter, a condition in which the spectral structure of the speech signal is strongly affected, while the broadband temporal envelope is kept largely intact. In contrast, the effects of this distortion can be predicted successfully by the spectro-temporal modulation...... suggest that the SNRenv might reflect a powerful decision metric, while some explicit across-frequency analysis seems crucial in some conditions. How such across-frequency analysis is "realized" in the auditory system remains unresolved....

  5. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. The book offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; and discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks…

  6. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  7. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean-absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
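
    The reported agreement statistics can be computed for any prediction/measurement pair. The sketch below evaluates r² and mean-absolute error on hypothetical SRT values (invented for illustration, not the paper's data).

```python
def r_squared_and_mae(pred, meas):
    """Square of the Pearson correlation coefficient, and the
    mean-absolute error, between predicted and measured values."""
    n = len(pred)
    mp = sum(pred) / n
    mm = sum(meas) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(pred, meas))
    vp = sum((p - mp) ** 2 for p in pred)
    vm = sum((m - mm) ** 2 for m in meas)
    r2 = cov * cov / (vp * vm)
    mae = sum(abs(p - m) for p, m in zip(pred, meas)) / n
    return r2, mae

# Hypothetical SRTs in dB: model predictions vs. measurements.
r2, mae = r_squared_and_mae([-6.0, -4.0, -2.0], [-6.5, -4.0, -1.5])
```

    Note that a high r² alone does not guarantee accurate predictions; the MAE captures the absolute offset in dB, which is why the paper reports both.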

  8. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  9. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  10. Regulation of speech in multicultural societies

    NARCIS (Netherlands)

    Maussen, M.; Grillo, R.

    2015-01-01

    This book focuses on the way in which public debate and legal practice intersect when it comes to the value of free speech and the need to regulate "offensive", "blasphemous" or "hate" speech, especially, though not exclusively where such speech is thought to be offensive to members of ethnic and

  11. ACOUSTIC SPEECH RECOGNITION FOR MARATHI LANGUAGE USING SPHINX

    Directory of Open Access Journals (Sweden)

    Aman Ankit

    2016-09-01

    Full Text Available Speech recognition, or speech-to-text processing, is the process of recognizing human speech by computer and converting it into text. In speech recognition, transcripts are created from recordings of speech together with their text transcriptions. Speech-based applications that include Natural Language Processing (NLP) techniques are popular and an active area of research; input to such applications is in natural language, and output is obtained in natural language. Speech recognition mostly revolves around three approaches, namely the acoustic-phonetic approach, the pattern recognition approach and the artificial intelligence approach. Creation of an acoustic model requires a large database of speech and training algorithms. The output of an ASR system is the recognition and translation of spoken language into text by computers and computerized devices. ASR today finds enormous application in tasks that require human-machine interfaces, such as voice dialing. Our key contribution in this paper is to create corpora for the Marathi language and explore the use of the Sphinx engine for automatic speech recognition
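
    Corpus preparation for CMU Sphinx acoustic-model training typically involves a `.fileids` list of utterance ids and a `.transcription` file with lines of the form `<s> words </s> (utterance_id)`. The sketch below writes these two files for a hypothetical two-utterance romanized Marathi corpus; the utterance ids and words are invented for illustration.

```python
import os
import tempfile

# Hypothetical recordings and their transcripts (romanized Marathi).
corpus = {
    "spk1_001": "namaskar",
    "spk1_002": "dhanyavad",
}

def write_sphinx_files(corpus, outdir):
    """Emit the .fileids and .transcription files expected by CMU
    Sphinx training tools: one utterance id per line, and one
    '<s> words </s> (utterance_id)' line per transcript."""
    fileids_path = os.path.join(outdir, "train.fileids")
    trans_path = os.path.join(outdir, "train.transcription")
    with open(fileids_path, "w") as f:
        for utt in corpus:
            f.write(utt + "\n")
    with open(trans_path, "w") as f:
        for utt, text in corpus.items():
            f.write("<s> %s </s> (%s)\n" % (text, utt))
    return fileids_path, trans_path

outdir = tempfile.mkdtemp()
fileids_path, trans_path = write_sphinx_files(corpus, outdir)
```

    Each utterance id must match the basename of the corresponding audio file in the wav directory of the training setup.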

  12. Is Birdsong More Like Speech or Music?

    Science.gov (United States)

    Shannon, Robert V

    2016-04-01

    Music and speech share many acoustic cues but not all are equally important. For example, harmonic pitch is essential for music but not for speech. When birds communicate, is their song more like speech or music? A new study contrasting pitch and spectral patterns shows that birds perceive their song more like humans perceive speech. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Effect of "developmental speech and language training through music" on speech production in children with autism spectrum disorders.

    Science.gov (United States)

    Lim, Hayoung A

    2010-01-01

    The study compared the effect of music training, speech training and no-training on the verbal production of children with Autism Spectrum Disorders (ASD). Participants were 50 children with ASD, age range 3 to 5 years, who had previously been evaluated on standard tests of language and level of functioning. They were randomly assigned to one of three 3-day conditions. Participants in music training (n = 18) watched a music video containing 6 songs and pictures of the 36 target words; those in speech training (n = 18) watched a speech video containing 6 stories and pictures, and those in the control condition (n = 14) received no treatment. Participants' verbal production including semantics, phonology, pragmatics, and prosody was measured by an experimenter designed verbal production evaluation scale. Results showed that participants in both music and speech training significantly increased their pre to posttest verbal production. Results also indicated that both high and low functioning participants improved their speech production after receiving either music or speech training; however, low functioning participants showed a greater improvement after the music training than the speech training. Children with ASD perceive important linguistic information embedded in music stimuli organized by principles of pattern perception, and produce the functional speech.

  14. Occult carbon monoxide poisoning.

    Science.gov (United States)

    Kirkpatrick, J N

    1987-01-01

    A syndrome of headache, fatigue, dizziness, paresthesias, chest pain, palpitations and visual disturbances was associated with chronic occult carbon monoxide exposure in 26 patients in a primary care setting. A causal association was supported by finding a source of carbon monoxide in a patient's home, workplace or vehicle; results of screening tests that ruled out other illnesses; an abnormally high carboxyhemoglobin level in 11 of 14 patients tested, and abatement or resolution of symptoms when the source of carbon monoxide was removed. Exposed household pets provided an important clue to the diagnosis in some cases. Recurrent occult carbon monoxide poisoning may be a frequently overlooked cause of persistent or recurrent headache, fatigue, dizziness, paresthesias, abdominal pain, diarrhea and unusual spells.

  15. Speech networks at rest and in action: interactions between functional brain networks controlling speech production

    Science.gov (United States)

    Fuertinger, Stefan

    2015-01-01

    Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlined a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of speech production network. PMID:25673742

  16. Speech Synthesis Applied to Language Teaching.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…

  17. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    Full Text Available In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively

  18. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  19. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Mariève Corbeil

    2013-06-01

    Full Text Available Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants’ attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children’s song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children’s song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  20. Digitized Ethnic Hate Speech: Understanding Effects of Digital Media Hate Speech on Citizen Journalism in Kenya

    Directory of Open Access Journals (Sweden)

    Stephen Gichuhi Kimotho

    2016-06-01

    Full Text Available Ethnicity in Kenya permeates all spheres of life. However, it is in politics that ethnicity is most visible. Election time in Kenya often leads to ethnic competition and hatred, often expressed through various media. Ethnic hate speech characterized the 2007 general elections in party rallies and through text messages, emails, posters and leaflets. This resulted in widespread skirmishes that left over 1,200 people dead and many displaced (KNHRC, 2008). In 2013, however, the new battle zone was the war of words on social media platforms. More than at any other time in Kenyan history, Kenyans poured vitriolic ethnic hate speech through digital media like Facebook, Twitter and blogs. Although scholars have studied the role and effects of mainstream media like television and radio in proliferating ethnic hate speech in Kenya (Michael Chege, 2008; Goldstein & Rotich, 2008a; Ismail & Deane, 2008; Jacqueline Klopp & Prisca Kamungi, 2007), little has been done in regard to social media. This paper investigated the nature of digitized hate speech by describing the forms of ethnic hate speech on social media in Kenya and the effects of ethnic hate speech on Kenyans’ perception of ethnic entities, ethnic conflict and the ethics of citizen journalism. This study adopted a descriptive interpretive design and utilized Austin’s Speech Act Theory, which explains the use of language to achieve desired purposes and direct behaviour (Tarhom & Miracle, 2013). Content published between January and April 2013 from six purposefully identified blogs was analysed. Questionnaires were used to collect data from university students, as they form a good sample of the Kenyan population, are most active on social media and are drawn from all parts of the country. Qualitative data were analysed using NVIVO 10 software, while responses from the questionnaire were analysed using IBM SPSS version 21. The findings indicated that Facebook and Twitter were the main platforms used to

  1. Speech and nonspeech: What are we talking about?

    Science.gov (United States)

    Maas, Edwin

    2017-08-01

    Understanding of the behavioural, cognitive and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system and (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control. The purpose of this paper is to stimulate thinking about speech production, its disorders and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims - including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.

  2. Noise-robust speech triage.

    Science.gov (United States)

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
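
    The abstract's central finding is that identification works best when the SNR of the test data matches that of the training data, so the system pre-trains models at a lattice of SNR levels and picks the closest one at test time. The sketch below illustrates only that selection step; all names (select_model, snr_lattice) are hypothetical, not the authors' code.

```python
# Sketch of SNR-matched model selection, as described in the abstract:
# models are pre-trained at a lattice of SNR levels, and at test time the
# model whose training SNR is closest to the estimated test SNR is used.
# All names here are illustrative assumptions, not from the paper.

def select_model(models, test_snr):
    """Return the model trained at the SNR (dB) closest to test_snr.

    models: dict mapping training SNR (dB) -> model object.
    """
    best_snr = min(models, key=lambda snr: abs(snr - test_snr))
    return models[best_snr]

# Hypothetical lattice of pre-trained SID models at -10..+20 dB in 5 dB steps.
snr_lattice = {snr: f"sid_model_{snr}dB" for snr in range(-10, 25, 5)}

print(select_model(snr_lattice, 3.2))    # picks the 5 dB model
print(select_model(snr_lattice, -12.0))  # clamps to the lowest lattice point
```

    The same lookup would apply unchanged to the language-, gender-, and diarization models the abstract mentions.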

  3. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers … that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  4. Speech and Debate as Civic Education

    Science.gov (United States)

    Hogan, J. Michael; Kurr, Jeffrey A.; Johnson, Jeremy D.; Bergmaier, Michael J.

    2016-01-01

    In light of the U.S. Senate's designation of March 15, 2016 as "National Speech and Debate Education Day" (S. Res. 398, 2016), it only seems fitting that "Communication Education" devote a special section to the role of speech and debate in civic education. Speech and debate have been at the heart of the communication…

  5. Tuning Neural Phase Entrainment to Speech.

    Science.gov (United States)

    Falk, Simone; Lanzilotti, Cosima; Schön, Daniele

    2017-08-01

    Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
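
    Intertrial coherence, one of the EEG measures named above, has a compact textbook definition: the magnitude of the average unit phase vector across trials, ranging from 0 (random phases) to 1 (identical phase in every trial). The sketch below assumes only that generic definition and is not the study's analysis code.

```python
# Intertrial coherence (ITC) at one frequency/time point:
#   ITC = | (1/N) * sum over trials of exp(i * phase) |
# 0 means phases are uniformly scattered across trials; 1 means perfect
# phase alignment. Generic textbook formulation, not the study's pipeline.
import cmath

def intertrial_coherence(phases):
    """phases: list of phase angles (radians), one per trial."""
    n = len(phases)
    return abs(sum(cmath.exp(1j * p) for p in phases) / n)

# Perfectly aligned trials -> ITC = 1 (up to rounding).
print(intertrial_coherence([0.3] * 10))
# Opposing phases cancel -> ITC = 0.
print(round(intertrial_coherence([0.0, cmath.pi]), 10))  # 0.0
```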

  6. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon Heald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing by masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or

  7. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Full Text Available Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of synchrony measure for biometric identity verification based on talking faces is experimented on the BANCA database.
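
    As a point of reference for the correspondence measures this review surveys, a minimal audiovisual synchrony baseline is the Pearson correlation between a per-frame audio feature (e.g., log energy) and a visual feature (e.g., mouth-opening height). The paper covers more elaborate measures (joint feature-space transformations, statistical dependency); the feature names and values below are hypothetical.

```python
# Minimal audio-visual synchrony baseline: Pearson correlation between a
# per-frame audio feature and a per-frame visual feature. A high correlation
# suggests the audio and the talking face belong together. This is only a
# common baseline, not the specific measure proposed in the reviewed work.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

audio_energy  = [0.1, 0.8, 0.9, 0.2, 0.7]   # hypothetical per-frame values
mouth_opening = [0.2, 0.7, 1.0, 0.1, 0.6]
print(round(pearson(audio_energy, mouth_opening), 3))  # close to 1: in sync
```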

  8. The motor theory of speech perception revisited.

    Science.gov (United States)

    Massaro, Dominic W; Chen, Trevor H

    2008-04-01

    Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.

  9. Commercial speech in crisis: Crisis Pregnancy Center regulations and definitions of commercial speech.

    Science.gov (United States)

    Gilbert, Kathryn E

    2013-02-01

    Recent attempts to regulate Crisis Pregnancy Centers, pseudoclinics that surreptitiously aim to dissuade pregnant women from choosing abortion, have confronted the thorny problem of how to define commercial speech. The Supreme Court has offered three potential answers to this definitional quandary. This Note uses the Crisis Pregnancy Center cases to demonstrate that courts should use one of these solutions, the factor-based approach of Bolger v. Youngs Drugs Products Corp., to define commercial speech in the Crisis Pregnancy Center cases and elsewhere. In principle and in application, the Bolger factor-based approach succeeds in structuring commercial speech analysis at the margins of the doctrine.

  10. Importância do fonoaudiólogo no acompanhamento de indivíduos com hipotireoidismo congênito Speech and language pathologist importance in the attendance of individuals with congenital hypothyroidism

    Directory of Open Access Journals (Sweden)

    Mariana Germano Gejão

    2008-01-01

    the children that accomplish the treatment can present development disturbances. The National Neonatal Screening Program, instituted by the Health Ministry, foresees the individuals' longitudinal attendance by a multidisciplinary team. However, the Speech and Language Pathologist is not included in this team. Considering the occurrence of communication disturbances in these individuals, a bibliographical assessment was carried out in the Lilacs, MedLine and PubMed databases, covering the period from 1987 to 2007, regarding the disturbances in development abilities arising from congenital hypothyroidism. PURPOSE: to check, in the scientific literature, for the presence of development alterations in individuals with congenital hypothyroidism and to consider the importance of the speech and language pathologist's performance, together with a multidisciplinary team specialized in their attendance. CONCLUSION: the literature reports disturbances in development abilities (motor, cognitive, linguistic and self-care) and stresses that children with congenital hypothyroidism are at risk for disturbances in linguistic development and, therefore, need longitudinal attendance of their communicative development. The importance of the speech and language pathologist's performance in Neonatal Screening Programs credentialed by the Health Ministry becomes evident. It is also worth emphasizing the need for investigations regarding the other metabolic alterations covered in these programs, in which the speech and language pathologist can act to prevent, enable and rehabilitate communication disturbances, contributing to teamwork and promoting health in this population.

  11. Neurophysiological influence of musical training on speech perception.

    Science.gov (United States)

    Shahin, Antoine J

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing and the extent of this influence remains a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently provides further hypotheses on the neurophysiological implications of musical training on speech perception in adverse acoustical environments and in individuals with HL.

  12. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  13. LIBERDADE DE EXPRESSÃO E DISCURSO DO ÓDIO NO BRASIL / FREE SPEECH AND HATE SPEECH IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Nevita Maria Pessoa de Aquino Franca Luna

    2014-12-01

    Full Text Available The purpose of this article is to analyze the restriction of free speech when it comes close to hate speech. In this perspective, the aim of this study is to answer the question: what is the understanding adopted by the Brazilian Supreme Court in cases involving the conflict between free speech and hate speech? The methodology combines a bibliographic review of the theoretical assumptions of the research (the concepts of free speech and hate speech, and the understanding of the rights of defense of traditionally discriminated minorities) with empirical research (documentary and jurisprudential analysis of cases judged by the American, German and Brazilian courts). Firstly, free speech is discussed, defining its meaning, content and purpose. Then, hate speech is identified as an inhibitor of free speech for offending members of traditionally discriminated minorities, who are outnumbered or in a situation of cultural, socioeconomic or political subordination. Subsequently, some aspects of the American (negative freedom) and German (positive freedom) models are discussed, to demonstrate that different cultures adopt different legal solutions. In the end, it is concluded that the Brazilian understanding approximates the German doctrine, based on the analysis of landmark cases such as those of the publisher Siegfried Ellwanger (2003) and the Unidos do Viradouro samba school (2008). The understanding in Brazil, a multicultural country made up of different ethnicities, leads to a new process of defending minorities which, despite involving the collision of fundamental rights (dignity, equality and freedom), is still constrained by the barriers of a contemporary pluralistic democracy.

  14. Speech production in amplitude-modulated noise

    DEFF Research Database (Denmark)

    Macdonald, Ewen N; Raufer, Stefan

    2013-01-01

    The Lombard effect refers to the phenomenon where talkers automatically increase their level of speech in a noisy environment. While many studies have characterized how the Lombard effect influences different measures of speech production (e.g., F0, spectral tilt, etc.), few have investigated the consequences of temporally fluctuating noise. In the present study, 20 talkers produced speech in a variety of noise conditions, including both steady-state and amplitude-modulated white noise. While listening to noise over headphones, talkers produced randomly generated five-word sentences. Similar … of noisy environments and will alter their speech accordingly.

  15. Free Speech Yearbook 1980.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The 11 articles in this collection deal with theoretical and practical freedom of speech issues. The topics covered are (1) the United States Supreme Court and communication theory; (2) truth, knowledge, and a democratic respect for diversity; (3) denial of freedom of speech in Jock Yablonski's campaign for the presidency of the United Mine…

  16. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  17. Speech enhancement on smartphone voice recording

    International Nuclear Information System (INIS)

    Atmaja, Bagus Tris; Farid, Mifta Nur; Arifianto, Dhany

    2016-01-01

    Speech enhancement is a challenging task in audio signal processing: enhancing the quality of a targeted speech signal while suppressing other noise. Speech enhancement algorithms developed rapidly, from spectral subtraction and Wiener filtering through the spectral-amplitude MMSE estimator to Non-negative Matrix Factorization (NMF). The smartphone, as a revolutionary device, is now used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which is why the single-channel NMF algorithm is widely used for this purpose of speech enhancement. This paper evaluates speech enhancement on smartphone voice recordings using the algorithms mentioned previously. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others in spectrogram and PESQ-score evaluations. (paper)
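
    Spectral subtraction, the earliest method named above, can be sketched in a few lines: subtract an estimated noise magnitude spectrum from the noisy speech spectrum bin by bin, flooring negative values. The FFT front end is omitted and the spectra below are hypothetical values, not data from the paper.

```python
# Spectral subtraction on one magnitude spectrum: the noise magnitude
# estimate (e.g., averaged over silence frames) is subtracted bin-by-bin
# from the noisy speech spectrum, and negative results are clamped to a
# floor to avoid negative magnitudes. Framing/FFT/overlap-add are omitted.

def spectral_subtract(noisy_mag, noise_mag, floor=0.0):
    """Subtract a noise magnitude estimate bin-by-bin, clamped at `floor`."""
    return [max(y - n, floor) for y, n in zip(noisy_mag, noise_mag)]

noisy = [3.0, 1.0, 4.0, 0.5]    # hypothetical noisy-speech magnitude spectrum
noise = [1.0, 1.5, 1.0, 0.25]   # hypothetical noise estimate
print(spectral_subtract(noisy, noise))  # [2.0, 0.0, 3.0, 0.25]
```

    The hard clamp is what produces the "musical noise" artifacts that motivated the later Wiener and MMSE estimators mentioned in the abstract.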

  18. Hearing speech in music.

    Science.gov (United States)

    Ekström, Seth-Reino; Borg, Erik

    2011-01-01

    The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P < …); … tempo had the largest effect, and high octave and slow tempo the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P < …). Music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  19. Speech networks at rest and in action: interactions between functional brain networks controlling speech production.

    Science.gov (United States)

    Simonyan, Kristina; Fuertinger, Stefan

    2015-04-01

    Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlined a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of speech production network. Copyright © 2015 the American Physiological Society.
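
    The graph-theoretical step described above can be illustrated with a toy version of hub detection: threshold an interregional correlation matrix into an undirected graph and rank regions by degree (number of suprathreshold connections). The matrix, threshold, and region labels below are hypothetical, not the study's data.

```python
# Toy hub detection from an interregional correlation matrix: treat every
# |correlation| >= threshold as an edge and count each region's connections.
# Highly connected regions are candidate hubs. Illustrative values only.

def degrees(corr, names, threshold):
    """For each region, count connections to other regions whose
    |correlation| is at or above the threshold."""
    deg = {}
    for i, name in enumerate(names):
        deg[name] = sum(
            1 for j in range(len(names))
            if j != i and abs(corr[i][j]) >= threshold
        )
    return deg

names = ["LMC", "IFG", "STG", "SMA"]   # illustrative region labels
corr = [
    [1.0, 0.8, 0.7, 0.6],
    [0.8, 1.0, 0.5, 0.2],
    [0.7, 0.5, 1.0, 0.1],
    [0.6, 0.2, 0.1, 1.0],
]
print(degrees(corr, names, threshold=0.55))  # LMC connects to all three others
```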

  20. Segmental intelligibility of synthetic speech produced by rule.

    Science.gov (United States)

    Logan, J S; Greene, B G; Pisoni, D B

    1989-08-01

    This paper reports the results of an investigation that employed the modified rhyme test (MRT) to measure the segmental intelligibility of synthetic speech generated automatically by rule. Synthetic speech produced by ten text-to-speech systems was studied and compared to natural speech. A variation of the standard MRT was also used to study the effects of response set size on perceptual confusions. Results indicated that the segmental intelligibility scores formed a continuum. Several systems displayed very high levels of performance that were close to or equal to scores obtained with natural speech; other systems displayed substantially worse performance compared to natural speech. The overall performance of the best system, DECtalk--Paul, was equivalent to the data obtained with natural speech for consonants in syllable-initial position. The findings from this study are discussed in terms of the use of a set of standardized procedures for measuring intelligibility of synthetic speech under controlled laboratory conditions. Recent work investigating the perception of synthetic speech under more severe conditions in which greater demands are made on the listener's processing resources is also considered. The wide range of intelligibility scores obtained in the present study demonstrates important differences in perception and suggests that not all synthetic speech is perceptually equivalent to the listener.
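
    Because the MRT is a closed-set test (typically six response alternatives), raw percent correct includes a guessing component. A standard chance correction for an n-alternative closed set, shown here as a generic formula rather than this paper's own analysis, is adjusted = (p - 1/n) / (1 - 1/n).

```python
# Chance correction for an n-alternative closed-set intelligibility test:
#   adjusted = (p - 1/n) / (1 - 1/n)
# where p is the proportion correct and n the number of response
# alternatives. Pure guessing maps to 0, perfect performance to 1.
# Generic formula, not necessarily the scoring used in the paper above.

def chance_corrected(p, n_alternatives):
    chance = 1.0 / n_alternatives
    return (p - chance) / (1.0 - chance)

print(round(chance_corrected(0.95, 6), 3))     # near-natural-speech score
print(round(chance_corrected(1.0 / 6, 6), 3))  # pure guessing -> 0.0
```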

  1. Segmental intelligibility of synthetic speech produced by rule

    Science.gov (United States)

    Logan, John S.; Greene, Beth G.; Pisoni, David B.

    2012-01-01

    This paper reports the results of an investigation that employed the modified rhyme test (MRT) to measure the segmental intelligibility of synthetic speech generated automatically by rule. Synthetic speech produced by ten text-to-speech systems was studied and compared to natural speech. A variation of the standard MRT was also used to study the effects of response set size on perceptual confusions. Results indicated that the segmental intelligibility scores formed a continuum. Several systems displayed very high levels of performance that were close to or equal to scores obtained with natural speech; other systems displayed substantially worse performance compared to natural speech. The overall performance of the best system, DECtalk—Paul, was equivalent to the data obtained with natural speech for consonants in syllable-initial position. The findings from this study are discussed in terms of the use of a set of standardized procedures for measuring intelligibility of synthetic speech under controlled laboratory conditions. Recent work investigating the perception of synthetic speech under more severe conditions in which greater demands are made on the listener’s processing resources is also considered. The wide range of intelligibility scores obtained in the present study demonstrates important differences in perception and suggests that not all synthetic speech is perceptually equivalent to the listener. PMID:2527884

  2. Empathy, Ways of Knowing, and Interdependence as Mediators of Gender Differences in Attitudes toward Hate Speech and Freedom of Speech

    Science.gov (United States)

    Cowan, Gloria; Khatchadourian, Desiree

    2003-01-01

    Women are more intolerant of hate speech than men. This study examined relationality measures as mediators of gender differences in the perception of the harm of hate speech and the importance of freedom of speech. Participants were 107 male and 123 female college students. Questionnaires assessed the perceived harm of hate speech, the importance…

  3. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  4. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD).

    Science.gov (United States)

    Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela

    2015-03-01

    Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.
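
The 71.4 % validation criterion quoted above is five times chance performance; the arithmetic implies seven response alternatives (5/7 ≈ 71.4 %), whereas the six listed categories alone would give 5/6 ≈ 83.3 %. A minimal sketch of this criterion (the function name is illustrative):

```python
def times_chance_threshold(k, n_alternatives):
    """Recognition-rate criterion of k times chance performance in an
    n-alternative forced-choice test, as a percentage."""
    return 100.0 * k / n_alternatives

# Five times chance reproduces the reported 71.4 % only with seven
# response alternatives; six alternatives would give 83.3 %.
seven = times_chance_threshold(5, 7)  # 71.43...
six = times_chance_threshold(5, 6)    # 83.33...
```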

  5. Imitation and speech: commonalities within Broca's area.

    Science.gov (United States)

    Kühn, Simone; Brass, Marcel; Gallinat, Jürgen

    2013-11-01

The so-called embodiment of communication has attracted considerable interest. Recently, a growing number of studies have proposed a link between Broca's area's involvement in action processing and its involvement in speech. The present quantitative meta-analysis set out to test whether neuroimaging studies on imitation and overt speech show overlap within inferior frontal gyrus. By means of activation likelihood estimation (ALE), we investigated concurrence of brain regions activated by object-free hand imitation studies as well as overt speech studies including simple syllable and more complex word production. We found direct overlap between imitation and speech in bilateral pars opercularis (BA 44) within Broca's area. Subtraction analyses revealed no unique localization for either speech or imitation. To verify the potential of ALE subtraction analysis to detect unique involvement within Broca's area, we contrasted the results of a meta-analysis on motor inhibition and imitation and found separable regions involved for imitation. This is the first meta-analysis to compare the neural correlates of imitation and overt speech. The results are in line with the proposed evolutionary roots of speech in imitation.

  6. Design and realisation of an audiovisual speech activity detector

    NARCIS (Netherlands)

    Van Bree, K.C.

    2006-01-01

For many speech telecommunication technologies a robust speech activity detector is important. An audio-only speech detector will give false positives when the interfering signal is speech or has speech characteristics. The video modality is suitable for solving this problem. In this report the approach

  7. Utility of TMS to understand the neurobiology of speech

    Directory of Open Access Journals (Sweden)

    Takenobu eMurakami

    2013-07-01

According to a traditional view, speech perception and production are processed largely separately in sensory and motor brain areas. Recent psycholinguistic and neuroimaging studies provide novel evidence that the sensory and motor systems dynamically interact in speech processing, by demonstrating that speech perception and imitation share regional brain activations. However, the exact nature and mechanisms of these sensorimotor interactions are not yet completely understood. Transcranial magnetic stimulation (TMS) has often been used in the cognitive neurosciences, including speech research, as a complementary technique to behavioral and neuroimaging studies. Here we provide an up-to-date review focusing on TMS studies that explored speech perception and imitation. Single-pulse TMS of the primary motor cortex (M1) demonstrated a speech-specific and somatotopically specific increase of excitability of the M1 lip area during speech perception (listening to speech or lip reading). A paired-coil TMS approach showed increases in effective connectivity from brain regions that are involved in speech processing to the M1 lip area when listening to speech. TMS in virtual-lesion mode applied to speech processing areas modulated performance of phonological recognition and imitation of perceived speech. In summary, TMS is an innovative tool to investigate processing of speech perception and imitation. TMS studies have provided strong evidence that the sensory system is critically involved in mapping sensory input onto motor output and that the motor system plays an important role in speech perception.

  8. LinguaTag: an Emotional Speech Analysis Application

    OpenAIRE

    Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros

    2008-01-01

    The analysis of speech, particularly for emotional content, is an open area of current research. Ongoing work has developed an emotional speech corpus for analysis, and defined a vowel stress method by which this analysis may be performed. This paper documents the development of LinguaTag, an open source speech analysis software application which implements this vowel stress emotional speech analysis method developed as part of research into the acoustic and linguistic correlates of emotional...

  9. Correlational Analysis of Speech Intelligibility Tests and Metrics for Speech Transmission

    Science.gov (United States)

    2017-12-04

…sounds, are more prone to masking than the high-energy, wide-spectrum vowels. Such contaminated speech is still audible but not clear. Thus, speech…

  10. Impairments of speech fluency in Lewy body spectrum disorder.

    Science.gov (United States)

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions. Copyright © 2011 Elsevier Inc. All rights reserved.
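
The fluency measures discussed above (overall speech rate and long between-utterance pauses) can be sketched with toy helpers (names and the 2 s pause threshold are illustrative, not the paper's):

```python
def speech_rate_wpm(n_words, speaking_time_s):
    """Overall speech rate in words per minute."""
    return 60.0 * n_words / speaking_time_s

def pause_stats(gaps_s, threshold_s=2.0):
    """Count and total duration of long between-utterance pauses.
    The 2 s threshold is illustrative only."""
    long_gaps = [g for g in gaps_s if g >= threshold_s]
    return len(long_gaps), sum(long_gaps)

rate = speech_rate_wpm(300, 180)                    # 100.0 words/min
n_long, total = pause_stats([0.5, 2.5, 3.0, 1.0])   # 2 pauses, 5.5 s
```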

  11. Disturbance observer-based L1 robust tracking control for hypersonic vehicles with T-S disturbance modeling

    Directory of Open Access Journals (Sweden)

    Yang Yi

    2016-11-01

This article concerns a disturbance observer-based L1 robust anti-disturbance tracking algorithm for the longitudinal models of hypersonic flight vehicles with different kinds of unknown disturbances. On one hand, by applying T-S fuzzy models to represent those modeled disturbances, a disturbance observer relying on T-S disturbance models can be constructed to track the dynamics of exogenous disturbances. On the other hand, L1 index is introduced to analyze the attenuation performance of disturbance for those unmodeled disturbances. By utilizing the existing convex optimization algorithm, a disturbance observer-based proportional-integral-controlled input is proposed such that the stability of hypersonic flight vehicles can be ensured and the tracking error for velocity and altitude in hypersonic flight vehicle models can converge to equilibrium point. Furthermore, the satisfactory disturbance rejection and attenuation with L1 index can be obtained simultaneously. Simulation results on hypersonic flight vehicle models can reflect the feasibility and effectiveness of the proposed control algorithm.

  12. Cognitive functions in Childhood Apraxia of Speech

    NARCIS (Netherlands)

    Nijland, L.; Terband, H.; Maassen, B.

    2015-01-01

    Purpose: Childhood Apraxia of Speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional

  13. Subjective Quality Measurement of Speech Its Evaluation, Estimation and Applications

    CERN Document Server

    Kondo, Kazuhiro

    2012-01-01

It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high quality speech communication. This practical hands-on book shows speech intelligibility measurement methods so that readers can start measuring or estimating the speech intelligibility of their own systems. The book also introduces subjective and objective speech quality measures, and describes in detail speech intelligibility measurement methods. It introduces a diagnostic rhyme test which uses rhyming word pairs, and includes: an investigation into the effect of word familiarity on speech intelligibility; speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test; and estimation of speech intelligibility using objective measures, including the ITU-standard PESQ measures and automatic speech recognizers.

  14. Comparison of two speech privacy measurements, articulation index (AI) and speech privacy noise isolation class (NIC'), in open workplaces

    Science.gov (United States)

    Yoon, Heakyung C.; Loftness, Vivian

    2002-05-01

    Lack of speech privacy has been reported to be the main dissatisfaction among occupants in open workplaces, according to workplace surveys. Two speech privacy measurements, Articulation Index (AI), standardized by the American National Standards Institute in 1969, and Speech Privacy Noise Isolation Class (NIC', Noise Isolation Class Prime), adapted from Noise Isolation Class (NIC) by U. S. General Services Administration (GSA) in 1979, have been claimed as objective tools to measure speech privacy in open offices. To evaluate which of them, normal privacy for AI or satisfied privacy for NIC', is a better tool in terms of speech privacy in a dynamic open office environment, measurements were taken in the field. AIs and NIC's in the different partition heights and workplace configurations have been measured following ASTM E1130 (Standard Test Method for Objective Measurement of Speech Privacy in Open Offices Using Articulation Index) and GSA test PBS-C.1 (Method for the Direct Measurement of Speech-Privacy Potential (SPP) Based on Subjective Judgments) and PBS-C.2 (Public Building Service Standard Method of Test Method for the Sufficient Verification of Speech-Privacy Potential (SPP) Based on Objective Measurements Including Methods for the Rating of Functional Interzone Attenuation and NC-Background), respectively.
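
As a sketch of how a band-weighted index like AI is computed (a simplification of the ANSI method: per-band SNR clipped to a 0-30 dB audible range, normalized, and combined with importance weights; the band count and weights below are hypothetical, not the standard's):

```python
def articulation_index(band_snrs_db, band_weights):
    """Simplified Articulation Index: each band's SNR (dB) is clipped
    to the 0-30 dB audible range, normalized, weighted by the band's
    importance, and summed.  Weights must sum to 1."""
    assert abs(sum(band_weights) - 1.0) < 1e-9
    ai = 0.0
    for snr, w in zip(band_snrs_db, band_weights):
        audibility = min(max(snr, 0.0), 30.0) / 30.0
        ai += w * audibility
    return ai

# Four hypothetical bands; AI near 1 means full speech audibility,
# AI near 0 means high speech privacy at the listener's position.
ai = articulation_index([25.0, 12.0, 3.0, -5.0],
                        [0.20, 0.35, 0.30, 0.15])
```

For privacy assessment the reading is inverted: lower AI at a neighboring workstation means less intelligible intruding speech, hence better speech privacy.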

  15. SUSTAINABILITY IN THE BOWELS OF SPEECHES

    Directory of Open Access Journals (Sweden)

    Jadir Mauro Galvao

    2012-10-01

The theme of sustainability has not yet managed to become an integral part of the theoretical repertoire behind our most everyday actions, though it often visits our thoughts and permeates many of our speeches. The big event of 2012, the Rio+20 meeting, gathered gazes from all corners of the planet around this burning theme, yet we still move forward timidly. Although it is not very clear what the term sustainability encompasses, it does not sound entirely strange: we associate it with things like ecology, the planet, waste emitted by factory smokestacks, deforestation, recycling, and global warming. Our goal in this article, however, is less to clarify the term conceptually and more to observe how it appears in the speeches of that conference. When the competent authorities talk about sustainability, what do they relate it to? We intend to investigate, in the lines and between the lines of these speeches, the assumptions associated with the term. We will therefore analyze the speech of the People's Summit, the opening speech of President Dilma, and the emblematic speech of the President of Uruguay, José "Pepe" Mujica.

  16. Modeling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Dau, Torsten

    2012-01-01

… understanding speech when more than one person is talking, even when reduced audibility has been fully compensated for by a hearing aid. The reasons for these difficulties are not well understood. This presentation highlights recent concepts of the monaural and binaural signal processing strategies employed by the normal as well as impaired auditory system. Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech. Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting …
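
The SNRenv idea can be illustrated with a bare-bones toy computation: extract a temporal envelope, take its normalized AC power, and form the speech-to-noise ratio of envelope powers in dB. This is a drastic simplification (the actual sEPSM filters envelopes into modulation bands and aggregates across audio channels); all names here are illustrative:

```python
import math

def envelope(x, win=16):
    """Crude temporal envelope: rectify, then trailing moving average."""
    rect = [abs(v) for v in x]
    return [sum(rect[max(0, i - win):i + 1]) / (i + 1 - max(0, i - win))
            for i in range(len(rect))]

def env_power(env):
    """AC power of the envelope, normalized by its squared mean."""
    m = sum(env) / len(env)
    return sum((v - m) ** 2 for v in env) / len(env) / (m * m + 1e-12)

def snr_env_db(mix, noise):
    """Envelope-domain SNR in dB: the clean-speech envelope power is
    estimated as P(mix) - P(noise), as in the sEPSM family."""
    p_mix, p_noise = env_power(envelope(mix)), env_power(envelope(noise))
    return 10.0 * math.log10(max(p_mix - p_noise, 1e-12) / p_noise)

# Toy demo: a 4 Hz amplitude-modulated carrier (speech-like envelope
# fluctuations) mixed with a steady carrier (noise-like, flat envelope).
fs = 1000
t = [i / fs for i in range(fs)]
speech = [(1 + 0.8 * math.sin(2 * math.pi * 4 * x)) *
          math.sin(2 * math.pi * 100 * x) for x in t]
noise = [0.5 * math.sin(2 * math.pi * 97 * x) for x in t]
mix = [s + n for s, n in zip(speech, noise)]
```

Here `snr_env_db(mix, noise)` comes out positive, reflecting that the modulated signal carries far more envelope fluctuation than the steady noise.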

  17. Parent-child interaction in motor speech therapy.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed with direct motor speech intervention for children with speech sound disorders. Eighty-four preschool-age children with speech sound disorders were provided either high-intensity (2×/week/10 weeks) or low-intensity (1×/week/10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item and then followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items, where a greater number of sessions positively influenced parent learning of treatment skills and child behaviors. The adapted PCIOs is reliable and sensitive enough to monitor the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation: Parent-centered therapy is considered a cost-effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments such as developmental motor speech interventions that require a high degree of skill and training. For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of the parent-child interactions during…

  18. Speech-enabled Computer-aided Translation

    DEFF Research Database (Denmark)

    Mesa-Lao, Bartolomé

    2014-01-01

    The present study has surveyed post-editor trainees’ views and attitudes before and after the introduction of speech technology as a front end to a computer-aided translation workbench. The aim of the survey was (i) to identify attitudes and perceptions among post-editor trainees before performing...... a post-editing task using automatic speech recognition (ASR); and (ii) to assess the degree to which post-editors’ attitudes and expectations to the use of speech technology changed after actually using it. The survey was based on two questionnaires: the first one administered before the participants...

  19. Comment on "Monkey vocal tracts are speech-ready".

    Science.gov (United States)

    Lieberman, Philip

    2017-07-01

    Monkey vocal tracts are capable of producing monkey speech, not the full range of articulate human speech. The evolution of human speech entailed both anatomy and brains. Fitch, de Boer, Mathur, and Ghazanfar in Science Advances claim that "monkey vocal tracts are speech-ready," and conclude that "…the evolution of human speech capabilities required neural change rather than modifications of vocal anatomy." Neither premise is consistent either with the data presented and the conclusions reached by de Boer and Fitch themselves in their own published papers on the role of anatomy in the evolution of human speech or with the body of independent studies published since the 1950s.

  20. Ultra low bit-rate speech coding

    CERN Document Server

    Ramasubramanian, V

    2015-01-01

    "Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.

  1. The Effectiveness of Clear Speech as a Masker

    Science.gov (United States)

    Calandruccio, Lauren; Van Engen, Kristin; Dhar, Sumitrajit; Bradlow, Ann R.

    2010-01-01

    Purpose: It is established that speaking clearly is an effective means of enhancing intelligibility. Because any signal-processing scheme modeled after known acoustic-phonetic features of clear speech will likely affect both target and competing speech, it is important to understand how speech recognition is affected when a competing speech signal…

  2. Speech Motor Development in Childhood Apraxia of Speech : Generating Testable Hypotheses by Neurocomputational Modeling

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  3. Speech motor development in childhood apraxia of speech: generating testable hypotheses by neurocomputational modeling.

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  4. Between-Word Simplification Patterns in the Continuous Speech of Children with Speech Sound Disorders

    Science.gov (United States)

    Klein, Harriet B.; Liu-Shea, May

    2009-01-01

    Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…

  5. Effects of Synthetic Speech Output on Requesting and Natural Speech Production in Children with Autism: A Preliminary Study

    Science.gov (United States)

    Schlosser, Ralf W.; Sigafoos, Jeff; Luiselli, James K.; Angermeier, Katie; Harasymowyz, Ulana; Schooley, Katherine; Belfiore, Phil J.

    2007-01-01

    Requesting is often taught as an initial target during augmentative and alternative communication intervention in children with autism. Speech-generating devices are purported to have advantages over non-electronic systems due to their synthetic speech output. On the other hand, it has been argued that speech output, being in the auditory…

  6. Speech auditory brainstem response (speech ABR) characteristics depending on recording conditions, and hearing status: an experimental parametric study.

    Science.gov (United States)

    Akhoun, Idrick; Moulin, Annie; Jeanvoine, Arnaud; Ménard, Mikael; Buret, François; Vollaire, Christian; Scorretti, Riccardo; Veuillet, Evelyne; Berger-Vachon, Christian; Collet, Lionel; Thai-Van, Hung

    2008-11-15

Speech-elicited auditory brainstem responses (speech ABR) have been shown to be an objective measurement of speech processing in the brainstem. Given the simultaneous stimulation and recording, and the similarities between the recording and the speech stimulus envelope, there is a great risk of artefactual recordings. This study sought to systematically investigate the source of artefactual contamination in the speech ABR response. In the first part, we measured the sound level thresholds over which artefactual responses were obtained, for different types of transducers and experimental setup parameters. A watermelon model was used to model the human head's susceptibility to electromagnetic artefact. It was found that impedances between the electrodes had a great effect on electromagnetic susceptibility and that the most prominent artefact is due to the transducer's electromagnetic leakage. The only artefact-free condition was obtained with insert earphones shielded in a Faraday cage linked to common ground. In the second part of the study, using the previously defined artefact-free condition, we recorded speech ABR in unilaterally deaf subjects and bilaterally normal-hearing subjects. In an additional control condition, speech ABR was recorded with the insert earphones used to deliver the stimulation unplugged from the ears, so that the subjects did not perceive the stimulus. No responses were obtained from the deaf ear of unilaterally hearing-impaired subjects, nor, in any of the subjects, in the insert-out-of-the-ear condition, showing that speech ABR reflects the functioning of the auditory pathways.

  7. The selective role of premotor cortex in speech perception: a contribution to phoneme judgements but not speech comprehension.

    Science.gov (United States)

    Krieger-Redwood, Katya; Gaskell, M Gareth; Lindsay, Shane; Jefferies, Elizabeth

    2013-12-01

Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes, across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus, and occipital pole) and, for the first time within the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to two further sites confirmed that this pattern was site specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, crucially, PMC is not necessary for mapping speech onto meanings.

  8. Acquirement and enhancement of remote speech signals

    Science.gov (United States)

    Lü, Tao; Guo, Jin; Zhang, He-yong; Yan, Chun-hui; Wang, Can-jin

    2017-07-01

To address the challenges of non-cooperative and remote acoustic detection, an all-fiber laser Doppler vibrometer (LDV) is established. The all-fiber LDV system offers the advantages of smaller size, lightweight design and robust structure, hence it is a better fit for remote speech detection. In order to improve the performance and efficiency of the LDV for long-range hearing, speech enhancement technology based on the optimally modified log-spectral amplitude (OM-LSA) algorithm is used. The experimental results show that comprehensible speech signals within a range of 150 m can be obtained by the proposed LDV. The signal-to-noise ratio (SNR) and mean opinion score (MOS) of the LDV speech signal can be increased by 100% and 27%, respectively, by using the speech enhancement technology. This all-fiber LDV, which combines the speech enhancement technology, can meet practical demands in engineering.
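
As a sketch of the gain-based enhancement family that OM-LSA belongs to, here is plain magnitude spectral subtraction, a much simpler relative (OM-LSA itself computes a log-spectral amplitude gain under a speech-presence probability model; the names and spectra below are illustrative):

```python
def spectral_subtraction_gains(noisy_mag, noise_mag, floor=0.1):
    """Per-bin gains for basic magnitude spectral subtraction; `floor`
    bounds attenuation to limit musical-noise artifacts."""
    return [max(1.0 - n / y, floor) if y > 0 else floor
            for y, n in zip(noisy_mag, noise_mag)]

def enhance(noisy_mag, noise_mag, floor=0.1):
    """Apply the gains to one frame of noisy magnitude spectrum."""
    gains = spectral_subtraction_gains(noisy_mag, noise_mag, floor)
    return [y * g for y, g in zip(noisy_mag, gains)]

# Three-bin toy frame: strong, moderate, and noise-dominated bins.
out = enhance([10.0, 2.0, 0.5], [1.0, 1.0, 1.0])  # ~[9.0, 1.0, 0.05]
```

Noise-dominated bins are attenuated to the floor rather than zeroed, which is the basic trade-off between residual noise and distortion that more refined estimators like OM-LSA manage adaptively.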

  9. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  10. Monkey Lipsmacking Develops Like the Human Speech Rhythm

    Science.gov (United States)

    Morrill, Ryan J.; Paukner, Annika; Ferrari, Pier F.; Ghazanfar, Asif A.

    2012-01-01

    Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved "de novo" in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial…

  11. Understanding the Linguistic Characteristics of the Great Speeches

    OpenAIRE

    Mouritzen, Kristian

    2016-01-01

This dissertation attempts to find the common traits of great speeches. It does so by closely examining the language of some of the most well-known speeches in the world. These speeches are presented in the book Speeches that Changed the World (2006) by Simon Sebag Montefiore. The dissertation specifically looks at four variables: the beginnings and endings of the speeches, the use of passive voice, the use of personal pronouns and the difficulty of the language. These four variables are based on...

  12. Speech spectrum envelope modeling

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Vondra, Martin

    Vol. 4775, - (2007), s. 129-137 ISSN 0302-9743. [COST Action 2102 International Workshop. Vietri sul Mare, 29.03.2007-31.03.2007] R&D Projects: GA AV ČR(CZ) 1ET301710509 Institutional research plan: CEZ:AV0Z20670512 Keywords : speech * speech processing * cepstral analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.302, year: 2005

  13. Speech emotion recognition methods: A literature review

    Science.gov (United States)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

Recently, attention to research on emotional speech signals has been boosted in human-machine interfaces due to the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selecting suitable feature sets, designing proper classification methods, and preparing an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy of use). In addition, this paper also evaluates the performance and limitations of available methods. Furthermore, it highlights promising current directions for the improvement of speech emotion recognition systems.
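
A minimal sketch of the pipeline such systems share (feature extraction followed by classification), using toy energy features and a nearest-centroid rule; everything here is illustrative, not a method from the survey:

```python
import math

def features(frames):
    """Toy utterance-level feature set: mean and variance of per-frame
    energy, stand-ins for the prosodic features (pitch, energy,
    duration) used in real systems."""
    e = [sum(v * v for v in f) / len(f) for f in frames]
    m = sum(e) / len(e)
    return (m, sum((x - m) ** 2 for x in e) / len(e))

def nearest_centroid(x, centroids):
    """Classify a feature vector by Euclidean distance to per-class
    centroids, given as {label: feature_tuple}."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

# Hypothetical trained centroids and one test utterance's features.
centroids = {"angry": (4.0, 2.0), "neutral": (1.0, 0.2)}
label = nearest_centroid((3.5, 1.8), centroids)  # "angry"
```

Real systems differ mainly in the two boxes this sketch stubs out: richer feature sets (MFCCs, pitch contours) and stronger classifiers (SVMs, neural networks), which is exactly the design space the survey organizes.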

  14. Speech neglect: A strange educational blind spot

    Science.gov (United States)

    Harris, Katherine Safford

    2005-09-01

    Speaking is universally acknowledged as an important human talent, yet as a topic of educated common knowledge, it is peculiarly neglected. Partly, this is a consequence of the relatively recent growth of research on speech perception, production, and development, but also a function of the way that information is sliced up by undergraduate colleges. Although the basic acoustic mechanism of vowel production was known to Helmholtz, the ability to view speech production as a physiological event is evolving even now with such techniques as fMRI. Intensive research on speech perception emerged only in the early 1930s as Fletcher and the engineers at Bell Telephone Laboratories developed the transmission of speech over telephone lines. The study of speech development was revolutionized by the papers of Eimas and his colleagues on speech perception in infants in the 1970s. Dissemination of knowledge in these fields is the responsibility of no single academic discipline. It forms a center for two departments, Linguistics, and Speech and Hearing, but in the former, there is a heavy emphasis on other aspects of language than speech and, in the latter, a focus on clinical practice. For psychologists, it is a rather minor component of a very diverse assembly of topics. I will focus on these three fields in proposing possible remedies.

  15. Automatic Speech Recognition from Neural Signals: A Focused Review

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2016-09-01

    Full Text Available Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible due to loud environments, the need not to disturb bystanders, or an inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but simply to envision oneself saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques used from neural signals, we discuss the Brain-to-text system.

  16. An evaluation of speech production in two boys with neurodevelopmental disorders who received communication intervention with a speech-generating device.

    Science.gov (United States)

    Roche, Laura; Sigafoos, Jeff; Lancioni, Giulio E; O'Reilly, Mark F; Schlosser, Ralf W; Stevens, Michelle; van der Meer, Larah; Achmadi, Donna; Kagohara, Debora; James, Ruth; Carnett, Amarie; Hodis, Flaviu; Green, Vanessa A; Sutherland, Dean; Lang, Russell; Rispoli, Mandy; Machalicek, Wendy; Marschik, Peter B

    2014-11-01

    Children with neurodevelopmental disorders often present with little or no speech. Augmentative and alternative communication (AAC) aims to promote functional communication using non-speech modes, but it might also influence natural speech production. To investigate this possibility, we provided AAC intervention to two boys with neurodevelopmental disorders and severe communication impairment. Intervention focused on teaching the boys to use a tablet computer-based speech-generating device (SGD) to request preferred stimuli. During SGD intervention, both boys began to utter relevant single words. In an effort to induce more speech, and investigate the relation between SGD availability and natural speech production, the SGD was removed during some requesting opportunities. With intervention, both participants learned to use the SGD to request preferred stimuli. After learning to use the SGD, both participants began to respond more frequently with natural speech when the SGD was removed. The results suggest that a rehabilitation program involving initial SGD intervention, followed by subsequent withdrawal of the SGD, might increase the frequency of natural speech production in some children with neurodevelopmental disorders. This effect could be an example of response generalization. Copyright © 2014 ISDN. Published by Elsevier Ltd. All rights reserved.

  17. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.
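Sine wave speech replaces the natural speech signal with a few sinusoids that track the formant frequencies. The sketch below, with made-up formant tracks, shows the basic synthesis idea: integrate each frequency track into a phase and sum the resulting sinusoids. Real SWS derives both the frequency and amplitude contours from an analyzed utterance.

```python
import numpy as np

SR = 16000           # sample rate in Hz
DUR = 0.5            # duration in seconds
t = np.arange(int(SR * DUR)) / SR

# Hypothetical formant tracks for a vowel transition (Hz); true SWS uses
# formant frequencies and amplitudes estimated from a natural utterance.
f1 = np.linspace(700, 300, t.size)    # F1 falling
f2 = np.linspace(1200, 2200, t.size)  # F2 rising
f3 = np.full(t.size, 2500.0)          # F3 roughly steady

def sinusoid(freq_track, sr=SR):
    """Sine wave whose instantaneous frequency follows freq_track."""
    phase = 2 * np.pi * np.cumsum(freq_track) / sr
    return np.sin(phase)

# Sum the three time-varying sinusoids; the amplitudes here are fixed,
# whereas true SWS also copies the formant amplitude contours.
sws = 1.0 * sinusoid(f1) + 0.5 * sinusoid(f2) + 0.25 * sinusoid(f3)
sws /= np.abs(sws).max()  # normalize to [-1, 1]
```

The result sounds like whistles to naïve listeners, yet becomes speech-like once one is told what it is, which is exactly the property the study exploits.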

  18. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    Science.gov (United States)

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT-typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use and interpret communicative acts with appropriate caution.

  19. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    A method for underdetermined blind source separation of convolutive mixtures is presented. The proposed framework is applicable to the separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...
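The record is truncated, but one core ingredient of this line of work (Pedersen et al. combine source separation with binary time-frequency masks) can be illustrated in isolation. The sketch below applies an *ideal* binary mask, computed from the known sources, to a single-channel mixture of two synthetic narrowband signals; the actual method must estimate such masks blindly, which is the hard part.

```python
import numpy as np

SR, N = 8000, 8000
t = np.arange(N) / SR

# Two synthetic "speech" stand-ins occupying different frequency bands.
s1 = np.sin(2 * np.pi * 300 * t)   # low-frequency source
s2 = np.sin(2 * np.pi * 1500 * t)  # high-frequency source
mix = s1 + s2                      # single-channel (underdetermined) mixture

def stft(x, win=256, hop=128):
    """Naive short-time Fourier transform with a Hann window."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

X, S1, S2 = stft(mix), stft(s1), stft(s2)

# Ideal binary mask: assign each time-frequency cell to the dominant source.
mask1 = np.abs(S1) > np.abs(S2)
est1 = X * mask1    # estimate of source 1 in the T-F domain
est2 = X * ~mask1   # estimate of source 2

# Since the sources have roughly equal energy in disjoint bands, masking
# should split the mixture energy approximately in half.
ratio = (np.abs(est1) ** 2).sum() / (np.abs(X) ** 2).sum()
```

With well-separated bands the mask recovers each source almost exactly; real speech mixtures overlap in time-frequency, which is why the published method iterates separation and masking.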

  20. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported stating that the linguistically reduced CDS could hinder first language acquisition.

  1. Speech-in-speech perception and executive function involvement.

    Directory of Open Access Journals (Sweden)

    Marcela Perrone-Bertolotti

    Full Text Available The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word, implemented in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words and simultaneously listen to only one of two pronounced sentences. The attention of the participant was manipulated: the prime was in the pronounced sentence listened to by the participant or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction effect between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the implication of executive functions in speech-in-noise understanding capacities.

  2. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Experiment 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  3. The interpersonal level in English: reported speech

    NARCIS (Netherlands)

    Keizer, E.

    2009-01-01

    The aim of this article is to describe and classify a number of different forms of English reported speech (or thought), and subsequently to analyze and represent them within the theory of FDG. First, the most prototypical forms of reported speech are discussed (direct and indirect speech);

  4. Cognitive Functions in Childhood Apraxia of Speech

    Science.gov (United States)

    Nijland, Lian; Terband, Hayo; Maassen, Ben

    2015-01-01

    Purpose: Childhood apraxia of speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional problems. Method: Cognitive functions were investigated…

  5. Age-Related Differences in Speech Rate Perception Do Not Necessarily Entail Age-Related Differences in Speech Rate Use

    Science.gov (United States)

    Heffner, Christopher C.; Newman, Rochelle S.; Dilley, Laura C.; Idsardi, William J.

    2015-01-01

    Purpose: A new literature has suggested that speech rate can influence the parsing of words quite strongly in speech. The purpose of this study was to investigate differences between younger adults and older adults in the use of context speech rate in word segmentation, given that older adults perceive timing information differently from younger…

  6. Primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Jung, Youngsin; Duffy, Joseph R; Josephs, Keith A

    2013-09-01

    Primary progressive aphasia is a neurodegenerative syndrome characterized by progressive language dysfunction. The majority of primary progressive aphasia cases can be classified into three subtypes: nonfluent/agrammatic, semantic, and logopenic variants. Each variant presents with unique clinical features, and is associated with distinctive underlying pathology and neuroimaging findings. Unlike primary progressive aphasia, apraxia of speech is a disorder that involves inaccurate production of sounds secondary to impaired planning or programming of speech movements. Primary progressive apraxia of speech is a neurodegenerative form of apraxia of speech, and it should be distinguished from primary progressive aphasia given its discrete clinicopathological presentation. Recently, there have been substantial advances in our understanding of these speech and language disorders. The clinical, neuroimaging, and histopathological features of primary progressive aphasia and apraxia of speech are reviewed in this article. The distinctions among these disorders for accurate diagnosis are increasingly important from a prognostic and therapeutic standpoint.

  7. Optimizing acoustical conditions for speech intelligibility in classrooms

    Science.gov (United States)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with…
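Reverberation times like those discussed in this record are commonly estimated at the design stage with Sabine's classical formula, RT60 = 0.161 · V / A, where V is the room volume and A the total absorption. The classroom dimensions and absorption coefficients below are illustrative values, not taken from the study.

```python
# Sabine's formula relates reverberation time to room volume and total
# absorption: RT60 = 0.161 * V / A, with V in m^3 and A in m^2 sabins.
def rt60_sabine(volume_m3, surface_absorptions):
    """surface_absorptions: list of (area_m2, absorption_coefficient)."""
    A = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / A

# Hypothetical classroom: 7 m x 9 m x 3 m with typical finishes
# (absorption coefficients are rough mid-frequency textbook values).
V = 7 * 9 * 3
surfaces = [
    (7 * 9, 0.70),            # acoustic ceiling tile
    (7 * 9, 0.05),            # linoleum floor
    (2 * (7 + 9) * 3, 0.03),  # painted walls
]
rt = rt60_sabine(V, surfaces)
```

For these illustrative values the estimate comes out near 0.6 s; in practice furnishings, occupants and air absorption would be added to A before comparing against targets like those reported in the study.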

  8. Internet Video Telephony Allows Speech Reading by Deaf Individuals and Improves Speech Perception by Cochlear Implant Users

    Science.gov (United States)

    Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal

    2013-01-01

    Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119

  9. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  10. The benefit obtained from visually displayed text from an automatic speech recognizer during listening to speech presented in noise

    NARCIS (Netherlands)

    Zekveld, A.A.; Kramer, S.E.; Kessens, J.M.; Vlaming, M.S.M.G.; Houtgast, T.

    2008-01-01

    OBJECTIVES: The aim of this study was to evaluate the benefit that listeners obtain from visually presented output from an automatic speech recognition (ASR) system during listening to speech in noise. DESIGN: Auditory-alone and audiovisual speech reception thresholds (SRTs) were measured. The SRT

  11. The Hierarchical Cortical Organization of Human Speech Processing.

    Science.gov (United States)

    de Heer, Wendy A; Huth, Alexander G; Griffiths, Thomas L; Gallant, Jack L; Theunissen, Frédéric E

    2017-07-05

    Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to
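The variance-partitioning logic described in this abstract reduces to simple arithmetic on R² values from nested models: the variance uniquely explained by feature space A is R²(A∪B) − R²(B), and the variance attributable to either space is R²(A) + R²(B) − R²(A∪B). The toy numpy sketch below demonstrates this with synthetic data, with ordinary least squares standing in for the study's cross-validated ridge regression.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 5

# Two feature spaces that share two latent dimensions (so part of the
# response variance is explainable by either space alone).
shared = rng.normal(size=(n, 2))
Xa = np.hstack([rng.normal(size=(n, p)), shared])
Xb = np.hstack([rng.normal(size=(n, p)), shared])

# Simulated voxel response driven by both spaces (unit weights) plus noise.
y = Xa.sum(axis=1) + Xb.sum(axis=1) + rng.normal(scale=0.5, size=n)

def r2(X, y):
    """In-sample R^2 of an OLS fit (stand-in for cross-validated ridge)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_a = r2(Xa, y)
r2_b = r2(Xb, y)
r2_ab = r2(np.hstack([Xa, Xb]), y)

unique_a = r2_ab - r2_b          # variance explained only by space A
unique_b = r2_ab - r2_a          # variance explained only by space B
shared_ab = r2_a + r2_b - r2_ab  # variance attributable to either space
```

In the fMRI setting each quantity is computed per voxel on held-out data, which is what lets the study map where spectral, articulatory, and semantic feature spaces each contribute unique variance.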

  12. Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology

    Science.gov (United States)

    2015-01-01

    Inner speech—also known as covert speech or verbal thinking—has been implicated in theories of cognitive development, speech monitoring, executive function, and psychopathology. Despite a growing body of knowledge on its phenomenology, development, and function, approaches to the scientific study of inner speech have remained diffuse and largely unintegrated. This review examines prominent theoretical approaches to inner speech and methodological challenges in its study, before reviewing current evidence on inner speech in children and adults from both typical and atypical populations. We conclude by considering prospects for an integrated cognitive science of inner speech, and present a multicomponent model of the phenomenon informed by developmental, cognitive, and psycholinguistic considerations. Despite its variability among individuals and across the life span, inner speech appears to perform significant functions in human cognition, which in some cases reflect its developmental origins and its sharing of resources with other cognitive processes. PMID:26011789

  13. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  14. Treating speech subsystems in childhood apraxia of speech with tactual input: the PROMPT approach.

    Science.gov (United States)

    Dale, Philip S; Hayden, Deborah A

    2013-11-01

    Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT; Hayden, 2004; Hayden, Eigen, Walker, & Olsen, 2010), a treatment approach for the improvement of speech sound disorders in children, uses tactile-kinesthetic-proprioceptive (TKP) cues to support and shape movements of the oral articulators. No research to date has systematically examined the efficacy of PROMPT for children with childhood apraxia of speech (CAS). Four children (ages 3;6 [years;months] to 4;8), all meeting the American Speech-Language-Hearing Association (2007) criteria for CAS, were treated using PROMPT. All children received 8 weeks of twice-weekly treatment, including at least 4 weeks of full PROMPT treatment that included TKP cues. During the first 4 weeks, 2 of the 4 children received treatment that included all PROMPT components except TKP cues. This design permitted both between-subjects and within-subjects comparisons to evaluate the effect of TKP cues. Gains in treatment were measured by standardized tests and by criterion-referenced measures based on the production of untreated probe words, reflecting change in speech movements and auditory perceptual accuracy. All 4 children made significant gains during treatment, but measures of motor speech control and untreated word probes provided evidence for more gain when TKP cues were included. PROMPT as a whole appears to be effective for treating children with CAS, and the inclusion of TKP cues appears to facilitate greater effect.

  15. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  16. Development and disorders of speech in childhood.

    Science.gov (United States)

    Karlin, Isaac W.; and others

    The growth, development, and abnormalities of speech in childhood are described in this text designed for pediatricians, psychologists, educators, medical students, therapists, pathologists, and parents. The normal development of speech and language is discussed, including theories on the origin of speech in man and factors influencing the normal…

  17. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  18. Describing Speech Usage in Daily Activities in Typical Adults.

    Science.gov (United States)

    Anderson, Laine; Baylor, Carolyn R; Eadie, Tanya L; Yorkston, Kathryn M

    2016-01-01

    "Speech usage" refers to what people want or need to do with their speech to meet communication demands in life roles. The purpose of this study was to contribute to validation of the Levels of Speech Usage scale by providing descriptive data from a sample of adults without communication disorders, comparing this scale to a published Occupational Voice Demands scale and examining predictors of speech usage levels. The study used a survey design. Adults aged ≥25 years without reported communication disorders were recruited nationally to complete an online questionnaire. The questionnaire included the Levels of Speech Usage scale, questions about relevant occupational and nonoccupational activities (eg, socializing, hobbies, childcare, and so forth), and demographic information. Participants were also categorized according to the Koufman and Isaacson occupational voice demands scale. A total of 276 participants completed the questionnaires. People who worked for pay tended to report higher levels of speech usage than those who did not work for pay. Regression analyses showed employment to be the major contributor to speech usage; however, considerable variance left unaccounted for suggests that determinants of speech usage and the relationship between speech usage, employment, and other life activities are not yet fully defined. The Levels of Speech Usage may be a viable instrument to systematically rate speech usage because it captures both occupational and nonoccupational speech demands. These data from a sample of typical adults may provide a reference to help in interpreting the impact of communication disorders on speech usage patterns. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. The Apraxia of Speech Rating Scale: a tool for diagnosis and description of apraxia of speech.

    Science.gov (United States)

    Strand, Edythe A; Duffy, Joseph R; Clark, Heather M; Josephs, Keith

    2014-01-01

    The purpose of this report is to describe an initial version of the Apraxia of Speech Rating Scale (ASRS), a scale designed to quantify the presence or absence, relative frequency, and severity of characteristics frequently associated with apraxia of speech (AOS). In this paper we report intra-judge and inter-judge reliability, as well as indices of validity, for the ASRS, which was completed for 133 adult participants with a neurodegenerative speech or language disorder, 56 of whom had AOS. The overall inter-judge ICC among three clinicians was 0.94 for the total ASRS score and 0.91 for the number of AOS characteristics identified as present. Intra-judge ICC measures were high, ranging from 0.91 to 0.98. Validity was demonstrated on the basis of strong correlations with independent clinical diagnosis, as well as strong correlations of ASRS scores with independent clinical judgments of AOS severity. Results suggest that the ASRS is a potentially useful tool for documenting the presence and severity of characteristics of AOS. At this point in its development it has good potential for broader clinical use and for better subject description in AOS research. Learning outcomes: (1) The reader will be able to explain characteristics of apraxia of speech. (2) The reader will be able to demonstrate use of a rating scale to document the presence and severity of speech characteristics. (3) The reader will be able to explain the reliability and validity of the ASRS. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. The chairman's speech

    International Nuclear Information System (INIS)

    Allen, A.M.

    1986-01-01

    The paper contains a transcript of a speech by the chairman of the UKAEA, to mark the publication of the 1985/6 annual report. The topics discussed in the speech include: the Chernobyl accident and its effect on public attitudes to nuclear power, management and disposal of radioactive waste, the operation of UKAEA as a trading fund, and the UKAEA development programmes. The development programmes include work on the following: fast reactor technology, thermal reactors, reactor safety, health and safety aspects of water cooled reactors, the Joint European Torus, and under-lying research. (U.K.)

  1. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets), for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same- as opposed to different-responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz- as opposed to az-responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same

  2. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results

  3. Spotlight on Speech Codes 2011: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2011

    2011-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a rigorous survey of restrictions on speech at America's colleges and universities. The survey and accompanying report explore the extent to which schools are meeting their legal and moral obligations to uphold students' and faculty members' rights to freedom of speech,…

  4. Spotlight on Speech Codes 2009: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2009

    2009-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a wide, detailed survey of restrictions on speech at America's colleges and universities. The survey and resulting report explore the extent to which schools are meeting their obligations to uphold students' and faculty members' rights to freedom of speech, freedom of…

  5. Spotlight on Speech Codes 2010: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2010

    2010-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a rigorous survey of restrictions on speech at America's colleges and universities. The survey and resulting report explore the extent to which schools are meeting their legal and moral obligations to uphold students' and faculty members' rights to freedom of speech,…

  6. A speech production model including the nasal Cavity

    DEFF Research Database (Denmark)

    Olesen, Morten

    In order to obtain articulatory analysis of speech production, the model is improved. The standard model, as used in LPC analysis, to a large extent models only the acoustic properties of the speech signal, as opposed to articulatory modelling of speech production. In spite of this, the LPC model is by far the most widely used model in speech technology.
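    The LPC model referred to above predicts each speech sample as a linear combination of the previous p samples. As an illustration (not code from the record itself), a minimal autocorrelation-method LPC estimator using the Levinson-Durbin recursion:

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns the predictor polynomial a (with a[0] == 1) and the
    final prediction-error energy."""
    n = len(signal)
    # Autocorrelation at lags 0..order
    r = np.array([signal[:n - k] @ signal[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i + 1):
            a[j] = a_prev[j] + k * a_prev[i - j]
        err *= 1.0 - k * k
    return a, err
```

A common rule of thumb for speech is an order of roughly the sampling rate in kHz plus 2 (e.g., order 10 for 8 kHz speech), which is enough to capture the main vocal-tract resonances.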

  7. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
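    Of the codebook-training algorithms named in the abstract, LBG is the most widely documented. A minimal sketch of LBG (binary splitting followed by k-means-style refinement) is given below; all names are illustrative and the codebook size is assumed to be a power of two:

```python
import numpy as np

def lbg_codebook(vectors, size, eps=1e-3, iters=20):
    """Train a VQ codebook by LBG binary splitting: start from the global
    mean, double the codebook by perturbing each codeword, then refine
    with k-means style updates."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # Split every codeword into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each training vector to its nearest codeword
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
            labels = d.argmin(axis=1)
            # Move each codeword to the centroid of its cell (skip empty cells)
            for k in range(len(codebook)):
                if np.any(labels == k):
                    codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each vector by the index of its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

Compression comes from transmitting log2(size) bits per vector (the codeword index) instead of the raw sample values.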

  8. Lip Movement Exaggerations during Infant-Directed Speech

    Science.gov (United States)

    Green, Jordan R.; Nip, Ignatius S. B.; Wilson, Erin M.; Mefferd, Antje S.; Yunusova, Yana

    2010-01-01

    Purpose: Although a growing body of literature has identified the positive effects of visual speech on speech and language learning, oral movements of infant-directed speech (IDS) have rarely been studied. This investigation used 3-dimensional motion capture technology to describe how mothers modify their lip movements when talking to their…

  9. 38 CFR 8.18 - Total disability-speech.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Total disability-speech... SERVICE LIFE INSURANCE Premium Waivers and Total Disability § 8.18 Total disability—speech. The organic loss of speech shall be deemed to be total disability under National Service Life Insurance. [67 FR...

  10. Normal Aspects of Speech, Hearing, and Language.

    Science.gov (United States)

    Minifie, Fred. D., Ed.; And Others

    This book is written as a guide to the understanding of the processes involved in human speech communication. Ten authorities contributed material to provide an introduction to the physiological aspects of speech production and reception, the acoustical aspects of speech production and transmission, the psychophysics of sound reception, the nature…

  11. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  12. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
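    The energy split at roughly 5 kHz described above can be quantified with a simple spectral measure. A sketch (illustrative, not the author's analysis code), assuming a mono signal `x` sampled at `fs` Hz:

```python
import numpy as np

def hf_energy_fraction(x, fs, cutoff=5000.0):
    """Fraction of the signal's spectral energy above `cutoff` Hz,
    computed from the one-sided FFT power spectrum."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return power[freqs > cutoff].sum() / power.sum()
```

For typical speech this fraction is small but nonzero, which is consistent with the finding that listeners can still extract quality, gender, and even intelligibility cues from the high-frequency band alone.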

  13. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    Directory of Open Access Journals (Sweden)

    Heracleous Panikos

    2007-01-01

    We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors which are attached behind the talker's ear and can capture not only normal (audible) speech but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise, and they might be used in special systems (speech recognition, speech transformation, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved, for a 20 k dictation task, a word accuracy for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone, with very promising results.

  14. Detection of target phonemes in spontaneous and read speech

    NARCIS (Netherlands)

    Mehta, G.; Cutler, A.

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ

  15. Processing changes when listening to foreign-accented speech

    Directory of Open Access Journals (Sweden)

    Carlos Romero-Rivas

    2015-03-01

    This study investigates the mechanisms responsible for fast changes in processing foreign-accented speech. Event-Related brain Potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension. This suggests that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher-level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity. During foreign-accented speech comprehension, semantic violations only elicited an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker.

  16. Gender and Speech in a Disney Princess Movie

    Directory of Open Access Journals (Sweden)

    Azmi N.J.

    2016-11-01

    One of the latest Disney princess movies is Frozen, which was released in 2013. Female characters in Frozen differ from the female characters in previous Disney movies, such as The Little Mermaid and Tangled. In comparison, female characters in Frozen are portrayed as having more heroic values and norms, which makes it interesting to examine their speech characteristics. Do they use typical female speech despite having more heroic characteristics? This paper aims to provide insights into the female speech characteristics in this movie based on Lakoff's (1975) model of female speech. Data analysis shows that female and male characters in the movie used an almost equal number of female speech elements in their dialogues. Interestingly, although female characters in the movie do not behave stereotypically, their speech still contains elements of female speech, such as the use of empty adjectives, questions, hedges and intensifiers. This paper argues that the blurring of boundaries between male and female speech characteristics in this movie is an attempt to break gender stereotyping by showing that female characters share similar characteristics with heroic male characters and thus should not be seen as inferior to them.

  17. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    Science.gov (United States)

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.

  18. Methods of analysis speech rate: a pilot study.

    Science.gov (United States)

    Costa, Luanna Maria Oliveira; Martins-Reis, Vanessa de Oliveira; Celeste, Letícia Côrrea

    2016-01-01

    To describe the performance of fluent adults in different measures of speech rate. The study included 24 fluent adults, of both genders, speakers of Brazilian Portuguese, who were born and still living in the metropolitan region of Belo Horizonte, state of Minas Gerais, aged between 18 and 59 years. Participants were grouped by age: G1 (18-29 years), G2 (30-39 years), G3 (40-49 years), and G4 (50-59 years). The speech samples were obtained following the methodology of the Speech Fluency Assessment Protocol. In addition to the measures of speech rate proposed by the protocol (speech rate in words and syllables per minute), the speech rate in phonemes per second and the articulation rate with and without the disfluencies were calculated. We used the nonparametric Friedman test and the Wilcoxon test for multiple comparisons. Groups were compared using the nonparametric Kruskal-Wallis test. The significance level was set at 5%. There were significant differences between measures of speech rate involving syllables. The multiple comparisons showed that all three measures were different. There was no effect of age for the studied measures. These findings corroborate previous studies. The inclusion of temporal acoustic measures such as speech rate in phonemes per second and articulation rates with and without disfluencies can be a complementary approach in the evaluation of speech rate.
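    The rate measures listed above are simple ratios once the counts and durations are available. A hypothetical helper illustrating them (the protocol's exact operational definitions may differ):

```python
def speech_rate_measures(n_words, n_syllables, n_phonemes,
                         total_time_s, disfluency_time_s=0.0):
    """Illustrative speech-rate measures: overall rates are computed over
    the whole sample, while the articulation rate excludes the time
    spent on disfluencies."""
    minutes = total_time_s / 60.0
    fluent_s = total_time_s - disfluency_time_s
    return {
        "words_per_min": n_words / minutes,
        "syllables_per_min": n_syllables / minutes,
        "phonemes_per_sec": n_phonemes / total_time_s,
        "articulation_rate_syll_per_sec": n_syllables / fluent_s,
    }
```

For example, a 60-second sample with 100 words, 180 syllables, 450 phonemes, and 10 seconds of disfluency gives 100 words/min, 180 syllables/min, 7.5 phonemes/s, and an articulation rate of 3.6 syllables/s.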

  19. Disturbance rejection performance analyses of closed loop control systems by reference to disturbance ratio.

    Science.gov (United States)

    Alagoz, Baris Baykant; Deniz, Furkan Nur; Keles, Cemal; Tan, Nusret

    2015-03-01

    This study investigates the disturbance rejection capacity of closed loop control systems by means of the reference to disturbance ratio (RDR). The RDR analysis calculates the ratio of reference signal energy to disturbance signal energy at the system output and provides a quantitative evaluation of the disturbance rejection performance of control systems on the basis of communication channel limitations. Essentially, RDR provides a straightforward analytical method for the comparison and improvement of the implicit disturbance rejection capacity of closed loop control systems. Theoretical analyses demonstrate that the RDR of negative feedback closed loop control systems is determined by the energy spectral density of the controller transfer function. On this basis, the authors derived design criteria for specifications of the disturbance rejection performance of PID and fractional order PID (FOPID) controller structures. RDR spectra are calculated for investigation of the frequency dependence of disturbance rejection capacity, and spectral RDR analyses are carried out for PID and FOPID controllers. For the validation of the theoretical results, simulation examples are presented. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
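    The abstract's key result is that, for a unity negative-feedback loop, the RDR spectrum is governed by the energy spectral density of the controller, i.e. RDR(w) = |C(jw)|^2. A sketch under that assumption for an ideal PID controller C(s) = kp + ki/s + kd*s (the gains below are illustrative):

```python
import numpy as np

def rdr_spectrum_pid(kp, ki, kd, omega):
    """RDR(w) = |C(jw)|^2 for an ideal PID controller C(s) = kp + ki/s + kd*s,
    using the unity-feedback result quoted in the abstract (an assumption
    of this sketch)."""
    jw = 1j * np.asarray(omega, dtype=float)
    c = kp + ki / jw + kd * jw
    return np.abs(c) ** 2

# Disturbance rejection is regarded as effective at frequencies where
# RDR > 1, i.e. where the reference dominates the disturbance at the output.
omega = np.logspace(-2, 2, 400)
rdr_db = 10.0 * np.log10(rdr_spectrum_pid(1.0, 0.5, 0.1, omega))
```

The integral term makes the RDR grow without bound at low frequencies, which matches the intuition that a PID loop rejects slow disturbances best.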

  20. Particularities of Speech Readiness for Schooling in Pre-School Children Having General Speech Underdevelopment: A Social and Pedagogical Aspect

    Science.gov (United States)

    Emelyanova, Irina A.; Borisova, Elena A.; Shapovalova, Olga E.; Karynbaeva, Olga V.; Vorotilkina, Irina M.

    2018-01-01

    The relevance of the research is due to the necessity of creating the pedagogical conditions for correction and development of speech in children having general speech underdevelopment. Such children characteristically have difficulty generating a coherent utterance, which prevents sufficient speech readiness for schooling from forming in them as well…

  1. Speech, Language, and Reading in 10-Year-Olds With Cleft: Associations With Teasing, Satisfaction With Speech, and Psychological Adjustment.

    Science.gov (United States)

    Feragen, Kristin Billaud; Særvold, Tone Kristin; Aukner, Ragnhild; Stock, Nicola Marie

    2017-03-01

    Objective: Despite the use of multidisciplinary services, little research has addressed issues involved in the care of those with cleft lip and/or palate across disciplines. The aim was to investigate associations between speech, language, reading, and reports of teasing, subjective satisfaction with speech, and psychological adjustment. Design: Cross-sectional data collected during routine, multidisciplinary assessments in a centralized treatment setting, including speech and language therapists and clinical psychologists. Participants: Children with cleft with palatal involvement aged 10 years from three birth cohorts (N = 170) and their parents. Main Outcome Measures: Speech: SVANTE-N. Language: Language 6-16 (sentence recall, serial recall, vocabulary, and phonological awareness). Reading: Word Chain Test and Reading Comprehension Test. Psychological measures: Strengths and Difficulties Questionnaire and extracts from the Satisfaction With Appearance Scale and Child Experience Questionnaire. Results: Reading skills were associated with self- and parent-reported psychological adjustment in the child. Subjective satisfaction with speech was associated with psychological adjustment, while not being consistently associated with speech therapists' assessments. Parent-reported teasing was found to be associated with lower levels of reading skills. Having a medical and/or psychological condition in addition to the cleft was found to affect speech, language, and reading significantly. Conclusions: Cleft teams need to be aware of speech, language, and/or reading problems as potential indicators of psychological risk in children with cleft. This study highlights the importance of multiple reports (self, parent, and specialist) and a multidisciplinary approach to cleft care and research.

  2. Robust digital processing of speech signals

    CERN Document Server

    Kovacevic, Branko; Veinović, Mladen; Marković, Milan

    2017-01-01

    This book focuses on speech signal phenomena, presenting a robustification of the usual speech generation models with regard to the presumed types of excitation signals, which is equivalent to the introduction of a class of nonlinear models and the corresponding criterion functions for parameter estimation. Compared to the general class of nonlinear models, such as various neural networks, these models possess good properties of controlled complexity, the option of working in “online” mode, as well as a low information volume for efficient speech encoding and transmission. Providing comprehensive insights, the book is based on the authors’ research, which has already been published, supplemented by additional texts discussing general considerations of speech modeling, linear predictive analysis and robust parameter estimation.

  3. Ultrasound applicability in Speech Language Pathology and Audiology

    OpenAIRE

    Barberena,Luciana da Silva; Brasil,Brunah de Castro; Melo,Roberta Michelon; Mezzomo,Carolina Lisbôa; Mota,Helena Bolli; Keske-Soares,Márcia

    2014-01-01

    PURPOSE: To present recent studies that used the ultrasound in the fields of Speech Language Pathology and Audiology, which evidence possibilities of the applicability of this technique in different subareas. RESEARCH STRATEGY: A bibliographic research was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and...

  4. Ultrasound applicability in Speech Language Pathology and Audiology

    OpenAIRE

    Barberena, Luciana da Silva; Brasil, Brunah de Castro; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli; Keske-Soares, Márcia

    2014-01-01

    PURPOSE: To present recent studies that used the ultrasound in the fields of Speech Language Pathology and Audiology, which evidence possibilities of the applicability of this technique in different subareas. RESEARCH STRATEGY: A bibliographic research was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Patholog...

  5. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained
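    The SRT procedure of Plomp and Mimpen referenced above is adaptive: the presentation SNR moves down after a correct response and up after an error, and the SRT is taken as the average SNR once the track has converged. A simplified, hypothetical sketch (the step size, discard count, and scoring rules of the actual test differ in detail):

```python
def adaptive_snr_track(start_snr, responses, step=2.0):
    """SNR sequence of a simple 1-up/1-down track: lower the SNR by `step` dB
    after a correct trial, raise it after an error (a simplification of the
    Plomp & Mimpen procedure)."""
    snrs = [start_snr]
    for correct in responses:
        snrs.append(snrs[-1] - step if correct else snrs[-1] + step)
    return snrs

def srt(snrs, discard=4):
    """Estimate the SRT as the mean presentation SNR after the initial
    approach phase of the track."""
    tail = snrs[discard:]
    return sum(tail) / len(tail)
```

The audiovisual benefit reported in the study is then simply the auditory SRT minus the audiovisual SRT, in dB: a positive value means the subtitles let listeners tolerate a worse SNR.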

  6. Speech Prosody in Cerebellar Ataxia

    Science.gov (United States)

    Casper, Maureen A.; Raphael, Lawrence J.; Harris, Katherine S.; Geibel, Jennifer M.

    2007-01-01

    Persons with cerebellar ataxia exhibit changes in physical coordination and speech and voice production. Previously, these alterations of speech and voice production were described primarily via perceptual coordinates. In this study, the spatial-temporal properties of syllable production were examined in 12 speakers, six of whom were healthy…

  7. Acquired apraxia of speech: features, accounts, and treatment.

    Science.gov (United States)

    Peach, Richard K

    2004-01-01

    The features of apraxia of speech (AOS) are presented with regard to both traditional and contemporary descriptions of the disorder. Models of speech processing, including the neurological bases for apraxia of speech, are discussed. Recent findings concerning subcortical contributions to apraxia of speech and the role of the insula are presented. The key features to differentially diagnose AOS from related speech syndromes are identified. Treatment implications derived from motor accounts of AOS are presented along with a summary of current approaches designed to treat the various subcomponents of the disorder. Finally, guidelines are provided for treating the AOS patient with coexisting aphasia.

  8. Speech enhancement in the Karhunen-Loeve expansion domain

    CERN Document Server

    Benesty, Jacob

    2011-01-01

    This book is devoted to the study of the problem of speech enhancement whose objective is the recovery of a signal of interest (i.e., speech) from noisy observations. Typically, the recovery process is accomplished by passing the noisy observations through a linear filter (or a linear transformation). Since both the desired speech and undesired noise are filtered at the same time, the most critical issue of speech enhancement resides in how to design a proper optimal filter that can fully take advantage of the difference between the speech and noise statistics to mitigate the noise effect as m
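    The Karhunen-Loeve-domain filters developed in the book are beyond a short excerpt, but the trade-off it describes (the filter attenuates noise and desired speech simultaneously) is already visible in the classic frequency-domain Wiener gain, sketched here purely as an illustration:

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Per-bin Wiener-style gain 1 - N/X, clipped to [floor, 1]: bins
    dominated by noise are attenuated (down to the floor, which limits
    musical-noise artifacts), speech-dominated bins pass nearly unchanged."""
    gain = 1.0 - noise_psd / np.maximum(noisy_psd, 1e-12)
    return np.clip(gain, floor, 1.0)
```

Applying such a gain to the noisy short-time spectrum and inverting the transform yields the enhanced signal; the book's contribution is choosing the transform domain and the optimality criterion more carefully than this sketch does.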

  9. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  10. Speech recognition using articulatory and excitation source features

    CERN Document Server

    Rao, K Sreenivasa

    2017-01-01

    This book discusses the contribution of articulatory and excitation source information in discriminating sound units. The authors focus on excitation source component of speech -- and the dynamics of various articulators during speech production -- for enhancement of speech recognition (SR) performance. Speech recognition is analyzed for read, extempore, and conversation modes of speech. Five groups of articulatory features (AFs) are explored for speech recognition, in addition to conventional spectral features. Each chapter provides the motivation for exploring the specific feature for SR task, discusses the methods to extract those features, and finally suggests appropriate models to capture the sound unit specific knowledge from the proposed features. The authors close by discussing various combinations of spectral, articulatory and source features, and the desired models to enhance the performance of SR systems.

  11. Detection of target phonemes in spontaneous and read speech

    OpenAIRE

    Mehta, G.; Cutler, A.

    1988-01-01

Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In this study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonem…

  12. Effects of human fatigue on speech signals

    Science.gov (United States)

    Stamoulis, Catherine

    2004-05-01

    Cognitive performance may be significantly affected by fatigue. In the case of critical personnel, such as pilots, monitoring human fatigue is essential to ensure safety and success of a given operation. One of the modalities that may be used for this purpose is speech, which is sensitive to respiratory changes and increased muscle tension of vocal cords, induced by fatigue. Age, gender, vocal tract length, physical and emotional state may significantly alter speech intensity, duration, rhythm, and spectral characteristics. In addition to changes in speech rhythm, fatigue may also affect the quality of speech, such as articulation. In a noisy environment, detecting fatigue-related changes in speech signals, particularly subtle changes at the onset of fatigue, may be difficult. Therefore, in a performance-monitoring system, speech parameters which are significantly affected by fatigue need to be identified and extracted from input signals. For this purpose, a series of experiments was performed under slowly varying cognitive load conditions and at different times of the day. The results of the data analysis are presented here.

  13. [Language disorders in a right frontal lesion in a right-handed patient. Incoherent speech and extravagant paraphasias. Neuropsychologic study].

    Science.gov (United States)

    Guard, O; Fournet, F; Sautreaux, J L; Dumas, R

    1983-01-01

Clinical, neuropsychological, and CT scan data are reported in a patient with a right prefrontal hematoma following meningeal hemorrhage due to the rupture of an aneurysm of the anterior communicating artery. Over a period of six weeks, before and after surgery, the patient presented a particular type of language disorder characterized by incoherent speech, verbal paraphasias (either unexpected or driven by perseveration of ideas), emphatic and affected terms, and an inability to give brief responses, particularly in naming tests. Contrasting with the absurdity of the discourse, the preserved oral comprehension, the absence of grammatical disorders, and the intact phonemic and phonetic organization provided evidence of the integrity of the linguistic code. The purely semantic disturbance, however, was the cause of the apparent alteration in reasoning and judgment. A major amnestic syndrome was also present; it improved concomitantly with the language disorders. The explanation proposed is that of a disturbance of attention and word-selection processes due to the prefrontal lesion.

  14. A Motor Speech Assessment for Children with Severe Speech Disorders: Reliability and Validity Evidence

    Science.gov (United States)

    Strand, Edythe A.; McCauley, Rebecca J.; Weigand, Stephen D.; Stoeckel, Ruth E.; Baas, Becky S.

    2013-01-01

    Purpose: In this article, the authors report reliability and validity evidence for the Dynamic Evaluation of Motor Speech Skill (DEMSS), a new test that uses dynamic assessment to aid in the differential diagnosis of childhood apraxia of speech (CAS). Method: Participants were 81 children between 36 and 79 months of age who were referred to the…

  15. Objective voice and speech analysis of persons with chronic hoarseness by prosodic analysis of speech samples.

    Science.gov (United States)

    Haderlein, Tino; Döllinger, Michael; Matoušek, Václav; Nöth, Elmar

    2016-10-01

    Automatic voice assessment is often performed using sustained vowels. In contrast, speech analysis of read-out texts can be applied to voice and speech assessment. Automatic speech recognition and prosodic analysis were used to find regression formulae between automatic and perceptual assessment of four voice and four speech criteria. The regression was trained with 21 men and 62 women (average age 49.2 years) and tested with another set of 24 men and 49 women (48.3 years), all suffering from chronic hoarseness. They read the text 'Der Nordwind und die Sonne' ('The North Wind and the Sun'). Five voice and speech therapists evaluated the data on 5-point Likert scales. Ten prosodic and recognition accuracy measures (features) were identified which describe all the examined criteria. Inter-rater correlation within the expert group was between r = 0.63 for the criterion 'match of breath and sense units' and r = 0.87 for the overall voice quality. Human-machine correlation was between r = 0.40 for the match of breath and sense units and r = 0.82 for intelligibility. The perceptual ratings of different criteria were highly correlated with each other. Likewise, the feature sets modeling the criteria were very similar. The automatic method is suitable for assessing chronic hoarseness in general and for subgroups of functional and organic dysphonia. In its current version, it is almost as reliable as a randomly picked rater from a group of voice and speech therapists.
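The human-machine agreement figures quoted in this abstract are Pearson correlations; as a minimal, self-contained sketch (the ratings below are invented for illustration, not data from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equally long rating sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point Likert ratings: one human rater vs. an automatic score
human = [1, 2, 2, 3, 4, 5]
machine = [1, 1, 2, 3, 5, 5]
r = pearson_r(human, machine)
```

A value of r near 0.8, as reported for intelligibility, means the automatic score tracks the perceptual ratings about as closely as raters track each other.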

  16. Memory for speech and speech for memory.

    Science.gov (United States)

    Locke, J L; Kutz, K J

    1975-03-01

Thirty kindergarteners, 15 who substituted /w/ for /r/ and 15 with correct articulation, received two perception tests and a memory test that included /w/ and /r/ in minimally contrastive syllables. Although both groups had nearly perfect perception of the experimenter's productions of /w/ and /r/, misarticulating subjects perceived their own tape-recorded w/r productions as /w/. In the memory task these same misarticulating subjects committed significantly more /w/-/r/ confusions in unspoken recall. The discussion considers why people subvocally rehearse; a developmental period in which children do not rehearse; ways subvocalization may aid recall, including motor and acoustic encoding; an echoic store that provides additional recall support if subjects rehearse vocally, and perception of self- and other-produced phonemes by misarticulating children, including its relevance to a motor theory of perception. Evidence is presented that speech for memory can be sufficiently impaired to cause memory disorder. Conceptions that restrict speech disorder to an impairment of communication are challenged.

  17. Disturbance recording system

    International Nuclear Information System (INIS)

    Chandra, A.K.; Deshpande, S.V.; Mayya, A.; Vaidya, U.W.; Premraj, M.K.; Patil, N.B.

    1994-01-01

A computerized system for disturbance monitoring, recording and display has been developed for use in nuclear power plants and is versatile enough to be used wherever a large number of parameters need to be recorded, e.g. conventional power plants, the chemical industry, etc. The Disturbance Recording System (DRS) has been designed to continuously monitor a process plant and record crucial parameters. The DRS provides a centralized facility to monitor and continuously record 64 process parameters scanned every 1 sec for 5 days. The system also provides a facility for storage of 64 parameters scanned every 200 msec during 2 minutes prior to and 3 minutes after a disturbance. In addition, the system can initiate, on demand, the recording of 8 parameters at a fast rate of every 5 msec for a period of 5 sec and thus act as a visicorder. All this data is recorded in non-volatile memory and can be displayed, printed/plotted and used for subsequent analysis. Since data can be stored densely on floppy disks, the volume of space required for archival storage is also low. As a disturbance recorder, the DRS allows the operator to view the state of the plant prior to occurrence of the disturbance and helps in identifying the root cause. (author). 10 refs., 7 figs
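The pre-disturbance capture described above (2 minutes before and 3 minutes after a trigger) is classically implemented with a rolling pre-trigger buffer. A toy sketch, with buffer sizes scaled down for illustration:

```python
from collections import deque

class DisturbanceRecorder:
    """Sketch of pre/post-trigger capture: keep a rolling window of the last
    `pre` samples; on a trigger, freeze that history and record `post` more.
    Buffer sizes here are toy values, not the plant's actual 2 min / 3 min."""

    def __init__(self, pre, post):
        self.history = deque(maxlen=pre)  # rolling pre-trigger window
        self.post = post
        self.capture = None
        self.remaining = 0

    def feed(self, sample, triggered=False):
        if self.remaining:                      # still filling post-trigger part
            self.capture.append(sample)
            self.remaining -= 1
        elif triggered and self.capture is None:
            self.capture = list(self.history)   # pre-trigger snapshot
            self.capture.append(sample)
            self.remaining = self.post - 1
        self.history.append(sample)

rec = DisturbanceRecorder(pre=3, post=2)
for t in range(10):
    rec.feed(t, triggered=(t == 5))
```

After the trigger at t=5, `rec.capture` holds the three samples before the disturbance plus the two recorded after it, which is exactly what lets an operator view plant state prior to the event.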

  18. Speech Training for Inmate Rehabilitation.

    Science.gov (United States)

    Parkinson, Michael G.; Dobkins, David H.

    1982-01-01

    Using a computerized content analysis, the authors demonstrate changes in speech behaviors of prison inmates. They conclude that two to four hours of public speaking training can have only limited effect on students who live in a culture in which "prison speech" is the expected and rewarded form of behavior. (PD)

  19. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

Some of the history of the gradual infusion of the modulation spectrum concept into automatic speech recognition (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next, the frequency ...

  20. Impact of speech-generating devices on the language development of a child with childhood apraxia of speech: a case study.

    Science.gov (United States)

    Lüke, Carina

    2016-01-01

The purpose of the study was to evaluate the effectiveness of speech-generating devices (SGDs) on the communication and language development of a 2-year-old boy with severe childhood apraxia of speech (CAS). An A-B design was used over a treatment period of 1 year, followed by three additional follow-up measurements, in order to evaluate the implementation of SGDs in the speech therapy of a 2;7-year-old boy with severe CAS. In total, 53 therapy sessions were videotaped and analyzed to better understand his communicative (operationalized as means of communication) and linguistic (operationalized as intelligibility and consistency of speech productions, lexical and grammatical development) development. The trend-lines of baseline phase A and intervention phase B were compared and the percentage of non-overlapping data points was calculated to verify the value of the intervention. The use of SGDs led to an immediate increase in the communicative development of the child. An increase in all linguistic variables was observed, with a latency effect of eight to nine treatment sessions. The implementation of SGDs in speech therapy has the potential to be highly effective with regard to both communicative and linguistic competencies in young children with severe CAS. Implications for Rehabilitation: Childhood apraxia of speech (CAS) is a neurological speech sound disorder which results in significant deficits in speech production and leads to a higher risk of language, reading and spelling difficulties. Speech-generating devices (SGD), as one method of augmentative and alternative communication (AAC), can effectively enhance the communicative and linguistic development of children with severe CAS.

  1. Theoretical Value in Teaching Freedom of Speech.

    Science.gov (United States)

    Carney, John J., Jr.

    The exercise of freedom of speech within our nation has deteriorated. A practical value in teaching free speech is the possibility of restoring a commitment to its principles by educators. What must be taught is why freedom of speech is important, why it has been compromised, and the extent to which it has been compromised. Every technological…

  2. Speech profile of patients undergoing primary palatoplasty.

    Science.gov (United States)

    Menegueti, Katia Ignacio; Mangilli, Laura Davison; Alonso, Nivaldo; Andrade, Claudia Regina Furquim de

    2017-10-26

To characterize the profile and speech characteristics of patients undergoing primary palatoplasty in a Brazilian university hospital, considering the time of intervention (early, before two years of age; late, after two years of age). Participants were 97 patients of both genders with cleft palate and/or cleft lip and palate, assigned to the Speech-language Pathology Department, who had been submitted to primary palatoplasty and presented no prior history of speech-language therapy. Patients were divided into two groups: early intervention group (EIG) - 43 patients undergoing primary palatoplasty before 2 years of age and late intervention group (LIG) - 54 patients undergoing primary palatoplasty after 2 years of age. All patients underwent speech-language pathology assessment. The following parameters were assessed: resonance classification, presence of nasal turbulence, presence of weak intraoral air pressure, presence of audible nasal air emission, speech understandability, and compensatory articulation disorder (CAD). At a statistical significance level of 5% (p≤0.05), no significant difference was observed between the groups in the following parameters: resonance classification (p=0.067); level of hypernasality (p=0.113), presence of nasal turbulence (p=0.179); presence of weak intraoral air pressure (p=0.152); presence of nasal air emission (p=0.369), and speech understandability (p=0.113). The groups differed with respect to presence of compensatory articulation disorders (p=0.020), with the LIG presenting higher occurrence of altered phonemes. It was possible to assess the general profile and speech characteristics of the study participants. Patients submitted to early primary palatoplasty present better speech profile.

  3. Non-fluent speech following stroke is caused by impaired efference copy.

    Science.gov (United States)

    Feenaughty, Lynda; Basilakos, Alexandra; Bonilha, Leonardo; den Ouden, Dirk-Bart; Rorden, Chris; Stark, Brielle; Fridriksson, Julius

    2017-09-01

Efference copy is a cognitive mechanism argued to be critical for initiating and monitoring speech; however, the extent to which breakdown of efference copy mechanisms impacts speech production is unclear. This study examined the best mechanistic predictors of non-fluent speech among 88 stroke survivors. Objective speech fluency measures were subjected to a principal component analysis (PCA). The primary PCA factor was then entered into a multiple stepwise linear regression analysis as the dependent variable, with a set of independent mechanistic variables. Participants' ability to mimic audio-visual speech ("speech entrainment response") was the best independent predictor of non-fluent speech. We suggest that this "speech entrainment" factor reflects integrity of internal monitoring (i.e., efference copy) of speech production, which affects speech initiation and maintenance. Results support models of normal speech production and suggest that therapy focused on speech initiation and maintenance may improve speech fluency for individuals with chronic non-fluent aphasia post stroke.
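Reducing several objective fluency measures to a single factor, as the PCA step here does, can be sketched in pure Python via power iteration on the covariance matrix (the data and "measures" below are fabricated for illustration):

```python
# Minimal sketch of extracting a first principal component from a set of
# correlated measures, via power iteration on the sample covariance matrix.

def first_pc(data, iters=200):
    """Return the dominant covariance eigenvector of `data` (rows = subjects)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two strongly correlated "fluency measures": the first PC should weight
# them almost equally, giving one composite fluency factor.
data = [[1, 1.1], [2, 1.9], [3, 3.2], [4, 3.9], [5, 5.1]]
pc = first_pc(data)
```

The resulting unit vector defines the composite factor; projecting each subject's measures onto it yields the single fluency score that the regression analysis can then try to predict.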

  4. Stability and composition of functional synergies for speech movements in children with developmental speech disorders

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.; van Lieshout, P.; Nijland, L.

    2011-01-01

The aim of this study was to investigate the consistency and composition of functional synergies for speech movements in children with developmental speech disorders. Kinematic data were collected on the reiterated productions of syllables spa (/spa:/) and paas (/pa:s/) by 10 6- to 9-year-olds with…

  5. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that…

  6. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

Antje Heinrich

    2015-06-01

Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on…

  7. Speech to Text Software Evaluation Report

    CERN Document Server

    Martins Santo, Ana Luisa

    2017-01-01

This document compares the out-of-box performance of three commercially available speech recognition software packages: Vocapia VoxSigma™, Google Cloud Speech, and Limecraft Transcriber. A set of evaluation criteria and test methods for speech recognition software is defined. The evaluation of these packages in noisy environments is also included for testing purposes. Recognition accuracy was compared across noisy environments and languages. Testing in an "ideal" non-noisy environment (a quiet room) was also performed for comparison.

  8. The Influence of Direct and Indirect Speech on Source Memory

    Directory of Open Access Journals (Sweden)

    Anita Eerland

    2018-02-01

Full Text Available People perceive the same situation described in direct speech (e.g., John said, “I like the food at this restaurant”) as more vivid and perceptually engaging than described in indirect speech (e.g., John said that he likes the food at the restaurant). So, if direct speech enhances the perception of vividness relative to indirect speech, what are the effects of using indirect speech? In four experiments, we examined whether the use of direct and indirect speech influences the comprehender’s memory for the identity of the speaker. Participants read a direct or an indirect speech version of a story and then addressed statements to one of the four protagonists of the story in a memory task. We found better source memory at the level of protagonist gender after indirect than direct speech (Exp. 1–3). When the story was rewritten to make the protagonists more distinctive, we also found an effect of speech type on source memory at the level of the individual, with better memory after indirect than direct speech (Exp. 3–4). Memory for the content of the story, however, was not influenced by speech type (Exp. 4). While previous research showed that direct speech may enhance memory for how something was said, we conclude that indirect speech enhances memory for who said what.

  9. Recognizing intentions in infant-directed speech: evidence for universals.

    Science.gov (United States)

    Bryant, Gregory A; Barrett, H Clark

    2007-08-01

    In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.

  10. 75 FR 67333 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-11-02

    ... Docket No. 10-191; FCC 10-161] Telecommunications Relay Services and Speech-to-Speech Services for...-Based Telecommunications Relay Service Numbering AGENCY: Federal Communications Commission. ACTION... Internet-based Telecommunications Relay Service (iTRS), specifically, Video Relay Service (VRS) and IP...

  11. 75 FR 29914 - Telecommunications Relay Services, Speech-to-Speech Services, E911 Requirements for IP-Enabled...

    Science.gov (United States)

    2010-05-28

... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 64 [CG Docket No. 03-123; WC Docket No. 05-196; FCC 08-275] Telecommunications Relay Services, Speech-to-Speech Services, E911 Requirements for IP... with the Commission's Telecommunications Relay Services...

  12. Out-of-synchrony speech entrainment in developmental dyslexia.

    Science.gov (United States)

    Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel

    2016-08-01

Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.
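Speech-brain entrainment of the kind measured here is often quantified as a stable phase relation between the speech envelope and neural oscillations in a narrow band. A toy single-frequency illustration (the signals, sampling rate, and lag below are synthetic, not MEG data; real analyses use band-limited phase-locking statistics across many trials):

```python
import cmath
import math

def dft_coefficient(signal, freq_hz, fs):
    """Single-frequency DFT coefficient of a real signal sampled at fs Hz."""
    n = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * freq_hz * k / fs)
               for k, x in enumerate(signal)) / n

fs = 50                                        # toy sampling rate, Hz
times = [i / fs for i in range(fs * 10)]       # 10 s of "recording"
envelope = [math.cos(2 * math.pi * 0.8 * x) for x in times]        # 0.8 Hz speech envelope
neural = [math.cos(2 * math.pi * 0.8 * x - 0.5) for x in times]    # entrained, phase-lagged

c_env = dft_coefficient(envelope, 0.8, fs)
c_neu = dft_coefficient(neural, 0.8, fs)
phase_lag = cmath.phase(c_neu / c_env)         # a constant lag indicates entrainment
```

A consistent, narrow distribution of such phase lags across sentences indicates entrainment; in dyslexic readers the delta-band relation is reported to be weaker and less stable.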

  13. Ultrasound applicability in Speech Language Pathology and Audiology.

    Science.gov (United States)

    Barberena, Luciana da Silva; Brasil, Brunah de Castro; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli; Keske-Soares, Márcia

    2014-01-01

To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which demonstrate possible applications of this technique in different subareas. A bibliographic research was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and Audiology Sciences. The keywords "ultrasound," "ultrasonography," "swallow," "orofacial myofunctional therapy," and "orofacial myology" were also used in the search. Studies in humans from the past 5 years were selected. In the preselection, duplicated studies, articles not fully available, and those that did not present a direct relation between ultrasound and Speech Language Pathology and Audiology Sciences were discarded. The data were analyzed descriptively and classified into subareas of Speech Language Pathology and Audiology Sciences. The following items were considered: purposes, participants, procedures, and results. We selected 12 articles for the ultrasound versus speech/phonetics subarea, 5 for ultrasound versus voice, 1 for ultrasound versus muscles of mastication, and 10 for ultrasound versus swallow. Studies relating "ultrasound" and "Speech Language Pathology and Audiology Sciences" in the past 5 years were not found. Different studies on the use of ultrasound in Speech Language Pathology and Audiology Sciences were found. Each of them, according to its purpose, confirms new possibilities of the use of this instrument in the several subareas, aiming at a more accurate diagnosis and new evaluative and therapeutic possibilities.

  14. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    Directory of Open Access Journals (Sweden)

    Hiroshi Saruwatari

    2007-01-01

Full Text Available We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved a 93.9% word accuracy on a 20k dictation task for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone with very promising results.

  15. Attention mechanisms and the mosaic evolution of speech

    Directory of Open Access Journals (Sweden)

    Pedro Tiago Martins

    2014-12-01

    Full Text Available There is still no categorical answer for why humans, and no other species, have speech, or why speech is the way it is. Several purely anatomical arguments have been put forward, but they have been shown to be false, biologically implausible, or of limited scope. This perspective paper supports the idea that evolutionary theories of speech could benefit from a focus on the cognitive mechanisms that make speech possible, for which antecedents in evolutionary history and brain correlates can be found. This type of approach is part of a very recent, but rapidly growing, tradition, which has provided crucial insights on the nature of human speech by focusing on the biological bases of vocal learning. Here, we call attention to what might be an important ingredient for speech. We contend that a general mechanism of attention, which manifests itself not only in the visual but also the auditory (and possibly other) modalities, might be one of the key pieces of human speech, in addition to the mechanisms underlying vocal learning and the pairing of facial gestures with vocalic units.

  16. Speech recognition systems on the Cell Broadband Engine

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Jones, H; Vaidya, S; Perrone, M; Tydlitat, B; Nanda, A

    2007-04-20

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time, a channel density that is orders of magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.
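    The data-parallel character of the recognition pipeline noted in this abstract can be illustrated with a small sketch: acoustic emission scoring under diagonal-covariance Gaussians reduces to dense array arithmetic over all frames and states at once, which is exactly the kind of workload that vector-streaming hardware accelerates. This is an illustrative NumPy sketch with synthetic data, not the paper's Cell/B.E. implementation; the function name and shapes are assumptions.

    ```python
    import numpy as np

    def emission_loglik(frames, means, variances):
        """Log-likelihood of every frame under every HMM state's
        diagonal-covariance Gaussian: frames (T, D), means/variances
        (S, D) -> scores (T, S). One broadcast expression, no per-frame
        loop, so the whole computation is data-parallel."""
        diff = frames[:, None, :] - means[None, :, :]          # (T, S, D)
        return -0.5 * np.sum(
            np.log(2.0 * np.pi * variances) + diff**2 / variances,
            axis=-1,
        )

    # Synthetic workload: 100 feature frames, 39-dim features, 8 states.
    rng = np.random.default_rng(0)
    scores = emission_loglik(rng.normal(size=(100, 39)),
                             rng.normal(size=(8, 39)),
                             np.ones((8, 39)))
    print(scores.shape)  # (100, 8)
    ```

    A real decoder would feed these scores into a Viterbi search; the point here is only that the scoring step vectorizes cleanly across channels, frames, and states.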

  17. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    to business speeches. Consistent with the public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences...... between Steve Jobs and Mark Zuckerberg and the investor- and customer-related sections of their speeches support the modern understanding of charisma as a gradual, multiparametric, and context-sensitive concept....

  18. Defining Disturbance for Microbial Ecology.

    Science.gov (United States)

    Plante, Craig J

    2017-08-01

    Disturbance can profoundly modify the structure of natural communities. However, microbial ecologists' concept of "disturbance" has often deviated from conventional practice. Definitions (or implicit usage) have frequently included climate change and other forms of chronic environmental stress, which contradict the macrobiologist's notion of disturbance as a discrete event that removes biomass. Physical constraints and disparate biological characteristics were compared to ask whether disturbances fundamentally differ in microbial and macroorganismal communities. A definition of "disturbance" for microbial ecologists is proposed that distinguishes it from "stress" and other competing terms, and that is in accord with definitions accepted by plant and animal ecologists.

  19. Longitudinal Study of Speech Perception, Speech, and Language for Children with Hearing Loss in an Auditory-Verbal Therapy Program

    Science.gov (United States)

    Dornan, Dimity; Hickson, Louise; Murdoch, Bruce; Houston, Todd

    2009-01-01

    This study examined the speech perception, speech, and language developmental progress of 25 children with hearing loss (mean Pure-Tone Average [PTA] 79.37 dB HL) in an auditory verbal therapy program. Children were tested initially and then 21 months later on a battery of assessments. The speech and language results over time were compared with…

  20. Measures to Evaluate the Effects of DBS on Speech Production

    Science.gov (United States)

    Weismer, Gary; Yunusova, Yana; Bunton, Kate

    2011-01-01

    The purpose of this paper is to review and evaluate measures of speech production that could be used to document effects of Deep Brain Stimulation (DBS) on speech performance, especially in persons with Parkinson disease (PD). A small set of evaluative criteria for these measures is presented first, followed by consideration of several speech physiology and speech acoustic measures that have been studied frequently and reported on in the literature on normal speech production, and speech production affected by neuromotor disorders (dysarthria). Each measure is reviewed and evaluated against the evaluative criteria. Embedded within this review and evaluation is a presentation of new data relating speech motions to speech intelligibility measures in speakers with PD, amyotrophic lateral sclerosis (ALS), and control speakers (CS). These data are used to support the conclusion that at the present time the slope of second formant transitions (F2 slope), an acoustic measure, is well suited to make inferences to speech motion and to predict speech intelligibility. The use of other measures should not be ruled out, however, and we encourage further development of evaluative criteria for speech measures designed to probe the effects of DBS or any treatment with potential effects on speech production and communication skills. PMID:24932066