WorldWideScience

Sample records for audiometry speech

  1. [Speech audiometry, speech perception and cognitive functions. German version].

    Science.gov (United States)

    Meister, H

    2017-03-01

    Examination of cognitive functions in the framework of speech perception has recently gained increasing scientific and clinical interest. Especially against the background of age-related hearing impairment and cognitive decline, potential new perspectives in terms of better individualisation of auditory diagnosis and rehabilitation might arise. This review addresses the relationships of speech audiometry, speech perception and cognitive functions. It presents models of speech perception, discusses associations of neuropsychological and audiometric outcomes, and shows recent efforts to consider cognitive functions with speech audiometry.

  2. Speech audiometry, speech perception, and cognitive functions : English version.

    Science.gov (United States)

    Meister, H

    2017-01-01

    Examination of cognitive functions in the framework of speech perception has recently gained increasing scientific and clinical interest. Especially against the background of age-related hearing impairment and cognitive decline, potential new perspectives in terms of a better individualization of auditory diagnosis and rehabilitation might arise. This review addresses the relationships between speech audiometry, speech perception, and cognitive functions. It presents models of speech perception, discusses associations of neuropsychological and audiometric outcomes, and shows examples of recent efforts undertaken in Germany to consider cognitive functions with speech audiometry.

  3. [Speech audiometry in expert assessment of hearing impairment].

    Science.gov (United States)

    Batsoulis, C; Lesinski-Schiedat, A

    2017-03-01

    In the expert assessment of hearing impairment, the medical expert has to verify its causality and quantify its severity as a percentage hearing loss. Based on the determined percentage hearing loss, the degree of impairment/disability or, in the case of work-related noise-induced hearing loss, the reduction in earning capacity is estimated. In Germany, the guideline for the expert assessment of work-related noise-induced hearing loss is the Königstein Guideline; currently, the 5th edition from 2012 is used. Here, quantification of hearing loss depends mainly on the results of speech audiometry: based on the Freiburg speech test, the percentage hearing loss is determined using approved tables. For patients with a mild hearing loss, typically characterized by a high-frequency hearing loss, tone audiometry results are additionally consulted. Speech-in-noise tests are available and are frequently used to measure the benefit of hearing systems; they allow detection of these patients' hearing impairment, which generally manifests in noisy environments. First suggestions for a table to determine percentage hearing loss in noise are available, and experimental studies have shown that tests in quiet other than the Freiburg speech test can be used with the same tables. In this article, the current use of speech audiometry for expert assessment is presented, and options for using further developed speech test material are discussed.

  4. [The use of speech audiometry in the practice of the geriatric center].

    Science.gov (United States)

    Boboshko, M Yu; Zhilinskaya, E V; Golovanova, L E; Legostaeva, T V; Di Berardino, F; Cesarani, A

    2016-01-01

    The aim of the study was to evaluate a new speech audiometry test in aged patients. Thirty-two listeners aged 60 to 88 years were examined: 20 hearing aid (HA) users and 12 patients with normal hearing thresholds and mild cognitive impairment according to the mini-mental state examination (MMSE). Speech audiometry consisted of the traditional polysyllabic word discrimination test and a new speech test with motor responses (Verbal Tasks and Motor Responses, VTMR); in both tests the signal was presented in background noise (polyphony) in the free field. All listeners performed the VTMR test significantly better than the polysyllabic word discrimination test. In the group of hearing-impaired patients, the mean VTMR score was 73.2±29.2% without HA and 88.6±20.5% with HA, versus 34.8±20.9% and 56±18.4%, respectively, in the traditional test. All patients in the group with normal hearing and mild cognitive impairment scored 100% on the VTMR test; their speech discrimination score in the traditional test was 88±12%. In geriatric-center practice, the combined use of traditional speech audiometry and the new speech test with motor responses appears reasonable, as it allows examination of auditory function in patients with significant deterioration of speech intelligibility or cognitive impairment.

  5. VTMR, a new speech audiometry test with verbal tasks and motor responses.

    Science.gov (United States)

    Di Berardino, Federica; Forti, Stella; Cesarani, Antonio

    2012-04-01

    The aim of this study was to design a complementary speech audiometry test using verbal tasks and motor responses (VTMR) to assess the ability of a subject to understand and perform simple motor tasks with 3-dimensional objects, to describe its construction, and to show the preliminary results of a pilot study on the Italian version of the test. The items used in the test setting included 1 base, 1 hammer, 1 wooden structure with 4 sticks, and 5 rings of different colors, plus 20 lists with 5 verbal tasks per list. The VTMR test and bisyllabic speech audiometry were evaluated in normal-hearing subjects with and without cognitive impairment and in subjects with sensorineural hearing loss. All normal-hearing subjects without cognitive impairment performed the VTMR tasks correctly (100%) at 35 dB sound pressure level. In subjects with sensorineural hearing loss, the percentage of correct answers was significantly higher for the VTMR test than for bisyllabic speech audiometry above 50 dB sound pressure level. The percentage was also higher for the VTMR in normal-hearing subjects with poor cognitive skills. The VTMR might make it easier to check patients' ability to understand verbal commands than traditional speech audiometry does, particularly in patients with poor test-taking skills.

  6. Validating self-reporting of hearing-related symptoms against pure-tone audiometry, otoacoustic emission, and speech audiometry.

    Science.gov (United States)

    Fredriksson, Sofie; Hammar, Oscar; Magnusson, Lennart; Kähäri, Kim; Persson Waye, Kerstin

    2016-08-01

    To validate self-reported hearing-related symptoms among personnel exposed to moderately high occupational noise levels at an obstetrics clinic. Sensitivity, specificity, and predictive values were calculated for questionnaire items assessing hearing loss, tinnitus, sound sensitivity, poor hearing, difficulty perceiving speech, and sound-induced auditory fatigue. Hearing disorder was diagnosed by pure-tone audiometry, distortion product otoacoustic emissions, and the Hearing In Noise Test (HINT). Fifty-five female obstetrics personnel aged 22-63 participated, including 26 subjects reporting hearing loss, poor hearing, tinnitus, or sound sensitivity, and 29 randomly selected subjects who did not report these symptoms. The questionnaire item assessing sound-induced auditory fatigue had the best combination of sensitivity ≥85% (95% CI 56 to 100%) and specificity ≥70% (95% CI 55 to 84%) for hearing disorder diagnosed by audiometry or otoacoustic emissions. Of those reporting sound-induced auditory fatigue, 71% were predicted to have a disorder diagnosed by otoacoustic emissions. Participants reporting any hearing-related symptom had slightly worse measured hearing. We suggest including sound-induced auditory fatigue in questionnaires for identification of hearing disorder among healthcare personnel, though larger studies are warranted for precise estimates of diagnostic performance. Also, more specific and accurate hearing tests are needed to diagnose mild hearing disorder.
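
    The sensitivity, specificity, and predictive values reported above all derive from a 2×2 table of questionnaire response versus diagnosed hearing disorder. A minimal sketch of these calculations in Python (the counts below are invented for illustration, not taken from the study):

    ```python
    # Diagnostic metrics from a 2x2 confusion table (hypothetical counts).
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)  # symptomatic among the truly disordered
        specificity = tn / (tn + fp)  # asymptomatic among the truly normal
        ppv = tp / (tp + fp)          # positive predictive value
        npv = tn / (tn + fn)          # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Hypothetical example: 12 true positives, 5 false positives,
    # 2 false negatives, 36 true negatives.
    sens, spec, ppv, npv = diagnostic_metrics(12, 5, 2, 36)
    print(f"sensitivity={sens:.0%} specificity={spec:.0%} PPV={ppv:.0%} NPV={npv:.0%}")
    ```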

  7. Speech audiometry findings from HIV+ and HIV- adults in the MACS and WIHS longitudinal cohort studies.

    Science.gov (United States)

    Torre, Peter; Hoffman, Howard J; Springer, Gayle; Cox, Christopher; Young, Mary A; Margolick, Joseph B; Plankey, Michael

    The purpose of this study was to compare various speech audiometry measures between HIV+ and HIV- adults and to further evaluate the association between speech audiometry and HIV disease variables in HIV+ adults only. Three hundred ninety-six adults from the Multicenter AIDS Cohort Study (MACS) and Women's Interagency HIV Study (WIHS) completed speech audiometry testing. There were 262 men, of whom 117 (44.7%) were HIV+, and 134 women, of whom 105 (78.4%) were HIV+. Speech audiometry was conducted as part of the standard clinical audiological evaluation that included otoscopy, tympanometry, and pure-tone air- and bone-conduction thresholds. Specific speech audiometry measures included speech recognition thresholds (SRT) and word recognition scores in quiet presented at 40 dB sensation level (SL) in reference to the SRT. SRT data were categorized in 5-dB steps from 0 to 25 dB hearing level (HL), with one category as ≥30 dB HL, while word recognition scores were categorized as <90%, 90-99%, and 100%. A generalized estimating equations model was used to evaluate the association between HIV status and both ordinal outcomes. The SRT distributions across HIV+ and HIV- adults were similar. HIV+ and HIV- adults had similar percentages of word recognition scores <90%; a lower percentage of HIV- adults had scores of 90-99%, but HIV- adults had a higher percentage of 100% scores. After adjusting for covariables, HIV+ adults were borderline significantly more likely to have a higher SRT than HIV- adults (odds ratio [OR]=1.45, p=0.06). Among HIV+ adults, HIV-related variables (i.e., CD4+ T-cell counts, HIV viral load, and history of clinical AIDS) were not significantly associated with either SRT or word recognition score data. There was, however, a ceiling effect for word recognition scores, probably the result of obtaining this measure in quiet at a relatively high presentation level. A more complex listening task, such as speech-in-noise testing, may be more clinically informative…

  8. Speech audiometry in Estonia: Estonian words in noise (EWIN) test.

    Science.gov (United States)

    Veispak, Anneli; Jansen, Sofie; Ghesquière, Pol; Wouters, Jan

    2015-08-01

    Currently, there is no up-to-date speech perception test available in the Estonian language that may be used to diagnose hearing loss and quantify speech intelligibility. Therefore, based on the example of the Nederlandse Vereniging voor Audiologie (NVA) lists (Bosman, 1989; Wouters et al., 1994), an Estonian words in noise (EWIN) test has been developed. Two experimental steps were carried out: (1) selection and perceptual optimization of the monosyllables, and (2) construction of 14 lists and an evaluation in normal-hearing (NH) subjects, both in noise and in quiet. Thirty-six NH native speakers of Estonian (age range 17 to 46 years) participated. The reference psychometric curve for NH subjects was determined, with the slope and speech reception threshold being well in accordance with the respective values of the NVA lists. The 14 lists in noise yielded equivalent scores with high precision. The EWIN test is a reliable and valid speech intelligibility test, and is the first of its kind in the Estonian language.

  9. Phoneme and Word Scoring in Speech-in-Noise Audiometry.

    Science.gov (United States)

    Billings, Curtis J; Penman, Tina M; Ellis, Emily M; Baltzell, Lucas S; McMillan, Garnett P

    2016-03-01

    Understanding speech in background noise is difficult for many individuals; however, time constraints have limited its inclusion in the clinical audiology assessment battery. Phoneme scoring of words has been suggested as a method of reducing test time and variability. The purposes of this study were to establish a phoneme scoring rubric and use it in testing phoneme and word perception in noise in older individuals and individuals with hearing impairment. Words were presented to 3 participant groups at 80 dB in speech-shaped noise at 7 signal-to-noise ratios (-10 to 35 dB). Responses were scored for words and phonemes correct. It was not surprising to find that phoneme scores were up to about 30% better than word scores. Word scoring resulted in larger hearing loss effect sizes than phoneme scoring, whereas scoring method did not significantly modify age effect sizes. There were significant effects of hearing loss and some limited effects of age; age effect sizes of about 3 dB and hearing loss effect sizes of more than 10 dB were found. Hearing loss is the major factor affecting word and phoneme recognition with a subtle contribution of age. Phoneme scoring may provide several advantages over word scoring. A set of recommended phoneme scoring guidelines is provided.
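
    The difference between word and phoneme scoring is easy to see in code. A minimal Python sketch (the rubric here is a simplification of the one the study developed, and the CVC items and responses are invented):

    ```python
    # Score a listener's response against a CVC target at both word and
    # phoneme level: the word is correct only if all phonemes match, while
    # each matching phoneme earns partial credit.
    def score_response(target, response):
        word_correct = target == response
        phonemes_correct = sum(t == r for t, r in zip(target, response))
        return word_correct, phonemes_correct

    targets = [("k", "ae", "t"), ("d", "o", "g")]    # hypothetical CVC items
    responses = [("k", "ae", "t"), ("d", "o", "k")]  # listener's repetitions

    words = sum(score_response(t, r)[0] for t, r in zip(targets, responses))
    phonemes = sum(score_response(t, r)[1] for t, r in zip(targets, responses))
    print(f"word score: {words / len(targets):.0%}")              # 50%
    print(f"phoneme score: {phonemes / (3 * len(targets)):.0%}")  # 83%
    ```

    As the example shows, a single phoneme error fails the whole word but costs only one third of the phoneme credit, which is why phoneme scores run systematically higher than word scores.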

  10. [The speech audiometry using the matrix sentence test].

    Science.gov (United States)

    Boboshko, M Yu; Zhilinskaia, E V; Warzybok, A; Maltseva, N V; Zokoll, M; Kollmeier, B

    The matrix sentence test, in which five-word semantically unpredictable sentences presented under background noise conditions are used as the speech material, has been designed and validated for many languages. The objective of the present study was to evaluate the Russian version of the matrix sentence test (RuMatrix test) in listeners of different ages with normal hearing. At the first stage of the study, 35 listeners aged 18 to 33 years were examined. The results of the estimation of the training effect dictated the necessity of conducting two training tracks before carrying out the RuMatrix test proper. The signal-to-noise ratio at which 50% speech recognition was obtained (SRT50) was found to be -8.8±0.8 dB SNR. A significant effect of the background noise level was demonstrated: noise levels of 80 and 75 dB SPL led to considerably lower intelligibility than noise levels in the range from 45 to 70 dB SPL; in the subsequent studies, a noise level of 65 dB SPL was used. The high test-retest reliability of the RuMatrix test was proved. At the second stage of the study, 20 young (20-40 years old) listeners and 20 aged (62-74 years old) listeners were examined. The mean SRT50 in the aged patients was found to be -6.9±1.1 dB SNR, which was much worse than the mean SRT50 in the young subjects (-8.7±0.9 dB SNR). It is concluded that, bearing in mind the excellent comparability of the results of the RuMatrix test across different languages, it can be used as a universal tool in international research projects.
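
    Matrix tests estimate SRT50 by adapting the signal-to-noise ratio from sentence to sentence. A rough Python sketch of such an adaptive track (the logistic listener model and the step rule are simplifications for illustration, not the actual RuMatrix procedure; the parameter values are assumptions):

    ```python
    import math
    import random

    def p_word_correct(snr_db, srt50=-8.8, slope=0.15):
        # Logistic psychometric function; slope is intelligibility gain per dB at SRT50.
        return 1.0 / (1.0 + math.exp(-4 * slope * (snr_db - srt50)))

    def run_track(n_sentences=30, snr=0.0, step=2.0, words_per_sentence=5):
        levels = []
        for _ in range(n_sentences):
            correct = sum(random.random() < p_word_correct(snr)
                          for _ in range(words_per_sentence))
            # More than half the words correct -> make it harder (lower SNR),
            # fewer -> easier; the proportional step keeps the track near 50%.
            snr -= step * (correct - words_per_sentence / 2) / (words_per_sentence / 2)
            levels.append(snr)
        return sum(levels[-10:]) / 10  # average late trials as the SRT50 estimate

    print(f"estimated SRT50: {run_track():.1f} dB SNR")  # converges near -8.8
    ```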

  11. Effects of chronic noise exposure on speech-in-noise perception in the presence of normal audiometry.

    Science.gov (United States)

    Hope, A J; Luxon, L M; Bamiou, D-E

    2013-03-01

    To assess auditory processing in noise-exposed subjects with normal audiograms and compare the findings with those of non-noise-exposed normal controls. Ten noise-exposed Royal Air Force aircrew pilots were compared with 10 Royal Air Force administrators who had no history of noise exposure. Participants were matched in terms of age and sex. The subjects were assessed using pure tone audiometry, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions in contralateral noise, and auditory processing task performance (i.e. masking, frequency discrimination, auditory attention and speech-in-noise). All subjects had normal pure tone audiometry and transient evoked otoacoustic emission amplitudes in both ears. The noise-exposed aircrew had pure tone audiometry thresholds similar to controls, but right-ear transient evoked otoacoustic emissions were larger and speech-in-noise thresholds were elevated in the noise-exposed subjects compared to controls. The finding of poorer speech-in-noise perception may reflect noise-related impairment of auditory processing in retrocochlear pathways. Audiometry may not detect early, significant noise-induced hearing impairment.

  12. [Implementation of the new quality assurance agreement for the fitting of hearing aids in daily practice. Part 2: New diagnostic aspects of speech audiometry].

    Science.gov (United States)

    Löhler, J; Akcicek, B; Wollenberg, B; Schönweiler, R

    2014-09-01

    Upon review of the statutory health insurance reimbursement guidelines, a specific quality assurance questionnaire concerned with the provision of hearing aids was introduced that assesses elements of patient satisfaction within Germany's public healthcare system. APHAB questionnaire-based patient evaluation of the benefit of hearing aids represents the third pillar of audiological diagnostics, alongside classical pure-tone and speech audiometry. Another new aspect of the national guidelines is inclusion of free-field measurements in noise with and without hearing aids. Part 2 of this review describes new diagnostic aspects of speech audiometry. In addition to adaptive speech audiometry, a proposed method for applying the gold standard of speech audiometry - the Freiburg monosyllabic speech test - in noise is described. Finally, the quality assurance questionnaire will be explained as an appendix to template 15 of the regulations governing hearing aids.

  13. Validation of the French-language version of the OTOSPEECH automated scoring software package for speech audiometry.

    Science.gov (United States)

    Venail, F; Legris, E; Vaerenberg, B; Puel, J-L; Govaerts, P J; Ceccato, J C

    2016-04-01

    To validate a novel speech audiometry method using customized self-voice recorded word lists with automated scoring. The self-voice effect was investigated by comparing results with prerecorded or self-recorded CVC (consonant-vowel-consonant) word lists. Then customized lists of 3-phoneme words were drawn up using the OTOSPEECH software package, and their scores were compared to those for reference lists. Finally, the customized list scores were compared on automated (Dynamic Time Warping [DTW]) versus manual scoring. Self-voice did not change scores for perception of CVC words at 10, 20 and 30 dB (ANOVA, P>0.05). Scores obtained with pre-recorded and self-recorded lists correlated (n=10, R²=0.76, P<0.05) … Automated speech audiometry displayed results similar to conventional audiometric techniques. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
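
    Dynamic time warping, which the abstract names as the automated scoring method, aligns two sequences of unequal length by minimizing cumulative local distance. A bare-bones Python sketch (real scoring would operate on acoustic features such as MFCCs rather than the toy 1-D sequences used here, and the nearest-template decision rule is an assumption, not OTOSPEECH's documented behaviour):

    ```python
    # Classic O(n*m) dynamic time warping distance between two sequences.
    def dtw_distance(a, b):
        n, m = len(a), len(b)
        inf = float("inf")
        d = [[inf] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])        # local frame distance
                d[i][j] = cost + min(d[i - 1][j],      # insertion
                                     d[i][j - 1],      # deletion
                                     d[i - 1][j - 1])  # match
        return d[n][m]

    # A response could be scored against word templates by nearest DTW distance.
    target = [1, 2, 3, 2, 1]
    print(dtw_distance(target, [1, 1, 2, 3, 2, 1]))  # small: similar utterance
    print(dtw_distance(target, [5, 5, 4, 4, 5]))     # large: dissimilar utterance
    ```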

  14. Speech audiometry and data logging in CI patients : Implications for adequate test levels.

    Science.gov (United States)

    Hey, M; Hocke, T; Ambrosch, P

    2017-11-08

    As part of postoperative cochlear implant (CI) diagnostics, speech comprehension tests are performed to monitor audiological outcome. In recent years, a trend toward improved suprathreshold speech intelligibility in quiet and an extension of intelligibility to softer sounds has been observed. Parallel to audiometric data, analysis of the patients' acoustic environment can take place by means of data logging in modern CI systems. Which speech test levels reflect the individual listening environment in a relevant manner and how can these be reflected in a clinical audiometric setting? In a retrospective analysis, data logs of 263 adult CI patients were evaluated for sound level and the listening situation (quiet, speech in quiet, noise, speech in noise, music, and wind). Additionally, monosyllabic word comprehension in quiet was analyzed in experienced CI users at presentation levels of 40-80 dB. For the sound level in the acoustic environment of postlingually deafened adult CI users, data logging shows a maximum occurrence of speech signals in the range of 50-59 dB. This demonstrates the relevance of everyday speech comprehension at levels below 60 dB. Individual optimization of speech intelligibility with a CI speech processor should not be performed in the range of 65-70 dB only, but also at lower levels. Measurements at 50 dB currently seem to be a useful addition.

  15. [Speech audiometry and data logging in CI patients : Implications for adequate test levels. German version].

    Science.gov (United States)

    Hey, M; Hocke, T; Ambrosch, P

    2017-10-06

    As part of postoperative cochlear implant (CI) diagnostics, speech comprehension tests are performed to monitor audiological outcome. In recent years, a trend toward improved suprathreshold speech intelligibility in quiet and an extension of intelligibility to softer sounds has been observed. Parallel to audiometric data, analysis of the patients' acoustic environment can take place by means of data logging in modern CI systems. Which test levels reflect the individual listening environment in a relevant manner and how can these be reflected in a clinical audiometric setting? In a retrospective analysis, data logs of 263 adult CI patients were evaluated for sound level and the listening situation (quiet, speech in quiet, noise, speech in noise, music, and wind). Additionally, monosyllabic word comprehension in quiet was analyzed in experienced CI users at presentation levels of 40-80 dB. For the sound level in the acoustic environment of postlingually deafened adult CI users, data logging shows a maximum occurrence of speech signals in the range 50-59 dB. This demonstrates the relevance of everyday speech comprehension at levels below 60 dB. Individual optimization of speech intelligibility with a CI speech processor should not be performed in the range of 65-70 dB only, but also at lower levels. Measurements at 50 dB currently seem to be a useful addition.

  16. [The age effect in evaluation of hearing aid benefits by speech audiometry].

    Science.gov (United States)

    Müller, A; Hocke, T; Hoppe, U; Mir-Salim, P

    2016-03-01

    Hearing loss is one of the most common disabilities in the elderly. The aim of this study was to investigate the relationship between pure-tone hearing loss, maximum monosyllabic perception, and speech perception with hearing aids. The focus of the investigation was elderly patients. In this prospective study, 188 patients with sensorineural hearing loss were included. The pure-tone audiogram (4FPTA), the Freiburg speech intelligibility test with headphones, and the word recognition score with hearing aids at 65 dB SPL were measured and evaluated. Increasing age was associated with a higher discrepancy between maximum speech perception and speech understanding with hearing aids. The mean difference between maximum monosyllabic perception and speech perception with hearing aids was about 20% in the elderly population. The intended goal of hearing aid prescription, a match between maximum monosyllabic perception and the aided word recognition score within 5-10%, was not achieved in the elderly population.

  17. Noise audiometry.

    Science.gov (United States)

    1971-01-01

    The displacement of a threshold from its measured-in-the-quiet value to the value it takes in the presence of another sound is masking. Measurement of that displacement is masking audiometry. And the measurement of displacements at a large number of ...

  18. Audiometry screening and interpretation.

    Science.gov (United States)

    Walker, Jennifer Junnila; Cleveland, Leanne M; Davis, Jenny L; Seales, Jennifer S

    2013-01-01

    The prevalence of hearing loss varies with age, affecting at least 25 percent of patients older than 50 years and more than 50 percent of those older than 80 years. Adolescents and young adults represent groups in which the prevalence of hearing loss is increasing and may therefore benefit from screening. If offered, screening can be performed periodically by asking the patient or family if there are perceived hearing problems, or by using clinical office tests such as whispered voice, finger rub, or audiometry. Audiometry in the family medicine clinic setting is a relatively simple procedure that can be interpreted by a trained health care professional. Pure-tone testing presents tones across the speech spectrum (500 to 4,000 Hz) to determine if the patient's hearing levels fall within normal limits. A quiet testing environment, calibrated audiometric equipment, and appropriately trained personnel are required for in-office testing. Pure-tone audiometry may help physicians appropriately refer patients to an audiologist or otolaryngologist. Unilateral or asymmetrical hearing loss can be symptomatic of a central nervous system lesion and requires additional evaluation.

  19. [Conventional audiometry versus cochlear microphonic audiometry].

    Science.gov (United States)

    Sanjuán Juaristi, Julio

    2007-04-01

    The recording and processing of cochlear microphonic potentials in hearing studies is currently in the definitive validation phase against results obtained with other objective procedures. The purpose of this work is to contribute to that validation. The equipment used was exclusively designed for recording cochlear microphonic potentials. The study was carried out in adults to compare subjective audiometric results with those obtained from cochlear microphonics. We present a statistical concordance study between subjective audiometry and cochlear microphonic audiometry. In view of the results obtained, this method is particularly valid for early diagnosis: we obtained a profile identical to the subjective audiogram at the audiometric frequencies of 250, 500, 1000, 2000, and 4000 Hz.

  20. Audiometry screening and interpretation

    National Research Council Canada - National Science Library

    Walker, Jennifer Junnila; Cleveland, Leanne M; Davis, Jenny L; Seales, Jennifer S

    2013-01-01

    .... If offered, screening can be performed periodically by asking the patient or family if there are perceived hearing problems, or by using clinical office tests such as whispered voice, finger rub, or audiometry...

  1. A web-based audiometry database system

    OpenAIRE

    Yeh, Chung-Hui; Wei, Sung-Tai; Chen, Tsung-Wen; Wang, Ching-Yuang; Tsai, Ming-Hsui; Lin, Chia-Der

    2014-01-01

    To establish a real-time, web-based, customized audiometry database system, we worked in cooperation with the departments of medical records, information technology, and otorhinolaryngology at our hospital. This system includes an audiometry data entry system, retrieval and display system, patient information incorporation system, audiometry data transmission program, and audiometry data integration. Compared with commercial audiometry systems and traditional hand-drawn audiometry data, this ...

  2. High-frequency audiometry: a means for early diagnosis of noise-induced hearing loss.

    Science.gov (United States)

    Mehrparvar, Amir H; Mirmohammadi, Seyyed J; Ghoreyshi, Abbas; Mollasadeghi, Abolfazl; Loukzadeh, Ziba

    2011-01-01

    Noise-induced hearing loss (NIHL), an irreversible disorder, is a common problem in industrial settings. Early diagnosis of NIHL can help prevent the progression of hearing loss, especially in the speech frequencies. For early diagnosis of NIHL, audiometry is routinely performed in the conventional frequencies. We designed this study to compare the effect of noise on high-frequency audiometry (HFA) and conventional audiometry. In a historical cohort study, we compared hearing thresholds and the prevalence of hearing loss in conventional and high audiometric frequencies among textile workers divided into two groups: with and without exposure to noise above 85 dB. The highest hearing threshold was observed at 4000 Hz, 6000 Hz and 16000 Hz in conventional right-ear audiometry, conventional left-ear audiometry, and HFA in each ear, respectively. The hearing threshold was significantly higher at 16000 Hz than at 4000 Hz. Hearing loss was more common in HFA than in conventional audiometry. HFA is more sensitive than conventional audiometry in detecting NIHL. It can be useful for early diagnosis of hearing sensitivity to noise, and thus for preventing hearing loss at lower frequencies, especially the speech frequencies.

  3. Occupational hearing loss: tonal audiometry X high frequencies audiometry

    OpenAIRE

    Lauris, José Roberto Pereira; Basso, Talita Costa; Marinelli, Érica Juliana Innocenti; Otubo, Karina Aki; Lopes, Andréa Cintra

    2009-01-01

    Introduction: Studies on occupational exposure show that noise reaches a large part of the working population around the world, and NIHL (noise-induced hearing loss) is the second most frequent disease of the hearing system. Objective: To review the audiometry results of employees at the campus of the University of São Paulo, Bauru. Method: 40 audiometry results obtained between 2007 and 2008 were analyzed, from workers aged between 32 and 59 years, of both sexes and several professio...

  4. Occupational hearing loss: tonal audiometry X high frequencies audiometry

    Directory of Open Access Journals (Sweden)

    Lauris, José Roberto Pereira

    2009-09-01

    Introduction: Studies on occupational exposure show that noise reaches a large part of the working population around the world, and NIHL (noise-induced hearing loss) is the second most frequent disease of the hearing system. Objective: To review the audiometry results of employees at the campus of the University of São Paulo, Bauru. Method: 40 audiometry results obtained between 2007 and 2008 were analyzed, from workers aged between 32 and 59 years, of both sexes and several professions: gardeners, maintenance technicians, drivers etc. The participants were divided into 2 groups: those with tonal thresholds within acceptable limits and those who presented altered auditory thresholds, that is, tonal thresholds above 25 dB HL at any frequency (Administrative Rule no. 19 of the Ministry of Labor, 1998). In addition to the conventional audiologic evaluation (250 Hz to 8,000 Hz), we also carried out high-frequency audiometry (9,000 Hz, 10,000 Hz, 11,200 Hz, 12,500 Hz, 14,000 Hz and 16,000 Hz). Results: According to the classification proposed by FIORINI (1994), 25.0% (N=10) presented audiometric configurations suggestive of NIHL. The results of high-frequency audiometry showed worse thresholds than those obtained in conventional audiometry in the 2 groups evaluated. Conclusion: High-frequency audiometry proved to be an important method for early detection of hearing alterations.

  5. [The value of impedance audiometry in the hearing loss diagnosis].

    Science.gov (United States)

    Kowalska, Sylwia; Konopka, Wiesław; Słomińska, Renata; Olszewski, Jurek

    2008-01-01

    The aim of this work was to assess the value of impedance audiometry in the differential diagnostics of hearing disorders, especially in patients suffering from tinnitus. The analysis dealt with the results of audiological tests in 198 patients (116 female and 82 male) hospitalised in 2007 due to hearing deterioration, tinnitus or sudden deafness. The audiological tests comprised threshold and suprathreshold pure tone audiometry, speech audiometry, BERA and impedance audiometry. RESULTS OF THE STUDIES: Women (58.5%) and people over 50 years old (58.6%) constituted the majority of the patients. In 166 (83.8%) patients the impedance audiometry tests did not show any deviations from the normal condition; the lesions involved both ears in 32 (16.9%) patients and one ear in 17 (8.5%) patients. An abnormal tympanogram was found in 23 people, including type As in 11, type Ad in 2, type B in 4 and type C in 6 subjects. Low values of acoustic compliance of the middle ear were noted in 20 ears, and high values in 11 ears. Low gradient values (below 0.3) were found in 3 ears, and high values in 11 ears. Middle ear pressure between -170 and -350 daPa was noticed in 20 ears, and positive values, above +50 daPa up to +75 daPa, in 3 ears. Disorders in stapedial reflex registration were observed in 38 (19.1%) patients. The assessment of the conducted subjective and objective audiological examinations allowed bilateral sensorineural hearing injuries to be recognised in 139 patients, including 49 (25.9%) of cochlear origin with OWG; in a further 70 patients the hearing loss involved higher frequencies and was rather slight. Our experience indicates that impedance audiometry constitutes an integral part of contemporary audiological diagnostics and remains an objective method facilitating quick, non-invasive evaluation of the functions of particular elements of the middle ear.

  6. A web-based audiometry database system.

    Science.gov (United States)

    Yeh, Chung-Hui; Wei, Sung-Tai; Chen, Tsung-Wen; Wang, Ching-Yuang; Tsai, Ming-Hsui; Lin, Chia-Der

    2014-07-01

    To establish a real-time, web-based, customized audiometry database system, we worked in cooperation with the departments of medical records, information technology, and otorhinolaryngology at our hospital. This system includes an audiometry data entry system, retrieval and display system, patient information incorporation system, audiometry data transmission program, and audiometry data integration. Compared with commercial audiometry systems and traditional hand-drawn audiometry data, this web-based system saves time and money and is convenient for statistics research. Copyright © 2013. Published by Elsevier B.V.

  7. Extended high-frequency audiometry and noise induced hearing loss in cement workers.

    Science.gov (United States)

    Somma, Giuseppina; Pietroiusti, Antonio; Magrini, Andrea; Coppeta, Luca; Ancona, Carla; Gardi, Stefano; Messina, Marco; Bergamaschi, Antonio

    2008-06-01

    It has been suggested that extended high-frequency audiometry (EHFA) might be more sensitive than conventional audiometry in detecting early signs of hearing impairment. However, this technique has not been adequately tested in an occupational environment. We therefore investigated the usefulness of this method in noise-exposed workers. We compared conventional frequency audiometry (0.25-8 kHz) and EHFA (9-18 kHz) in 184 noise-exposed and 98 non-noise-exposed workers. Both methods showed significantly higher threshold levels in the noise-exposed workers, particularly with EHFA compared to conventional audiometry. Stepwise regression analysis showed that in 21- to 40-year-old workers the noise effect was largely predominant in both conventional audiometry and EHFA, whereas in older subjects the noise effect was predominant up to the 6 kHz frequency, the effect of age being significantly greater at higher frequencies. These data indicate that EHFA is more sensitive than conventional audiometry in detecting noise-induced hearing loss. However, hearing loss in the EHF range seems to be an age-dependent phenomenon, with progression into the lower speech-range frequencies with increasing age. These changes seem to be accentuated in the early years by noise exposure, suggesting that EHFA could represent a useful preventive measure in young exposed workers. Copyright 2008 Wiley-Liss, Inc.

  8. Audiometry for the Retarded: With Implications for the Difficult-to-Test.

    Science.gov (United States)

    Fulton, Robert T., Ed.; And Others

    Directed to professionals with a basic knowledge of audiological principles, the text presents a review of audiological assessment procedures and their applicability to the retarded. Pure-tone, speech, and Bekesy audiometry are described. Also discussed are differential diagnosis of auditory impairments, conditioning and audiological assessment,…

  9. Objective Audiometry using Ear-EEG

    DEFF Research Database (Denmark)

    Christensen, Christian Bech; Kidmose, Preben

    … Ear-EEG may therefore be an enabling technology for objective audiometry outside the clinic, allowing regular fitting of hearing aids to be performed by the users in their everyday life environment. The objective of this study is to investigate the application of ear-EEG in objective audiometry.

  10. ABR Audiometry in Cornelia De Lange Syndrome.

    Science.gov (United States)

    Brown, Denice P.

    Eight children (ages 13 days to 5 years) with a diagnosis of Cornelia de Lange syndrome received an audiologic evaluation consisting of immittance audiometry and auditory brainstem response audiometry to air- and bone-conducted "click" stimuli, as behavioral testing was unreliable due to patient age and/or developmental delay. Developmental…

  11. Objective Audiometry using Ear-EEG

    DEFF Research Database (Denmark)

    Christensen, Christian Bech; Kidmose, Preben

    … life. Ear-EEG may therefore be an enabling technology for objective audiometry outside the clinic, allowing regular fitting of hearing aids to be performed by the users in their everyday life environment. In this study we investigate the application of ear-EEG in objective audiometry.

  12. Audiometry

    Science.gov (United States)

    … It may also be used when you have hearing problems from any cause. Common causes of hearing loss …

  13. The instructional effectiveness of a web-based audiometry simulator.

    Science.gov (United States)

    Lieberth, Ann K; Martin, Douglas R

    2005-02-01

    With distance learning becoming more of a reality than a novelty in many undergraduate and graduate training programs, web-based clinical simulations can be identified as an instructional option in distance education that has both a sound pedagogical foundation and clinical relevance. The purpose of this article is to report on the instructional effectiveness of a web-based pure-tone audiometry simulator for undergraduate and graduate students in speech-language pathology. Graduate and undergraduate majors in communication sciences and disorders practiced giving basic hearing tests on either a virtual web-based audiometer or a portable audiometer. Competencies in basic testing skills were evaluated for each group. Results of our analyses of the data indicate that both undergraduate and graduate students learned basic audiometric testing skills using the virtual audiometer. These skills generalized to the basic audiometric testing skills required of a speech-language pathologist using a portable audiometer.

  14. PC-based tele-audiometry.

    Science.gov (United States)

    Choi, Jong Min; Lee, Haet Bit; Park, Cheol Soo; Oh, Seung Ha; Park, Kwang Suk

    2007-10-01

    A personal computer (PC)-based audiometer was developed for interactive remote audiometry. This paper describes a tele-audiometric system and evaluates the performance of the device when compared with conventional face-to-face audiometry. The tele-audiometric system is fully PC-based. A sound card featuring a high-quality digital-to-analog converter is used as a pure-tone generator. The audiometric programs were developed based on Microsoft Windows in order to maximize usability. Audiologists and their subjects can use the tele-audiometry system as one would use any PC application. A calibration procedure was applied for the standardization of sound levels in the remote system. The performance of this system was evaluated by comparing PC-based audiometry with the conventional clinical audiometry system for 37 subjects. Also, performance of the PC-based system was evaluated during use at a remote site. The PC-based audiometry system estimated the audiometric threshold with an error of less than 2.3 dB SPL. Only 10.7% of the results exhibited an error greater than 5 dB SPL during use at a remote site. The PC-based tele-audiometry showed acceptable results for use at a remote site. This PC-based system can be used effectively and easily in many locations that have Internet access but no local audiologists.
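
    The core of such a system is generating a calibrated pure tone through the sound card. A minimal sketch in Python with NumPy (the sampling rate, ramp length, and especially the calibration offset are placeholder assumptions; in the system described above, output levels were standardized through a separate calibration procedure):

    ```python
    import numpy as np

    def pure_tone(freq_hz, level_db, dur_s=1.0, fs=44100, calib_offset_db=-100.0):
        """Generate a ramped sine tone; level_db + calib_offset_db maps the
        requested presentation level onto the card's full-scale output."""
        t = np.arange(int(dur_s * fs)) / fs
        amplitude = 10 ** ((level_db + calib_offset_db) / 20)  # full scale = 1.0
        tone = amplitude * np.sin(2 * np.pi * freq_hz * t)
        # 10 ms raised-cosine onset/offset ramps avoid audible clicks.
        n = int(0.01 * fs)
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
        tone[:n] *= ramp
        tone[-n:] *= ramp[::-1]
        return tone

    samples = pure_tone(1000, 40)  # nominal 40 dB tone at 1 kHz, ready for playback
    ```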

  15. Conventional Audiometry, Extended High-Frequency Audiometry, and DPOAE for Early Diagnosis of NIHL.

    Science.gov (United States)

    Mehrparvar, Amir Houshang; Mirmohammadi, Seyyed Jalil; Davari, Mohammad Hossein; Mostaghaci, Mehrdad; Mollasadeghi, Abolfazl; Bahaloo, Maryam; Hashemi, Seyyed Hesam

    2014-01-01

    Noise most frequently affects the hearing system, as it typically causes a bilateral, progressive sensorineural hearing loss at high frequencies. This study was designed to compare three different methods of evaluating noise-induced hearing loss (conventional audiometry, high-frequency audiometry, and distortion product otoacoustic emission). This was a cross-sectional study. Data were analyzed by SPSS (ver. 19) using chi-square, t-test and repeated measures analysis. The study sample comprised workers from the tile and ceramic industry. We found that conventional audiometry, extended high-frequency audiometry, low-tone distortion product otoacoustic emission and high-tone distortion product otoacoustic emission had abnormal findings in 29%, 69%, 22%, and 52% of participants, respectively. The most frequently affected frequencies were 4000 and 6000 Hz in conventional audiometry, and 14000 and 16000 Hz in extended high-frequency audiometry. Extended high-frequency audiometry was the most sensitive test for detection of hearing loss in workers exposed to hazardous noise, compared with conventional audiometry and distortion product otoacoustic emissions.

  16. Nonorganic hearing loss in children: audiometry, clinical characteristics, biographical history and recovery of hearing thresholds.

    Science.gov (United States)

    Schmidt, Claus-Michael; am Zehnhoff-Dinnesen, Antoinette; Matulat, Peter; Knief, Arne; Rosslau, Ken; Deuster, Dirk

    2013-07-01

    The term "nonorganic hearing loss" (NOHL) (pseudohypacusis, functional or psychogenic hearing loss) describes a hearing loss without a detectable corresponding pathology in the auditory system. It is characterized by a discrepancy between elevated pure tone audiometry thresholds and normal speech discrimination. The recommended audiological management of NOHL in children comprises history taking, diagnosis, and counseling. According to the literature, prognosis depends on the severity of the patient's school and/or personal problems. Routine referral to a child psychiatrist is discussed as being controversial. The clinical history of 34 children with NOHL was retrospectively evaluated. In 15 children, follow up audiometry was performed. Results of biographical history, subjective and objective audiometry, additional speech and language assessment, psychological investigations and follow up audiometry are presented and discussed. The prevalence of NOHL was 1.8% in children with suspected hearing loss. Mean age at diagnosis was 10.8 years. Girls were twice as often affected as boys. Patient history showed a high prevalence of emotional and school problems. Pre-existing organic hearing loss can be worsened by nonorganic causes. Children with a fast recovery of hearing thresholds (n=6) showed a high rate (4/6) of family, social and emotional problems. In children with continuous threshold elevation (n=9), biographical history showed no recognizable or obvious family, social or emotional problems; learning disability (4/9) was the most frequently presented characteristic. Due to advances in objective audiometry, the diagnosis of NOHL is less challenging than management and counseling. Considering the high frequency of personal and school problems, a multidisciplinary setting is helpful. On the basis of our results, drawing conclusions from hearing threshold recovery on the severity of underlying psychic problems seems inappropriate. As a consequence, a referral to a

  17. Early changes in auditory function as a result of platinum chemotherapy: use of extended high-frequency audiometry and evoked distortion product otoacoustic emissions.

    Science.gov (United States)

    Knight, Kristin R; Kraemer, Dale F; Winter, Christiane; Neuwelt, Edward A

    2007-04-01

    The objective is to describe progressive changes in hearing and cochlear function in children and adolescents treated with platinum-based chemotherapy and to begin preliminary evaluation of the feasibility of extended high-frequency audiometry and distortion product otoacoustic emissions for ototoxicity monitoring in children. Baseline and serial measurements of conventional pure-tone audiometry (0.5 to 8 kHz) and evoked distortion product otoacoustic emissions (DPOAEs) were conducted for 32 patients aged 8 months to 20 years who were treated with cisplatin and/or carboplatin chemotherapy. Seventeen children also had baseline and serial measurement of extended high-frequency (EHF) audiometry (9 to 16 kHz). Audiologic data were analyzed to determine the incidence of ototoxicity using the American Speech-Language-Hearing Association criteria, and the relationships between the different measures of ototoxicity. Of the 32 children, 20 (62.5%) acquired bilateral ototoxicity in the conventional frequency range during chemotherapy treatment, and 26 (81.3%) had bilateral decreases in DPOAE amplitudes and dynamic range. Of the 17 children with EHF audiometry results, 16 (94.1%) had bilateral ototoxicity in the EHF range. Pilot data suggest that EHF thresholds and DPOAEs show ototoxic changes before hearing loss is detected by conventional audiometry. EHF audiometry and DPOAEs have the potential to reveal earlier changes in auditory function than conventional frequency audiometry during platinum chemotherapy in children.

  18. The Relevance of the High Frequency Audiometry in Tinnitus Patients with Normal Hearing in Conventional Pure-Tone Audiometry

    OpenAIRE

    Veronika Vielsmeier; Astrid Lehner; Jürgen Strutz; Thomas Steffens; Kreuzer, Peter M.; Martin Schecklmann; Michael Landgrebe; Berthold Langguth; Tobias Kleinjung

    2015-01-01

    Objective. The majority of tinnitus patients suffer from hearing loss. But a subgroup of tinnitus patients show normal hearing thresholds in the conventional pure-tone audiometry (125 Hz–8 kHz). Here we explored whether the results of the high frequency audiometry (>8 kHz) provide relevant additional information in tinnitus patients with normal conventional audiometry by comparing those with normal and pathological high frequency audiometry with respect to their demographic and clinical chara...

  19. Conventional Audiometry, Extended High-Frequency Audiometry, and DPOAE for Early Diagnosis of NIHL

    OpenAIRE

    Mehrparvar, Amir Houshang; Mirmohammadi, Seyyed Jalil; Davari, Mohammad Hossein; Mostaghaci, Mehrdad; Mollasadeghi, Abolfazl; Bahaloo, Maryam; Hashemi, Seyyed Hesam

    2014-01-01

    Background: Noise most frequently affects hearing system, as it may typically cause a bilateral, progressive sensorineural hearing loss at high frequencies. Objectives: This study was designed to compare three different methods to evaluate noise-induced hearing loss (conventional audiometry, high-frequency audiometry, and distortion product otoacoustic emission). Material and Methods: This was a cross-sectional study. Data was analyzed by SPSS (ver. 19) using chi square, T test and repeated m...

  20. Evaluating Behavioural Observation Audiometry with Handicapped Children.

    Science.gov (United States)

    Flexer, Carol; Gans, Donald P.

    1982-01-01

    Three observers evaluated the responses to sound of 21 mildly to severely handicapped children (7 months to 10 years old) using Behavioural Observation Audiometry, an alternative to conditioning paradigms in audiometric assessment. Results showed that inter-observer agreement was high and that responsivity was not affected by stimulus presentation…

  1. The Relevance of the High Frequency Audiometry in Tinnitus Patients with Normal Hearing in Conventional Pure-Tone Audiometry.

    Science.gov (United States)

    Vielsmeier, Veronika; Lehner, Astrid; Strutz, Jürgen; Steffens, Thomas; Kreuzer, Peter M; Schecklmann, Martin; Landgrebe, Michael; Langguth, Berthold; Kleinjung, Tobias

    2015-01-01

    The majority of tinnitus patients suffer from hearing loss. But a subgroup of tinnitus patients show normal hearing thresholds in the conventional pure-tone audiometry (125 Hz-8 kHz). Here we explored whether the results of the high frequency audiometry (>8 kHz) provide relevant additional information in tinnitus patients with normal conventional audiometry by comparing those with normal and pathological high frequency audiometry with respect to their demographic and clinical characteristics. From the database of the Tinnitus Clinic at Regensburg we identified 75 patients with normal hearing thresholds in the conventional pure-tone audiometry. We contrasted these patients with normal and pathological high-frequency audiogram and compared them with respect to gender, age, tinnitus severity, pitch, laterality and duration, comorbid symptoms and triggers for tinnitus onset. Patients with pathological high frequency audiometry were significantly older and had higher scores on the tinnitus questionnaires in comparison to patients with normal high frequency audiometry. Furthermore, there was an association of high frequency audiometry with the laterality of tinnitus. In tinnitus patients with normal pure-tone audiometry the high frequency audiometry provides useful additional information. The association between tinnitus laterality and asymmetry of the high frequency audiometry suggests a potential causal role for the high frequency hearing loss in tinnitus etiopathogenesis.

  2. Self-recording audiometry in industry

    Science.gov (United States)

    Pelmear, P. L.; Hughes, Brenda J.

    1974-01-01

    Pelmear, P. L. and Hughes, Brenda J. (1974). British Journal of Industrial Medicine, 31, 304-309. Self-recording audiometry in industry. A study of initial and repeat audiograms of 118 drop forge employees using fixed-frequency self-recording audiometry showed that the mean of the differences at the test frequencies 0.5, 1, 2, 3, 4, and 6 kHz ranged from -0.47 dB to +0.61 dB. The largest standard deviation was 6 dB at 6 kHz and the lowest 3 dB at 2 kHz. The results also confirmed that temporary threshold shift effects may be minimized if audiograms are obtained at the beginning of a shift or within two hours, provided the subject is protected with ear muff defenders up to the time of the test. The practical advantages to industry of using self-recording audiometry for audiometric screening, and the reliability of single audiograms for threshold determination, are discussed. PMID:4425632

  3. Understanding Bilingualism and Its Impact on Speech Audiometry.

    Science.gov (United States)

    von Hapsburg, Deborah; Pena, Elizabeth D.

    2002-01-01

    This tutorial reviews auditory research conducted with monolingual and bilingual speakers of Spanish and English. Based on a functional view of bilingualism and on auditory research findings showing that the bilingual experience may affect the outcome of auditory research, it discusses methods for improving descriptions of linguistically diverse…

  4. Electrophysiological Techniques for Sea Lion Population-Level Audiometry

    Science.gov (United States)

    2009-09-30

    James J. Finneran, Space and Naval Warfare Systems Center Pacific, Biosciences Division, Code 71510, 53560 Hull Street, San Diego, CA.

  5. Collection and analysis of offshore workforce audiometry data

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-05-01

    This report summarises the results of a study analysing audiometry data to determine whether noise-induced hearing loss is occurring in offshore operations. The background to the study is traced, and details are given of the initial contacts with medical and operational companies holding audiometry data, the confidentiality of the data sources, the questionnaire for the holders of personnel audiometry data, and initial data checking. A descriptive analysis of the study population is presented, and the analysis of audiometry data, hearing threshold levels, and the classification of the data using the Health and Safety Executive (HSE) categorisation scheme are discussed. The questionnaire for the data holders, the audiometry data collection proforma, and guidance for completion of data collection proformas are included in appendices.

  6. The ratio of the subjective audiometry in patients with acoustic trauma and “noisy” production workers

    Directory of Open Access Journals (Sweden)

    Shydlovska T.A.

    2014-11-01

    Introduction: The problem of diagnosis and treatment of sensorineural hearing loss (SHL), including forms developed under the influence of noise, occupies one of the leading places in otolaryngology. However, there are not many studies on acoustic trauma, although this problem has recently become more and more important. Objective: To compare subjective audiometry in patients with sensorineural hearing loss after acute acoustic trauma and after chronic noise exposure. Materials and methods: The results of the examination of 84 patients with acoustic trauma are given, with 15 healthy subjects as a control group and 15 workers employed in "noisy" occupations as a comparison group. Subjective audiometry was fully carried out with a clinical audiometer AC-40 «Interacoustics» (Denmark). Hearing indices were investigated in the conventional (0.125-8 kHz) and extended (9-16 kHz) frequency bands. Results: Subjective audiometry showed a reduction in sound perception in all patients. According to threshold tone audiometry, hearing thresholds in patients with acoustic trauma were significantly (P<0.05) increased at the 4, 6 and 8 kHz tones of the conventional (0.125-8 kHz) frequency band and at the 14-16 kHz tones of the extended (9-16 kHz) band, in comparison both with the control group and with the workers employed in noisy occupations. All the examined patients showed deterioration in speech audiometry and above-threshold audiometry. Conclusions: According to subjective audiometry, disorders of auditory function similar in type to those in patients with long-term noise exposure occur in patients with acoustic trauma, but they are more pronounced and develop much faster. The most informative features showing the origin and progression of hearing loss in patients with acoustic trauma are: increased hearing thresholds at the 14 and 16 kHz tones of the extended (9-16 kHz) frequency band and at the 4, 6 and 8 kHz tones of the conventional (0.125-8 kHz) frequency band, plus the reduction of…

  7. Automated audiometry using apple iOS-based application technology.

    Science.gov (United States)

    Foulad, Allen; Bui, Peggy; Djalilian, Hamid

    2013-11-01

    The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. Prospective diagnostic study. Setting: Academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry over conventional audiometry. Apple iOS-based devices provide a platform for automated air-conduction audiometry without requiring extra equipment, and yield hearing test results that approach those of conventional audiometry.
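
    Automated pure-tone audiometry is typically built around a bracketing rule such as the modified Hughson-Westlake procedure ("down 10, up 5"). The abstract does not state which rule the app used, so the following Python sketch is a generic illustration with a simplified acceptance criterion and a simulated listener:

    ```python
    import random

    def simulated_listener(true_threshold_db):
        # Probability of hearing rises steeply around the true threshold.
        return lambda level: random.random() < 1 / (1 + 10 ** ((true_threshold_db - level) / 5))

    def find_threshold(hears, start=40, lo=-10, hi=100, max_trials=50):
        level, heard_at = start, {}
        for _ in range(max_trials):
            if hears(level):
                heard_at[level] = heard_at.get(level, 0) + 1
                if heard_at[level] >= 2:       # simplified "2 responses at a level" rule
                    return level
                level = max(lo, level - 10)    # heard: step down 10 dB
            else:
                level = min(hi, level + 5)     # not heard: step up 5 dB
        return level                           # give up after max_trials

    print(find_threshold(simulated_listener(25)))  # usually returns 25, +/- one step
    ```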

  8. Auditory assessment of children with severe hearing loss using behavioural observation audiometry and brainstem evoked response audiometry

    OpenAIRE

    Rakhi Kumari; Priyanko Chakraborty; Jain, R K; Dhananjay Kumar

    2016-01-01

    Background: Early detection of hearing loss has been a long-standing priority in the field of audiology. Currently available auditory testing methods include both behavioural and non-behavioural or objective tests of hearing. This study was planned with the objective of assessing hearing loss in children using behavioural observation audiometry and brainstem evoked response audiometry. Methods: A total of 105 cases suffering from severe to profound hearing loss were registered. After proper h...

  9. [The systematic selection of speech audiometric procedures].

    Science.gov (United States)

    Steffens, T

    2017-03-01

    The impact of hearing loss on the ability to participate in verbal communication can be directly quantified through the use of speech audiometry. Advances in technology and the associated reduction in background noise interference for hearing aids have allowed the reproduction of very complex acoustic environments, analogous to those in which conversations occur in daily life. These capabilities have led to the creation of numerous advanced speech audiometry measures, test procedures and environments, far beyond the presentation of isolated words in an otherwise noise-free testing booth. The aim of this study was to develop a set of systematic criteria for the appropriate selection of speech audiometric material, which are presented in this article in relation to the most widely used test procedures. Before an appropriate speech test can be selected from the numerous procedures available, the precise aims of the evaluation must first be defined. Specific test characteristics, such as validity, objectivity, reliability and sensitivity, are important for selecting the correct test for the specific goals. A concrete understanding of the goals of the evaluation, as well as of specific test criteria, plays a crucial role in the selection of speech audiometry testing procedures.

  10. Asynchronous interpretation of manual and automated audiometry: Agreement and reliability.

    Science.gov (United States)

    Brennan-Jones, Christopher G; Eikelboom, Robert H; Bennett, Rebecca J; Tao, Karina Fm; Swanepoel, De Wet

    2018-01-01

    Introduction Remote interpretation of automated audiometry offers the potential to enable asynchronous tele-audiology assessment and diagnosis in areas where synchronous tele-audiometry may not be possible or practical. The aim of this study was to compare remote interpretation of manual and automated audiometry. Methods Five audiologists each interpreted manual and automated audiograms obtained from 42 patients. The main outcome variable was the audiologist's recommendation for patient management (which included treatment recommendations, referral or discharge) between the manual and automated audiometry test. Cohen's Kappa and Krippendorff's Alpha were used to calculate and quantify the intra- and inter-observer agreement, respectively, and McNemar's test was used to assess the audiologist-rated accuracy of audiograms. Audiograms were randomised and audiologists were blinded as to whether they were interpreting a manual or automated audiogram. Results Intra-observer agreement was substantial for management outcomes when comparing interpretations for manual and automated audiograms. Inter-observer agreement was moderate between clinicians for determining management decisions when interpreting both manual and automated audiograms. Audiologists were 2.8 times more likely to question the accuracy of an automated audiogram compared to a manual audiogram. Discussion There is a lack of agreement between audiologists when interpreting audiograms, whether recorded with automated or manual audiometry. The main variability in remote audiogram interpretation is likely to be individual clinician variation, rather than automation.
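
    For readers unfamiliar with the agreement statistic used above, the following is a minimal sketch of Cohen's kappa for paired management decisions (e.g., a clinician's recommendation from the manual versus the automated audiogram of each patient); the categories and counts are hypothetical, not taken from the study.

    ```python
    from collections import Counter

    def cohens_kappa(ratings_a, ratings_b):
        """Cohen's kappa for two paired ratings of the same items."""
        n = len(ratings_a)
        observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        # Chance agreement: sum over categories of the product of marginal proportions.
        pa, pb = Counter(ratings_a), Counter(ratings_b)
        expected = sum((pa[c] / n) * (pb[c] / n) for c in set(pa) | set(pb))
        return (observed - expected) / (1 - expected)

    # Hypothetical management decisions for 10 patients, once from the manual
    # and once from the automated audiogram of the same ear.
    manual    = ["treat", "refer", "discharge", "treat", "refer",
                 "refer", "discharge", "treat", "treat", "refer"]
    automated = ["treat", "refer", "discharge", "refer", "refer",
                 "refer", "discharge", "treat", "treat", "discharge"]
    print(f"kappa = {cohens_kappa(manual, automated):.2f}")  # 0.70, 'substantial'
    ```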

  11. Evoked response audiometry used in testing auditory organs of miners

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, T.; Klepacki, J.; Wagstyl, R.

    1980-01-01

    The evoked response audiometry method of testing hearing loss is presented, and the results of comparative studies using subjective tonal audiometry and evoked response audiometry in tests of 56 healthy men with good hearing are discussed. The men were divided into three groups according to age and place of work: workplace without increased noise; workplace with noise and vibrations (at drilling machines); and workplace with noise and shocks (work at excavators in surface coal mines). The ERA-MKII audiometer produced by the Medelec-Amplaid firm was used. Audiometric threshold curves for the three groups of tested men are given. At frequencies of 500, 1000 and 4000 Hz the mean objective auditory threshold was shifted by 4-9.5 dB in comparison to the subjective auditory threshold. (21 refs.) (In Polish)

  12. Hearing assessment-reliability, accuracy, and efficiency of automated audiometry.

    Science.gov (United States)

    Swanepoel, De Wet; Mngemane, Shadrack; Molemong, Silindile; Mkwanazi, Hilda; Tutshini, Sizwe

    2010-06-01

    This study investigated the reliability, accuracy, and time efficiency of automated hearing assessment using a computer-based telemedicine-compliant audiometer. Thirty normal-hearing subjects and eight hearing-impaired subjects were tested with pure-tone air conduction audiometry (125-8,000 Hz) in a manual and automated configuration in a counterbalanced manner. For the normal-hearing group each test was repeated to determine test-retest reliability and recording time, and preference for threshold-seeking method (manual vs. automated) was documented. Test-retest thresholds were not significantly different for manual and automated testing. Manual audiometry test-retest correspondence was 5 dB or less in 88% of thresholds, compared to 91% for automated audiometry. Thresholds for automated audiometry did not differ significantly from manual audiometry, with 87% of thresholds in the normal-hearing group and 97% in the hearing-impaired group corresponding within 5 dB or less of each other. The largest overall average absolute difference across frequencies was 3.6 ± 3.9 dB for the normal-hearing group and 3.3 ± 2.4 dB for the hearing-impaired group. Both techniques were equally time efficient in the normal-hearing population, and 63% of subjects preferred the automated threshold-seeking method. Automated audiometry provides reliable, accurate, and time-efficient hearing assessments for normal-hearing and hearing-impaired adults. Combined with an asynchronous telehealth model it holds significant potential for reaching underserved areas where hearing health professionals are unavailable.
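
    As an illustration of the test-retest statistics reported in studies like this one, here is a minimal sketch computing the share of repeated thresholds agreeing within ±5 dB and the average absolute difference; the threshold values are hypothetical.

    ```python
    import statistics

    def threshold_correspondence(test, retest, tolerance_db=5):
        """Share of paired thresholds within ±tolerance_db, plus mean/SD of |diff|."""
        diffs = [abs(a - b) for a, b in zip(test, retest)]
        within = sum(d <= tolerance_db for d in diffs) / len(diffs)
        return within, statistics.mean(diffs), statistics.stdev(diffs)

    # Hypothetical thresholds (dB HL) for one ear at 125-8,000 Hz, tested twice.
    test   = [10, 15, 10, 20, 25, 30, 40, 45]
    retest = [10, 10, 15, 20, 35, 30, 50, 45]
    within, mean_d, sd_d = threshold_correspondence(test, retest)
    print(f"{within:.0%} within 5 dB; mean |diff| = {mean_d:.1f} ± {sd_d:.1f} dB")
    ```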

  13. Pure tone audiometry: comparison of general practice and hospital services

    Science.gov (United States)

    Smith, Michael C.F.; Cable, Hugh R.; Wilmot, John F.

    1988-01-01

    Pure tone audiometry was obtained for both ears of 32 children by a general practitioner using a simple audiometer in his surgery, and by audiometricians in a hospital department on the same day. Comparing the worst hearing threshold at any of the three tested frequencies, the general practitioner did not find any ears to hear more than 10 dB better than the hospital (no false negatives). However, there were six false positives (9%) where the general practitioner identified an apparent hearing loss of greater than 15 dB. It is concluded that pure tone audiometry could be carried out accurately in the practice. PMID:3267745

  14. Automated Smartphone Threshold Audiometry: Validity and Time Efficiency.

    Science.gov (United States)

    van Tonder, Jessica; Swanepoel, De Wet; Mahomed-Asmail, Faheema; Myburgh, Hermanus; Eikelboom, Robert H

    2017-03-01

    Smartphone-based threshold audiometry with automated testing has the potential to provide affordable access to audiometry in underserved contexts. To validate the threshold version (hearTest) of the validated hearScreen™ smartphone-based application using inexpensive smartphones (Android operating system) and calibrated supra-aural headphones, a repeated-measures within-participant study design was employed to compare air-conduction thresholds (0.5-8 kHz) obtained through automated smartphone audiometry to thresholds obtained through conventional audiometry. A total of 95 participants were included in the study. Of these, 30 were adults with known bilateral hearing losses of varying degrees (mean age = 59 yr, standard deviation [SD] = 21.8; 56.7% female), and 65 were adolescents (mean age = 16.5 yr, SD = 1.2; 70.8% female), of whom 61 had normal hearing and the remaining 4 had mild hearing losses. Threshold comparisons were made between the two test procedures. The Wilcoxon signed-rank test was used to compare threshold correspondence between manual and smartphone thresholds, and the paired-samples t test was used to compare test time. Within the adult sample, 94.4% of thresholds obtained through smartphone and conventional audiometry corresponded within 10 dB or less. There was no significant difference between smartphone (6.75-min average, SD = 1.5) and conventional audiometry test duration (6.65-min average, SD = 2.5). Within the adolescent sample, 84.7% of thresholds obtained at 0.5, 2, and 4 kHz with hearTest and conventional audiometry corresponded within ≤5 dB. At 1 kHz, 79.3% of the thresholds differed by ≤10 dB. In this sample there was a significant difference between smartphone and conventional audiometry test duration (3.23 min, SD = 0.6). The hearTest application with calibrated supra-aural headphones provides a cost-effective option to determine valid air-conduction hearing thresholds.

  15. Early hearing loss detection in rheumatoid arthritis and primary Sjögren syndrome using extended high frequency audiometry.

    Science.gov (United States)

    Galarza-Delgado, Dionicio Angel; Villegas Gonzalez, Mario Jesus; Riega Torres, Janett; Soto-Galindo, German A; Mendoza Flores, Lidia; Treviño González, José Luis

    2018-02-01

    The aim of this study is to evaluate the hearing behavior of rheumatoid arthritis (RA) and primary Sjögren syndrome (PSS) patients and to compare them with a healthy control group and with each other. A comparative cross-sectional study was performed with a group of 117 female RA patients, a group of 60 female PSS patients, and a group of 251 female healthy controls. Every subject underwent a series of studies including high-frequency audiometry, speech audiometry, and tympanometry; the high-frequency audiometry covered 250 to 16,000 Hz. The 117 patients with RA and the 60 with PSS were diagnosed according to the American College of Rheumatology (ACR) 2010 criteria and the validated classification of the American-European Consensus Group, respectively. Hearing loss was present in 36.8% of the RA group at 500-3,000 Hz, 68.4% at 4,000-8,000 Hz, and 94.9% at 10,000-16,000 Hz. Hearing loss was present in 60% of the PSS group at 500-3,000 Hz, 70% at 4,000-8,000 Hz, and 100% at 10,000-16,000 Hz. The hearing impairment prevalence of both groups was significantly different (p < 0.05) when compared with the healthy control group. We also compared the hearing thresholds of RA and PSS patients, finding a significantly increased hearing threshold at 500-3,000 Hz in the PSS group. This study consolidates the association of RA and PSS with hearing impairment. Deeper hearing loss was found in PSS than in RA patients, with a greater impact on hearing and speech recognition.

  16. Visual reinforcement audiometry: an Adobe Flash based approach.

    Science.gov (United States)

    Atherton, Steve

    2010-09-01

    Visual Reinforcement Audiometry (VRA) is a key behavioural test for young children and is central to the diagnosis of hearing-impaired infants (1). Habituation to the visual reinforcement can give misleading results. Medical Illustration at ABM University Health Board has designed a collection of Flash animations to overcome this.

  17. [The effect of noise trauma on speech discrimination in silence and under influence of party noise (author's transl)].

    Science.gov (United States)

    Weibel, H P; Kiessling, J

    1978-11-22

    Speech audiometry investigations were carried out in silence and under cocktail-party noise in 44 soldiers. The test subjects were grouped according to age and degree of noise lesion. Statistical evaluation of the discrimination losses measured in silence and under party noise indicated that noise lesions induce considerable discrimination losses even in young subjects, particularly under cocktail-party noise conditions. Discrimination decreases significantly as the degree of noise trauma increases. In order to assess the real effect of the hearing loss caused by noise trauma upon speech discrimination, speech audiometry tests should be performed under noise conditions.

  18. Validity of automated threshold audiometry: a systematic review and meta-analysis.

    Science.gov (United States)

    Mahomed, Faheema; Swanepoel, De Wet; Eikelboom, Robert H; Soer, Maggi

    2013-01-01

    A systematic literature review and meta-analysis on the validity (test-retest reliability and accuracy) of automated threshold audiometry compared with the gold standard of manual threshold audiometry was conducted. A systematic literature review was completed in peer-reviewed databases on automated compared with manual threshold audiometry. Subsequently a meta-analysis was conducted on the validity of automated audiometry. A multifaceted approach, covering several databases and using different search strategies was used to ensure comprehensive coverage and to cross-check search findings. Databases included: MEDLINE, Scopus, and PubMed; a secondary search strategy was the review of references from identified reports. Reports including within-subject comparisons of manual and automated threshold audiometry were selected according to inclusion/exclusion criteria before data were extracted. For the meta-analysis weighted mean differences (and standard deviations) on test-retest reliability for automated compared with manual audiometry were determined to assess the validity of automated threshold audiometry. In total, 29 reports on automated audiometry (method of limits and the method of adjustment techniques) met the inclusion criteria and were included in this review. Most reports included data on adult populations using air conduction testing with limited data on children, bone conduction testing and the effects of hearing status on automated audiometry. Meta-analysis test-retest reliability for automated audiometry was within typical test-retest variability for manual audiometry. Accuracy results on the meta-analysis indicated overall average differences between manual and automated air conduction audiometry (0.4 dB, 6.1 SD) to be comparable with test-retest differences for manual (1.3 dB, 6.1 SD) and automated (0.3 dB, 6.9 SD) audiometry. No significant differences (p > 0.01; summarized data analysis of variance) were seen in any of the comparisons between test
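
    By way of illustration of the pooling step behind figures such as the 0.4 dB overall difference quoted above, here is a minimal fixed-effect, inverse-variance sketch; it is a generic meta-analytic computation with invented study-level numbers, not the authors' exact procedure.

    ```python
    import math

    def pooled_mean_difference(means, sds, ns):
        """Fixed-effect inverse-variance pooling of per-study mean differences
        (e.g., automated minus manual thresholds, in dB)."""
        weights = [n / sd ** 2 for sd, n in zip(sds, ns)]  # 1 / SE^2 per study
        pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
        se = math.sqrt(1 / sum(weights))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Hypothetical per-study mean differences (dB), SDs, and sample sizes.
    means, sds, ns = [0.5, -0.2, 1.1], [6.0, 5.5, 7.2], [40, 25, 60]
    estimate, ci = pooled_mean_difference(means, sds, ns)
    print(f"pooled difference = {estimate:.2f} dB, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
    ```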

  19. Speech Problems

    Science.gov (United States)

    ... form speech sounds into words. What Causes Speech Problems? Normal speech might seem effortless, but it's actually ...

  20. Realizace počítačové audiometrie

    OpenAIRE

    Solnický, Jan

    2010-01-01

    This thesis deals with the implementation of computer-based audiometry for subjective hearing examination. It describes the implementation of an audiometer in the C++ Borland Builder environment. The designed audiometer consists of a standard computer running the Windows operating system, a sound card, and headphones. The thesis also includes an analysis of hearing disorders and their examination, which was used in the implementation of the audiometer.

  1. Automated audiometry using Apple iOS-based application technology

    OpenAIRE

    Foulad, A; Bui, P; Djalilian, H

    2013-01-01

    Objective. The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. Study Design. Prospective diagnostic study. Setting. Academic medical center. Subjects and Methods. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed ...

  2. Occupational exposure to anaesthetic gases and high-frequency audiometry.

    Science.gov (United States)

    Giorgianni, Concetto; Gangemi, Silvia; Tanzariello, Maria Giuseppina; Barresi, Gaetano; Miceli, Ludovica; D'Arrigo, Graziella; Spatari, Giovanna

    2015-09-01

    Occupational exposure to anaesthetic gases has been suggested to induce auditory damage. The aim of this study is to investigate high-frequency audiometric responses in subjects exposed to anaesthetic gases, in order to highlight possible effects on the auditory system. The study was performed on a sample of 30 medical specialists of the Messina University Anaesthesia and Intensive Care unit. We used standard tonal audiometry as well as high-frequency audiometry, and compared the responses with those obtained in a similar control group not exposed to anaesthetic gases. Results were compared statistically. Results show a strong correlation (p = 0.000) between left- and right-ear responses on all the audiometric tests. On standard audiometry, the exposed and control groups differed only at the higher frequencies (2,000 Hz p = 0.009 and 4,000 Hz p = 0.04); on high-frequency audiometry, the two groups differed in a statistically significant way (10,000 Hz p = 0.025, 12,000 Hz p = 0.008, 14,000 Hz p = 0.026, 16,000 Hz p = 0.08). The highest threshold values were those of the exposed subjects, both in standard (2,000 Hz p = 0.01, 4,000 Hz p = 0.02) and in high-frequency audiometry (10,000 Hz p = 0.011, 12,000 Hz p = 0.004, 14,000 Hz p = 0.012, 16,000 Hz p = 0.004). These results, even if preliminary and based on a small sample, show an involvement of the anatomical structures responsible for the perception of high frequencies in subjects exposed to anaesthetic gases. © The Author(s) 2012.

  3. Extended high frequency audiometry in users of personal listening devices.

    Science.gov (United States)

    Kumar, Poornima; Upadhyay, Prabhakar; Kumar, Ashok; Kumar, Sunil; Singh, Gautam Bir

    Noise exposure leads to high-frequency hearing loss. Use of personal listening devices (PLDs) may lead to a decline in high-frequency hearing sensitivity because of prolonged exposure to these devices at high volume. This study explores the changes in hearing thresholds measured by extended high frequency audiometry in users of personal listening devices. A descriptive, hospital-based observational study was performed with a total of 100 subjects aged 15-30 years. Subjects were divided into two groups: 30 subjects with no history of PLD use (Group A) and 70 subjects with a history of PLD use (Group B). Conventional pure tone audiometry with extended high frequency audiometry was performed in all subjects. Statistically significant differences in the hearing thresholds of PLD users were seen at high frequencies (3 kHz, 4 kHz and 6 kHz) and extended high frequencies (9 kHz, 10 kHz, 11 kHz, 13 kHz, 14 kHz, 15 kHz and 16 kHz), especially with more than 5 years of usage at high volume. Thus, it can be reasonably concluded that extended high frequencies can be used for early detection of NIHL in PLD users. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. The Frequency of Hearing Loss and Hearing Aid Prescription in the Clients of the Avesina Education and Health Center, Audiometry Clinic, 1377

    Directory of Open Access Journals (Sweden)

    Abbas Bastani

    2003-08-01

    Full Text Available Objective: To determine the frequency of hearing disorders and hearing aid use among the clients referred to the Avesina education and health center audiometry clinic in 1377. Method and Material: This is a descriptive survey conducted on 2,053 clients (1,234 males and 819 females) who were referred for audiometry after examination by a physician. Case history, otoscopy, PTA, speech and immittance audiometry were conducted for all clients. The findings were expressed in frequency tables and diagrams; the relationship of age and sex to all types of hearing loss, as well as the number of hearing-impaired clients needing a hearing aid, were assessed. Findings: 56% of this population were hearing-impaired and 44% had normal hearing; 60% were males and 40% females. Of the hearing-impaired, 44% had SNHL, 35.6% CHL and 8.2% mixed hearing loss. A hearing aid was prescribed for the 204 clients who needed one (83 females and 121 males), of whom only 20 females and 32 males wore it. Conclusion: In this sample, SNHL had the highest frequency. According to this survey, the older the client, the more readily a hearing aid is accepted (85% of wearers were over 49); the prevalence of hearing impairment was higher in males than in females (60% versus 40%). Only 25% of the hearing-impaired wore hearing aids.

  5. Smartphone threshold audiometry in underserved primary health-care contexts.

    Science.gov (United States)

    Sandström, Josefin; Swanepoel, De Wet; Carel Myburgh, Hermanus; Laurent, Claude

    2016-01-01

    To validate a calibrated smartphone-based hearing test in a sound booth environment and in primary health-care clinics, a repeated-measures within-subject study design was employed whereby air-conduction hearing thresholds determined by smartphone-based audiometry were compared to conventional audiometry in a sound booth and a primary health-care clinic environment. A total of 94 subjects (mean age 41 years ± 17.6 SD, range 18-88; 64% female) were assessed, of whom 64 were tested in the sound booth and 30 within primary health-care clinics without a booth. In the sound booth, 63.4% of conventional and smartphone thresholds indicated normal hearing (≤15 dB HL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 80.6% of cases, with an average threshold difference of -1.6 dB ± 9.9 SD. In primary health-care clinics, 13.7% of conventional and smartphone thresholds indicated normal hearing (≤15 dB HL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 92.9% of cases, with an average threshold difference of -1.0 dB ± 7.1 SD. Accurate air-conduction audiometry can be conducted in a sound booth, and without a sound booth in an underserved community health-care clinic, using a smartphone.

  6. Validation of a Self-Administered Audiometry Application: An Equivalence Study.

    Science.gov (United States)

    Whitton, Jonathon P; Hancock, Kenneth E; Shannon, Jeffrey M; Polley, Daniel B

    2016-10-01

    To compare hearing measurements made at home using self-administered audiometric software against audiological tests performed on the same subjects in a clinical setting. Prospective, crossover equivalence study. In experiment 1, adults with varying degrees of hearing loss (N = 19) performed air-conduction audiometry, frequency discrimination, and speech recognition in noise testing twice at home with an automated tablet application and twice in sound-treated clinical booths with an audiologist. The accuracy and reliability of computer-guided home hearing tests were compared to audiologist-administered tests. In experiment 2, the reliability and accuracy of pure-tone audiometric results were examined in a separate cohort across a variety of clinical settings (N = 21). Remote, automated audiograms were statistically equivalent to manual, clinic-based testing from 500 to 8,000 Hz (P ≤ .02); however, 250 Hz thresholds were elevated when collected at home. Remote and sound-treated booth testing of frequency discrimination and speech recognition thresholds were equivalent (P ≤ 5 × 10⁻⁵). In the second experiment, remote testing was equivalent to manual sound-booth testing from 500 to 8,000 Hz (P ≤ .02) for a different cohort who received clinic-based testing in a variety of settings. These data provide a proof of concept that several self-administered, automated hearing measurements are statistically equivalent to manual measurements made by an audiologist in the clinic. The demonstration of statistical equivalency for these basic behavioral hearing tests points toward the eventual feasibility of monitoring progressive or fluctuant hearing disorders outside of the clinic to increase the efficiency of clinical information collection. Level of evidence: 2b. Laryngoscope, 126:2382-2388, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Role of high-frequency audiometry in the early detection of ototoxicity. II. Clinical Aspects

    NARCIS (Netherlands)

    Dreschler, W. A.; van der Hulst, R. J.; Tange, R. A.; Urbanus, N. A.

    1989-01-01

    As a supplement to a previous paper [Dreschler et al.: Audiology 1985; 24:387-395] high-frequency (HF) audiometry was applied to compare the ototoxic effects of two different drug administration protocols for cis-platinum (CDDP). In both subgroups, HF audiometry considerably enhanced the early

  8. Brain stem evoked response audiometry of former drug users.

    Science.gov (United States)

    Weich, Tainara Milbradt; Tochetto, Tania Maria; Seligman, Lilian

    2012-10-01

    Illicit drugs are known for their deleterious effects upon the central nervous system, and more specifically for how they adversely affect hearing. This study aims to analyze and compare the hearing complaints and the brainstem evoked response audiometry (BERA) results of attendees of a former drug user support group. This is a cross-sectional, non-experimental, descriptive, quantitative study. The sample consisted of 17 subjects divided by their preferred drug of use: ten individuals were placed in the marijuana group (G1) and seven in the crack/cocaine group (G2). The subjects were further divided based on how long they had been using drugs: 1 to 5 years, 6 to 10 years, and over 15 years. They were interviewed and assessed by pure tone audiometry, acoustic impedance tests, and BERA. No statistically significant differences in absolute latencies or interpeak intervals were found between G1 and G2 or across durations of drug use. However, only five of the 17 individuals had BERA results adequate for their ages. Marijuana and crack/cocaine may cause diffuse disorders in the brainstem and compromise the transmission of auditory stimuli, regardless of how long these substances are used.

  9. [On the reliability of brainstem electric response audiometry (BERA)].

    Science.gov (United States)

    Renne, C; Olthoff, A

    2012-09-01

    Brainstem electric response audiometry (BERA) has been in clinical use for a number of years. The aim of our study was to evaluate the long-term reliability of BERA-determined frequency-specific thresholds in hearing-impaired children. In a group of 97 hearing-impaired children, we compared Notched-Noise (NN) BERA and Click-BERA thresholds obtained shortly after birth with behavioral audiometry thresholds determined after a mean of 3.2 years. We found a significant correlation between both BERA methods and the behavioral tests; however, the correlation coefficients for NN-BERA were higher than for Click-BERA thresholds. Our results provide evidence for a high reliability of NN-BERA for the characterization of early-onset hearing disabilities in children. Our data suggest that pathologic findings in Click-BERA should always be followed by a frequency-specific analysis with NN-BERA. © Georg Thieme Verlag KG Stuttgart · New York.

  10. A user-operated audiometry method based on the maximum likelihood principle and the two-alternative forced-choice paradigm.

    Science.gov (United States)

    Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben; Christensen-Dalsgaard, Jakob; Andersen, Ture; Poulsen, Torben; Bælum, Jesper

    2014-06-01

    To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm with high test-retest reliability without the need of an external operator and with minimal influence of subjects' fluctuating response criteria. User-operated audiometry was developed as an alternative to traditional audiometry for research purposes among musicians. Test-retest reliability of the user-operated audiometry system was evaluated and the user-operated audiometry system was compared with traditional audiometry. Test-retest reliability of user-operated 2AFC audiometry was tested with 38 naïve listeners. User-operated 2AFC audiometry was compared to traditional audiometry in 41 subjects. The repeatability of user-operated 2AFC audiometry was comparable to traditional audiometry with standard deviation of differences from 3.9 dB to 5.2 dB in the frequency range of 250-8000 Hz. User-operated 2AFC audiometry gave thresholds 1-2 dB lower at most frequencies compared to traditional audiometry. User-operated 2AFC audiometry does not require specific operating skills and the repeatability is acceptable and similar to traditional audiometry. User operated 2AFC audiometry is a reliable alternative to traditional audiometry.
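
    To make the procedure concrete, here is a minimal sketch of maximum-likelihood threshold estimation driven by 2AFC trials, in the spirit of the method described above; the psychometric-function shape, slope, lapse rate, and simulated listener are assumptions for illustration, not the authors' implementation.

    ```python
    import math, random

    LEVELS = list(range(-10, 81, 2))           # candidate thresholds, dB HL
    GUESS, LAPSE, SLOPE = 0.5, 0.02, 0.1       # 2AFC guess rate, lapse, slope (assumed)

    def p_correct(level, threshold):
        """Probability of a correct 2AFC response for a tone at `level` dB HL."""
        p_detect = 1 / (1 + math.exp(-SLOPE * (level - threshold)))
        return GUESS + (1 - GUESS - LAPSE) * p_detect

    def ml_threshold(trials):
        """Candidate threshold maximizing the likelihood of (level, correct) data."""
        def loglik(th):
            return sum(math.log(p_correct(lv, th) if ok else 1 - p_correct(lv, th))
                       for lv, ok in trials)
        return max(LEVELS, key=loglik)

    # Simulate a listener with a true threshold of 35 dB HL; each new tone is
    # presented at the current maximum-likelihood estimate (a simple MML strategy).
    random.seed(1)
    true_threshold, trials, level = 35, [], 40
    for _ in range(40):
        correct = random.random() < p_correct(level, true_threshold)
        trials.append((level, correct))
        level = ml_threshold(trials)
    print(f"ML threshold estimate after {len(trials)} trials: {level} dB HL")
    ```

    Because each trial is scored against a 50% guessing floor, the listener's response criterion has little influence on the estimate, which is the main attraction of the 2AFC design noted in the abstract.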

  11. Sensorineural hearing loss among cerebellopontine-angle tumor patients examined with pure tone audiometry and brainstem-evoked response audiometry

    Science.gov (United States)

    Rinindra, A. M.; Zizlavsky, S.; Bashiruddin, J.; Aman, R. A.; Wulani, V.; Bardosono, S.

    2017-08-01

    Tumors in the cerebellopontine angle (CPA) account for approximately 5-10% of all intracranial tumors, with unilateral hearing loss and tinnitus as the most frequent symptoms. This study aimed to collect data on sensorineural hearing loss in CPA-tumor patients in Dr. Cipto Mangunkusumo Hospital (CMH) using pure tone audiometry and brainstem-evoked response audiometry (BERA), and to obtain data on CPA-tumor imaging through magnetic resonance imaging (MRI). This was a descriptive, analytic, cross-sectional study. Subjects were gathered using a total sampling method from secondary data between July 2012 and November 2016; of 104 patients, 30 matched the inclusion criteria. The CPA-tumor patients in the ENT CMH outpatient clinic were mostly female, middle-aged (41-60 years); the most common clinical presentation was tinnitus, and 10 subjects had severe, asymmetric sensorineural hearing loss. Of the 30 subjects, 29 showed ipsilaterally impaired BERA results and 17 showed contralaterally impaired BERA results; 24 subjects had large-sized tumors, and 19 had intracanal tumors that had spread extracanally.

  12. Extended High Frequency Audiometry in Polycystic Ovary Syndrome

    Directory of Open Access Journals (Sweden)

    Cuneyt Kucur

    2013-01-01

    and BMI of PCOS and control groups were comparable. Each subject was tested with low (250-2,000 Hz), high (4,000-8,000 Hz), and extended high frequency (8,000-20,000 Hz) audiometry. Hormonal and biochemical values including LH, LH/FSH, testosterone, fasting glucose, fasting insulin, HOMA-I, and CRP were calculated. Results. PCOS patients showed high levels of LH, LH/FSH, testosterone, fasting insulin, glucose, HOMA-I, and CRP. The hearing thresholds of the groups were similar at frequencies of 250, 500, 1000, 2000, and 4000 Hz; a statistically significant difference was observed at 8,000-14,000 Hz in the PCOS group compared to the control group. Conclusion. PCOS patients have hearing impairment, especially at extended high frequencies. Further studies are needed to help elucidate the mechanism behind hearing impairment in association with PCOS.

  13. Hearing symptoms and audiometry in professional divers and offshore workers.

    Science.gov (United States)

    Ross, John A S; Macdiarmid, Jennifer I; Dick, Finlay D; Watt, Stephen J

    2010-01-01

    The aims are to compare hearing loss between professional divers and offshore workers and to study whether hearing loss symptoms reflected physical disorder. A secondary objective was to study total threshold shift assessment as a method of detecting noise-induced hearing loss (NIHL). Participants (151 divers and 120 offshore workers) completed a questionnaire for symptoms and screening audiometry. Audiograms were assessed for total threshold shift at 1, 2, 3, 4 and 6 kHz and the prevalence of referral (within population 5th centile) or warning levels (within population 20th centile) of hearing loss. Audiograms were assessed for an NIHL pattern at four levels by two occupational physicians. Hearing loss symptoms were commoner in divers at all levels of hearing loss regardless of differences between groups on audiometry. Hearing loss in offshore workers was within the population age-adjusted norm. Thirteen per cent of divers were within the 5th percentile for threshold shift for the population norm in contrast to 4% of offshore workers and this was predominantly left sided (OR 3.16, 95% CI 1.13-8.93). This difference was lost after adjustment for history of regular exposure to explosion or gunfire. Divers were more likely to have a pattern of severe NIHL on the left (OR 4.61, 95% CI 1.39-15.39, P < 0.05). Approximately 50% of participants with severe NIHL did not have a referral level of hearing loss. Divers suffer more NIHL than a control population. Current guidance on the assessment of total threshold shift for the detection of significant NIHL was inadequate in the sample studied.

  14. Speech Auditory Brainstem Response through hearing aid stimulation.

    Science.gov (United States)

    Bellier, Ludovic; Veuillet, Evelyne; Vesson, Jean-François; Bouchet, Patrick; Caclin, Anne; Thai-Van, Hung

    2015-07-01

    Millions of people across the world are hearing impaired and rely on hearing aids to improve their everyday life. Objective audiometry could optimize hearing aid fitting, and is of particular interest for non-communicative patients. The speech Auditory Brainstem Response (speech ABR), a fine electrophysiological marker of speech encoding, is presently seen as a promising candidate for implementing objective audiometry; yet, unlike lower-frequency auditory-evoked potentials (AEPs) such as cortical AEPs or auditory steady-state responses (ASSRs), aided speech ABRs (i.e., speech ABRs through hearing aid stimulation) have almost never been recorded. This may be due to their high-frequency components requiring a high temporal precision of the stimulation. We assess here a new approach to record high-quality, artifact-free speech ABR while stimulating directly through hearing aids. In 4 normal-hearing adults, we recorded speech ABR evoked by a /ba/ syllable binaurally delivered through insert earphones (for quality control) or through hearing aids. To assess the presence of a potential stimulus artifact, recordings were also made in mute conditions with exactly the same potential sources of stimulus artifacts as in the main runs. Hearing aid stimulation led to artifact-free speech ABR in each participant, with the same quality as when using insert earphones, as shown by signal-to-noise ratio (SNR) measurements. Our new approach of directly transmitting speech stimuli through hearing aids allowed the perfect temporal precision mandatory in speech ABR recordings, and could thus constitute a decisive step in hearing impairment investigation and in hearing aid fitting improvement. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Speech perception performance of subjects with type I diabetes mellitus in noise

    Directory of Open Access Journals (Sweden)

    Bárbara Cristiane Sordi Silva

    Full Text Available Abstract Introduction: Diabetes mellitus (DM) is a chronic metabolic disorder of various origins that occurs when the pancreas fails to produce insulin in sufficient quantities or when the organism fails to respond to this hormone in an efficient manner. Objective: To evaluate speech recognition in subjects with type I diabetes mellitus (DMI) in quiet and in competitive noise. Methods: This was a descriptive, observational and cross-sectional study. We included 40 participants of both genders aged 18-30 years, divided into a control group (CG) of 20 healthy subjects with no complaints or auditory changes, paired for age and gender with the study group, consisting of 20 subjects with a diagnosis of DMI. First, we applied basic audiological evaluations (pure tone audiometry, speech audiometry and immittance audiometry) for all subjects; after these evaluations, we applied the Sentence Recognition Threshold in Quiet (SRTQ) and the Sentence Recognition Threshold in Noise (SRTN) in free field, using the List of Sentences in Portuguese test. Results: All subjects showed normal bilateral pure tone thresholds, compatible speech audiometry and a type 'A' tympanometry curve. Group comparison revealed a statistically significant difference for SRTQ (p = 0.0001), SRTN (p < 0.0001) and the signal-to-noise ratio (p < 0.0001). Conclusion: The performance of DMI subjects in SRTQ and SRTN was worse compared to the subjects without diabetes.

  16. Extended high-frequency audiometry in subjects exposed to occupational noise.

    Science.gov (United States)

    Korres, G S; Balatsouras, D G; Tzagaroulakis, A; Kandiloros, D; Ferekidis, E

    2008-01-01

    The aim of this study was to evaluate hearing in a population of industrial workers exposed to occupational noise by using both conventional and extended high-frequency (EHF) audiometry, and to compare the results with findings from a control group. A total of 139 industry workers exposed to noise were examined over a period of two years, and 32 healthy subjects were used as controls. Conventional audiometry in the frequency range 0.25-8 kHz and EHF audiometry in the frequency range 9-20 kHz were performed. Thresholds in the noise-exposed group were higher than in the control group for both standard and extended high frequencies, but variability was greater in EHF. Larger differences were found in the 4,000-18,000 Hz frequency region, and especially in the 12,500-18,000 Hz zone. A statistically significant correlation between the elevation of pure-tone thresholds and time of exposure was found across all frequencies (from 250 to 20,000 Hz), with the exception of 10,000 Hz. EHF audiometry is a useful adjunct to conventional audiometry in the audiological assessment of subjects exposed to occupational noise. The test performs well in the frequency range 12,500-18,000 Hz, but there is greater variability in the results compared with conventional audiometry.

  17. Evoked response audiometry in scrub typhus: prospective, randomised, case-control study.

    Science.gov (United States)

    Thakur, J S; Mohindroo, N K; Sharma, D R; Soni, K; Kaushal, S S

    2011-06-01

    To investigate the hypothesis of cochlear and retrocochlear damage in scrub typhus using evoked response audiometry. Prospective, randomised, case-control study. The study included 25 patients with scrub typhus and 25 controls with other febrile illnesses not known to cause hearing loss. Controls were age- and sex-matched. All subjects underwent pure tone audiometry and evoked response audiometry before commencing treatment. Six patients presented with hearing loss, although a total of 23 patients had evidence of symmetrical high-frequency loss on pure tone audiometry. Evoked response audiometry found significant prolongation of the absolute latencies of waves I, III and V, and of the wave I-III interpeak latency. Two cases with normal hearing had increased interpeak latencies. These findings constitute level 3b evidence. The findings were suggestive of retrocochlear pathology in the two cases with normal hearing; in the other patients, high-frequency hearing loss may have led to altered evoked response results. Although scrub typhus appears to cause middle ear, cochlear and retrocochlear damage, the presence of such damage could not be fully confirmed by evoked response audiometry.

  18. Extended high-frequency audiometry (9,000-20,000 Hz). Usefulness in audiological diagnosis.

    Science.gov (United States)

    Rodríguez Valiente, Antonio; Roldán Fidalgo, Amaya; Villarreal, Ithzel M; García Berrocal, José R

    2016-01-01

    Early detection and appropriate treatment of hearing loss are essential to minimise its consequences. In addition to conventional audiometry (125-8,000 Hz), extended high-frequency audiometry (9,000-20,000 Hz) is available. This type of audiometry may be useful for the early diagnosis of hearing loss in certain conditions, such as the ototoxic effect of cisplatin-based treatment, noise exposure, or poor oral comprehension, especially in noisy environments. Eleven examples are shown in which extended high-frequency audiometry was useful in the early detection of hearing loss despite the subject having a normal conventional audiogram. The goal of the present paper is to highlight the importance of the extended high-frequency audiometry examination so that it may become a standard tool in routine audiological examinations. Copyright © 2015 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  19. Noise induced hearing loss: Screening with pure-tone audiometry and speech-in-noise testing

    NARCIS (Netherlands)

    Leensen, M.C.J.

    2013-01-01

    Noise-induced hearing loss (NIHL) is a highly prevalent public health problem, caused by exposure to loud noises both during leisure time, e.g. by listening to loud music, and during work. In the past years NIHL was the most commonly reported occupational disease in the Netherlands. Hearing damage

  20. Pure-tone and speech audiometry in patients with Meniere's disease

    NARCIS (Netherlands)

    Mateijsen, DJM; Van Hengel, PWJ; Van Huffelen, WM; Wit, HP; Albers, FWJ

    2001-01-01

    The aim of this study was to reinvestigate many of the claims in the literature about hearing loss in patients with Meniere's disease. We carried this out on a well-defined group of patients under well-controlled circumstances. Thus, we were able to find support for some claims and none for many

  1. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  2. Early diagnosis of hearing loss: otoacoustic emissions evoked by distortion products and pure-tone audiometry: Preliminary findings.

    Science.gov (United States)

    Capozzella, A; Loreti, B; Sacco, C; Casale, T; Pimpinella, B; Andreozzi, G; Bernardini, A; Nieto, H A; Scala, B; Schifano, M P; Bonomi, S; Altissimi, G; De Sio, S; Cianfrone, G; Tomei, F; Rosati, M V; Sancini, A

    Studies in the literature underline the effectiveness of distortion product otoacoustic emissions (DPOAEs), which are not affected by the collaboration of the subject examined, in the early diagnosis of hearing loss. The aim of the study is to compare the objective technique of DPOAEs with pure-tone audiometry in the early diagnosis of hearing loss. The clinical research was carried out on 852 workers. All subjects underwent pure-tone audiometry, tympanometry and distortion product testing. The results show: a) a prevalence of subjects with impaired DPOAEs higher than the prevalence of subjects with impaired audiometries in the studied sample; and, after division by gender: b) a prevalence of subjects with impaired DPOAEs higher than the prevalence of subjects with impaired audiometries only in men; c) a prevalence of impaired DPOAEs and of impaired audiometries higher in men than in women. The results suggest a higher effectiveness of DPOAEs compared to pure-tone audiometry in making an early diagnosis of hearing loss.

  3. Mobile tablet audiometry in fluctuating autoimmune ear disease.

    Science.gov (United States)

    Kohlert, Scott; Bromwich, Matthew

    2017-03-07

    Autoimmune inner ear disease (AIED) is a rare condition characterized by bilateral fluctuating sensorineural hearing loss (SNHL). The labile nature of this hearing loss makes it difficult to accurately quantify with conventional methods, and therefore it is challenging to rehabilitate. Over a 9-month period one pediatric patient with severe AIED was monitored and conducted home audiograms using a previously validated testing system (Shoebox Audiometry). During this period he also underwent several clinical audiograms. The correlation between clinical and home audiograms was analyzed with a Pearson coefficient, and the range and frequency of fluctuations was recorded. Sixty-four automated home audiograms and nine clinical audiograms were conducted. When tested at home using a calibrated system the pure tone average (PTA) fluctuated between 12 dB and 72 dB indicating large variability in hearing. Fluctuations were frequent: on 28 occasions the PTA varied by at least 5 dB when retested within 4 days. The mean PTA was 50 dB and 95% of the thresholds were within 36 dB of the mean. Clinical audiograms obtained on the same day or within 1 day of home testing were highly concordant (with a Pearson coefficient of 0.93). AIED can result in significant fluctuations in hearing over short periods of time. Home testing enables a more granular look at variations over time and correlates well with clinical testing, and thus facilitates rapid action and informed rehabilitation.
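
    The concordance figure quoted above is a plain Pearson correlation between paired measurements; a minimal sketch with hypothetical same-day pure tone averages follows.

    ```python
    import math, statistics

    def pearson_r(x, y):
        """Pearson correlation between paired measurements."""
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                               * sum((b - my) ** 2 for b in y))

    # Hypothetical same-day pure tone averages (dB HL): home app vs. clinic booth.
    home   = [48, 55, 32, 60, 41, 70, 25, 52, 63]
    clinic = [50, 57, 35, 58, 44, 68, 28, 55, 60]
    print(f"r = {pearson_r(home, clinic):.2f}")
    ```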

  4. Extended high frequency audiometry in polycystic ovary syndrome.

    Science.gov (United States)

    Kucur, Cuneyt; Kucur, Suna Kabil; Gozukara, Ilay; Seven, Ali; Yuksel, Kadriye Beril; Keskin, Nadi; Oghan, Fatih

    2013-01-01

    Polycystic ovarian syndrome (PCOS) is the most common endocrine disorder, affecting 5-10% of women of reproductive age. Insulin resistance, dyslipidemia, glucose intolerance, hypertension, and obesity are metabolic disorders accompanying the syndrome. PCOS is a chronic proinflammatory state and the disease is associated with endothelial dysfunction. In diseases with endothelial damage, hearing at high frequencies is mostly affected in the early stages. We evaluated extended high frequency hearing loss in PCOS patients. Forty women diagnosed with PCOS and 25 healthy controls were included in this study. Age and BMI of the PCOS and control groups were comparable. Each subject was tested with low (250-2,000 Hz), high (4,000-8,000 Hz), and extended high frequency (8,000-20,000 Hz) audiometry. Hormonal and biochemical values including LH, LH/FSH, testosterone, fasting glucose, fasting insulin, HOMA-I, and CRP were calculated. PCOS patients showed high levels of LH, LH/FSH, testosterone, fasting insulin, glucose, HOMA-I, and CRP. The hearing thresholds of the groups were similar at frequencies of 250, 500, 1000, 2000, and 4000 Hz; a statistically significant difference was observed at 8,000-14,000 Hz in the PCOS group compared to the control group. PCOS patients have hearing impairment, especially at extended high frequencies. Further studies are needed to help elucidate the mechanism behind hearing impairment in association with PCOS.

  5. Pure tone audiometry and impedance screening of school entrant children by nurses: evaluation in a practical setting.

    OpenAIRE

    Holtby, I; Forster, D P; Kumar, U

    1997-01-01

    BACKGROUND: Screening for hearing loss in English children at entry to school (age 5-6 years) is usually by pure tone audiometry sweep undertaken by school nurses. This study aimed to compare the validity and screening rates of pure tone audiometry with impedance screening in these children. METHODS: Two stage pure tone audiometry and impedance methods of screening were compared in 610 school entry children from 19 infant schools in north east England. Both procedures were completed by school...

  6. Comparison of distortion product otoacoustic emissions and pure tone audiometry in occupational screening for auditory deficit due to noise exposure.

    Science.gov (United States)

    Wooles, N; Mulheran, M; Bray, P; Brewster, M; Banerjee, A R

    2015-12-01

    To examine whether distortion product otoacoustic emissions can serve as a replacement for pure tone audiometry in longitudinal screening for occupational noise exposure related auditory deficit. A retrospective review was conducted of pure tone audiometry and distortion product otoacoustic emission data obtained sequentially during mandatory screening of brickyard workers (n = 16). Individual pure tone audiometry thresholds were compared with distortion product otoacoustic emission amplitudes, and a correlation of these measurements was conducted. Pure tone audiometry threshold elevation was identified in 13 out of 16 workers. When distortion product otoacoustic emission amplitudes were compared with pure tone audiometry thresholds at matched frequencies, no evidence of a robust relationship was apparent. Seven out of 16 workers had substantial distortion product otoacoustic emissions with elevated pure tone audiometry thresholds. No clinically relevant predictive relationship between distortion product otoacoustic emission amplitude and pure tone audiometry threshold was apparent. These results do not support the replacement of pure tone audiometry with distortion product otoacoustic emissions in screening. Distortion product otoacoustic emissions at frequencies associated with elevated pure tone audiometry thresholds are evidence of intact outer hair cell function, suggesting that sites distinct from these contribute to auditory deficit following ototrauma.

  7. Diagnosis of hearing loss using automated audiometry in an asynchronous telehealth model: A pilot accuracy study.

    Science.gov (United States)

    Brennan-Jones, Christopher G; Eikelboom, Robert H; Swanepoel, De Wet

    2017-02-01

    Introduction Standard criteria exist for diagnosing different types of hearing loss, yet audiologists interpret audiograms manually. This pilot study examined the feasibility of standardised interpretations of audiometry in a telehealth model of care. The aim of this study was to examine the diagnostic accuracy of automated audiometry in adults with hearing loss in an asynchronous telehealth model using pre-defined diagnostic protocols. Materials and methods We recruited 42 study participants from a public audiology and otolaryngology clinic in Perth, Western Australia. Manual audiometry was performed by an audiologist either before or after automated audiometry. Diagnostic protocols were applied asynchronously for normal hearing, disabling hearing loss, conductive hearing loss and unilateral hearing loss. Sensitivity and specificity analyses were conducted using a two-by-two matrix, and Cohen's kappa was used to measure agreement. Results The overall sensitivity for the diagnostic criteria was 0.88 (range: 0.86-1) and overall specificity was 0.93 (range: 0.86-0.97). Overall kappa (k) agreement was 'substantial', k = 0.80 (95% confidence interval (CI) 0.70-0.89), and significant. Discussion Pre-defined diagnostic protocols applied asynchronously to automated audiometry provide accurate identification of disabling, conductive and unilateral hearing loss. This method has the potential to improve synchronous and asynchronous tele-audiology service delivery.
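
    For illustration, the sensitivity/specificity computation from a two-by-two matrix looks like the sketch below; the counts are hypothetical, not the study's data.

    ```python
    def sens_spec(tp, fp, fn, tn):
        """Sensitivity and specificity of a protocol-based diagnosis versus the
        reference (manual) diagnosis, from a two-by-two matrix."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts for one criterion (e.g., conductive hearing loss).
    sensitivity, specificity = sens_spec(tp=15, fp=2, fn=2, tn=23)
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
    ```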

  8. Correlation of the CT analysis and audiometry in otosclerosis

    Energy Technology Data Exchange (ETDEWEB)

    Kiyomizu, Kensuke; Tono, Tetsuya; Yang, Dewen; Haruta, Atsushi; Kodama, Takao; Kato, Eiji; Komune, Shizuo [Miyazaki Medical Coll., Kiyotake (Japan)

    1998-11-01

    Thirty-three patients (62 ears) with surgically confirmed otosclerosis underwent a preoperative CT examination in order to determine the presence of any correlation between the audiometric and CT findings. Based on the CT findings, the ears were classified into five groups as follows: group A, 25 ears (40.3%) with normal CT findings; group B1, 15 ears (24.2%) with demineralization in the region of the fissula ante fenestram; group B2, 12 ears (19.4%) with demineralization in the area anterior to the oval window; group B3, 4 ears (6.5%) with demineralization surrounding the cochlea; and group C, 6 ears (9.7%) with thick anterior and posterior plaques. The expansion of demineralization led to an increase in the average bone conduction hearing level: group A, 27.1 dB; group B1, 30.6 dB; group B2, 34.6 dB; group B3, 36.7 dB; and group C, 30.3 dB. This increase is most likely due to progressive labyrinthine otosclerosis. The average air-bone gap in group C (37.5 dB) was greater than in the groups with demineralization, group B1 (21.6 dB), group B2 (28.2 dB) and group B3 (26.7 dB), while the Carhart effect of group C was smaller than that of any other group, suggesting that the mode of otosclerosis progression in group C differs from that in patients with demineralization. The results of the present study indicate that the preoperative CT findings of otosclerosis correlate with the audiometry findings, thus proving the usefulness of CT in diagnosing otosclerosis. (author)

  9. Speech Development

    Science.gov (United States)


  10. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, Venice Biennale 2011.

  11. Speech-to-Speech Relay Service

    Science.gov (United States)

    Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  12. The relationship between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions.

    Science.gov (United States)

    Keppler, H; Degeest, S; Dhooge, I

    2017-11-01

    Chronic tinnitus is associated with reduced auditory input, which results in changes in the central auditory system. This study aimed to examine the relationship between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions. For audiometry, the parameters represented the edge frequency of hearing loss, the frequency of maximum hearing loss and the frequency range of hearing loss. For distortion product otoacoustic emissions, the parameters were the frequency of lowest distortion product otoacoustic emission amplitudes and the frequency range of reduced distortion product otoacoustic emissions. Sixty-seven patients (45 males, 22 females) with subjective chronic tinnitus, aged 18 to 73 years, were included. No correlation was found between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions. However, tinnitus pitch fell mostly within the frequency range of hearing loss. The current study seems to confirm the relationship between tinnitus pitch and the frequency range of hearing loss, thus supporting the homeostatic plasticity model.

  13. Symptom reporting compared with audiometry for the detection of cochleotoxicity in patients on long-term aminoglycoside therapy.

    Science.gov (United States)

    Palmay, Lesley; Walker, Sandra A N; Walker, Scott E; Simor, Andrew E

    2011-05-01

    Aminoglycoside-associated auditory toxicity (cochleotoxicity) is a major concern in patients receiving prolonged aminoglycoside therapy. There are no published data comparing symptom monitoring to audiometry testing for the detection of aminoglycoside-induced cochleotoxicity; thus, agreement regarding the optimal monitoring of these patients for early detection of this effect is lacking. The objective was to compare the sensitivity of symptom monitoring with that of audiometry in identifying cochleotoxicity in patients on prolonged aminoglycoside therapy. A retrospective chart review was conducted of adult inpatients at Sunnybrook Health Sciences Centre prescribed prolonged aminoglycoside therapy (≥21 days) who completed at least one audiometry test between January 1, 1999, and December 31, 2009. Data pertaining to the results of audiometry testing and the development of symptoms of auditory toxicity were collected, and symptom monitoring was compared with audiology testing for the detection of cochleotoxicity. Forty eligible patients were included in the analysis. Audiometry was significantly better than symptom monitoring at identifying early cochleotoxicity (absolute risk reduction = 17.5%, number needed to treat = 6; p = 0.023). Compared to audiometry, symptom monitoring has a sensitivity, negative predictive value, and accuracy for the detection of early cochleotoxicity of 61%, 75%, and 82%, respectively. Audiometry testing is significantly better than symptom monitoring at identifying early aminoglycoside-induced auditory toxicity in patients prescribed prolonged aminoglycoside therapy (≥21 days). Subclinical cochleotoxicity identified with audiometry may allow early termination of aminoglycoside therapy to prevent progression of cochlear damage into the audible frequency range.

  14. Unwanted sounds generated with test tone presentation can spoil extended high-frequency audiometry.

    Science.gov (United States)

    Kurakata, Kenji; Mizunami, Tazu; Matsushita, Kazuma; Shiraishi, Kimio

    2010-10-01

    Unwanted sounds from a commercially available audiometer were evaluated in terms of their effects on extended high-frequency (EHF) audiometry. Although the manufacturer reported that the audiometer conformed to the relevant International Electrotechnical Commission (IEC) standards, the audiograms obtained with it were erroneous because subjects responded falsely to noise generated with the test-tone presentation before detecting the test tone itself. Analyses of acoustic and electric output signals revealed that most of the unwanted sounds were generated by the audiometer, not by the earphones used. Based on these findings, clinical implications for conducting more reliable EHF audiometry are discussed.

  15. A Low Cost Setup for Behavioral Audiometry in Rodents

    Science.gov (United States)

    Tziridis, Konstantin; Ahlf, Sönke; Schulze, Holger

    2012-01-01

    In auditory animal research it is crucial to have precise information about basic hearing parameters of the animal subjects that are involved in the experiments. Such parameters may be physiological response characteristics of the auditory pathway, e.g. via brainstem audiometry (BERA). But these methods allow only indirect and uncertain extrapolations about the auditory percept that corresponds to these physiological parameters. To assess the perceptual level of hearing, behavioral methods have to be used. A potential problem with the use of behavioral methods for the description of perception in animal models is the fact that most of these methods involve some kind of learning paradigm before the subjects can be tested behaviorally, e.g. animals may have to learn to press a lever in response to a sound. As these learning paradigms change perception itself [1,2], they will consequently influence any result about perception obtained with these methods and therefore have to be interpreted with caution. Exceptions are paradigms that make use of reflex responses, because here no learning paradigms have to be carried out prior to perceptual testing. One such reflex response is the acoustic startle response (ASR), which can be elicited highly reproducibly with unexpected loud sounds in naïve animals. This ASR in turn can be influenced by preceding sounds, depending on the perceptibility of the preceding stimulus: sounds well above hearing threshold will completely inhibit the amplitude of the ASR; sounds close to threshold will only slightly inhibit it. This phenomenon is called pre-pulse inhibition (PPI) [3,4], and the amount of PPI of the ASR depends gradually on the perceptibility of the pre-pulse. PPI of the ASR is therefore well suited to determine behavioral audiograms in naïve, non-trained animals, to determine hearing impairments or even to detect possible subjective tinnitus percepts in these animals. In this paper we demonstrate the use of this method in a
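
    The threshold-estimation logic behind PPI audiometry can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' implementation: it computes PPI as the percent reduction of the startle amplitude and interpolates the pre-pulse level at which PPI crosses a criterion. The 20% criterion and all data values are illustrative assumptions.

```python
import numpy as np

def ppi_percent(asr_with_prepulse, asr_alone):
    """Pre-pulse inhibition as percent reduction of the startle amplitude."""
    return 100.0 * (1.0 - asr_with_prepulse / asr_alone)

def threshold_from_ppi(levels_db, ppi_values, criterion=20.0):
    """Interpolate the lowest pre-pulse level whose PPI reaches a criterion.
    The 20% criterion is an illustrative assumption, not a published value."""
    levels = np.asarray(levels_db, dtype=float)
    ppi = np.asarray(ppi_values, dtype=float)
    for i in range(len(levels) - 1):
        if ppi[i] < criterion <= ppi[i + 1]:
            frac = (criterion - ppi[i]) / (ppi[i + 1] - ppi[i])
            return levels[i] + frac * (levels[i + 1] - levels[i])
    return float("nan")  # criterion never crossed in the tested range

# Hypothetical mean startle amplitudes at one pre-pulse frequency
levels = [10, 20, 30, 40, 50]            # pre-pulse level, dB SPL
asr_alone = 1.00                         # startle without a pre-pulse
asr_pp = [0.98, 0.90, 0.70, 0.45, 0.30]  # startle with a pre-pulse
ppi = [ppi_percent(a, asr_alone) for a in asr_pp]
print(threshold_from_ppi(levels, ppi))   # -> 25.0, the interpolated threshold
```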

  16. Development and evaluation of a computerized Mandarin speech test system in China.

    Science.gov (United States)

    Wu, Wufang; Zhang, Hua; Chen, Jing; Chen, Jianyong; Lin, Changyan

    2011-03-01

    This study reports the development and evaluation of a Computerized Mandarin Speech Test System (CMSTS). Taking into account the rules for developing speech materials and the unique linguistic characteristics of Mandarin, we designed and digitally recorded a set of materials comprising seven lists of monosyllabic words, nine lists of disyllabic words, and fifteen lists of sentences with a high degree of subject familiarity. The CMSTS was developed with Visual Studio 2008, Access 2003 and DirectX 9. The system included five functions: listener management, a speech test, list management, data management, and system settings. We used the system to measure the speech recognition threshold (SRT) of 76 participants with normal hearing (age range: 20-28 years), and measured performance-intensity (PI) functions for all stimuli. The SRT results were in accord with thresholds obtained by pure-tone audiometry. In a speech recognition score (SRS) test, changing the presentation level had the strongest effect on sentence recognition, followed by disyllabic words; monosyllabic words were least affected by changes in presentation level. The slopes of the linear portion of the PI functions obtained with the system were in accord with the findings of previous studies using audiometers and CDs with similar materials. The CMSTS has sufficient sensitivity, and can facilitate the wider use of speech audiometry in Chinese audiology clinics. Copyright © 2011 Elsevier Ltd. All rights reserved.
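
    The performance-intensity (PI) slopes reported above are conventionally obtained by fitting a sigmoid to recognition scores measured at several presentation levels. The sketch below shows one common way to do this; the logistic parameterization, starting values, and data are illustrative assumptions, not the CMSTS procedure itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_function(level_db, srt, slope50):
    """Logistic performance-intensity function.
    srt     : level (dB) at 50% recognition
    slope50 : slope at the 50% point, as a proportion per dB
    """
    return 1.0 / (1.0 + np.exp(-4.0 * slope50 * (level_db - srt)))

# Hypothetical word-recognition scores (proportions) at several levels
levels = np.array([20, 25, 30, 35, 40, 45], dtype=float)  # dB HL
scores = np.array([0.05, 0.18, 0.45, 0.74, 0.92, 0.98])

(srt, slope50), _ = curve_fit(pi_function, levels, scores, p0=(30.0, 0.04))
print(f"SRT = {srt:.1f} dB; slope = {100 * slope50:.1f} %/dB at the midpoint")
```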

  17. Evaluation of pure tone audiometry and impedance screening in infant schoolchildren.

    Science.gov (United States)

    Holtby, I; Forster, D P

    1992-01-01

    STUDY OBJECTIVE--The aims were (1) to evaluate impedance measurements against pure tone audiometry as a screening method for the detection of middle ear changes associated with hearing loss in infant school children; (2) to estimate the costs to the health authority of each method. DESIGN--The study involved two stage screening in which both methods were offered, pure tone audiometry being carried out by school nurses and impedance screening by a doctor. SETTING--18 infant or primary schools in Langbaurgh, Cleveland, UK. PARTICIPANTS--610 previously unscreened infant school children took part in the study. MEASUREMENTS AND MAIN RESULTS--Main outcome measures were the sensitivity, specificity, and predictive value of each screening method, using clinical assessment and action as the validating technique. The sensitivity and the predictive value of a positive test in two stage impedance screening were markedly superior to those of pure tone audiometry; the specificity was similar for the two methods. In addition, the impedance method was more rapid and was estimated to consume fewer resources as a screening procedure than pure tone audiometry. CONCLUSIONS--The superiority of impedance screening established in this study should be confirmed in a subsequent audit carried out purely by school nurses. PMID:1573355

  18. Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry

    Science.gov (United States)

    Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.

    2015-01-01

    Objective Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469
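
    The study's argument turns on whether air-bone gap (ABG) distributions are normal. A minimal sketch of that check on synthetic data follows; the simulated thresholds and the choice of the D'Agostino-Pearson omnibus test are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-ear thresholds at one frequency (dB HL)
air = rng.normal(20.0, 10.0, size=5000)
bone = air - rng.normal(0.0, 6.0, size=5000)  # true ABG centered on 0 dB

abg = air - bone                 # air-bone gap, dB
stat, p = stats.normaltest(abg)  # D'Agostino-Pearson omnibus test
print(f"mean ABG = {abg.mean():.1f} dB, SD = {abg.std(ddof=1):.1f} dB, p = {p:.2f}")
# A small p-value flags a departure from normality -- the kind of distortion
# the study attributes to tester bias in manual audiometry.
```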

  19. Long-term measurements using home audiometry with Békésy's technique.

    Science.gov (United States)

    Brännström, K Jonas; Grenner, Jan

    2017-03-01

    To examine the efficacy of fixed-frequency Békésy home audiometry in assessing hearing fluctuation and treatment outcomes in patients with subjectively fluctuating hearing loss. SMAPH, a software audiometry program for Windows, was installed and calibrated on laptop computers. Békésy audiometry was carried out daily in the patients' homes, using sound-attenuating earphones. Seventeen patients with previously or currently subjectively fluctuating hearing loss participated; five received treatment for their conditions during the measurement period. Measurement periods ranged from 6 to 60 days. Varying degrees of compliance were seen, some patients measuring on fewer than 50% of the days, others measuring every day. Based on their long-term measurements, the patients were classified into three groups: those with stable recordings, those with fluctuating low-frequency hearing loss, and those with fluctuating high-frequency hearing loss. In the patients with stable recordings, test-retest differences remained below 10 dB at frequencies of 0.125-8 kHz. Home audiometry with Békésy's technique can be used to evaluate disease activity and to monitor hearing results after therapy.

  20. The role of high-frequency audiometry in early detection of ototoxicity

    NARCIS (Netherlands)

    Dreschler, W. A.; vd Hulst, R. J.; Tange, R. A.; Urbanus, N. A.

    1985-01-01

    Ototoxicity is one of the unwanted side-effects of a number of medical drugs. As ototoxicity appears to be most pronounced in the higher frequencies, it can be assessed at an earlier stage by using high-frequency audiometry from 8 to 20 kHz. We have investigated the precision of these measurements.

  1. Overall versus individual changes for otoacoustic emissions and audiometry in a noise-exposed cohort.

    Science.gov (United States)

    Helleman, Hiske W; Dreschler, Wouter A

    2012-05-01

    For a noise-exposed group of workers, group-averaged and individual changes were compared for pure-tone audiometry, transient-evoked otoacoustic emissions (TEOAEs), and distortion product otoacoustic emissions (DPOAEs) in order to see if they exhibit the same pattern in time. Baseline and 17-months follow-up hearing status was examined with pure-tone audiometry, TEOAEs, and DPOAEs. A total of 233 noise-exposed employees were measured, while 60 subjects from this group contributed to test-retest reliability measures. Group-averaged changes and individual shifts followed similar patterns: decreases for audiometry at 6-8 kHz and DPOAE at 1.5 kHz, and enhancements for DPOAE at 3 kHz. TEOAEs showed an overall deterioration while both individual deteriorations and enhancements were larger than chance. DPOAE at 6 kHz showed the largest group-averaged change, while the number of individual shifts was not significant. There were no clear relations between changes in audiometry and changes in OAE. Significant individual OAE changes do not necessarily follow the same pattern as the group-averaged results. This limits the applicability of OAE testing for the monitoring of individual subjects. Furthermore, hearing deterioration might manifest itself in a local enhancement of otoacoustic emissions and not only in the form of decreases in amplitude.
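
    Deciding whether an individual shift is "larger than chance" is commonly done by comparing it against a critical difference derived from the test-retest standard deviation. The sketch below shows that convention; the sqrt(2) factor assumes equal, independent measurement error on both occasions, and the 2 dB example value is hypothetical, not taken from the study.

```python
import math

def critical_difference(sd_test_retest, z=1.96):
    """Smallest individual change unlikely to be test-retest noise alone.
    The SD of the difference of two equally noisy measurements is
    sqrt(2) times the test-retest SD (a common convention)."""
    return z * math.sqrt(2.0) * sd_test_retest

# Hypothetical test-retest SD of 2 dB for a DPOAE amplitude
print(f"{critical_difference(2.0):.1f} dB")  # -> 5.5 dB change required
```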

  2. High frequency audiometry in prospective clinical research of ototoxicity due to platinum derivatives

    NARCIS (Netherlands)

    van der Hulst, R. J.; Dreschler, W. A.; Urbanus, N. A.

    1988-01-01

    The results of clinical use of routine high frequency audiometry in monitoring the ototoxic side effects of platinum and its derivatives are described in this prospective study. After demonstrating the reproducibility of the technique, we discuss the first results of an analysis of ototoxic side

  3. Distribution Characteristics of Air-Bone Gaps: Evidence of Bias in Manual Audiometry.

    Science.gov (United States)

    Margolis, Robert H; Wilson, Richard H; Popelka, Gerald R; Eikelboom, Robert H; Swanepoel, De Wet; Saly, George L

    2016-01-01

    Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient's hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps.

  4. Pre- and postoperative high-frequency audiometry in otosclerosis. A study of 53 cases

    NARCIS (Netherlands)

    Tange, R. A.; Dreschler, W. A.

    1990-01-01

    A study was carried out to evaluate the results of stapes surgery in 53 cases of otosclerosis. The hearing function was measured pre- and postoperatively by means of conventional and high-frequency audiometry (Demlar 20K). The operative findings of the gradation of otosclerosis were compared with

  5. Identification Audiometry in an Institutionalized Severely and Profoundly Mentally Retarded Population.

    Science.gov (United States)

    Moore, Ernest J.; And Others

    An audiometric screening survey was conducted on a severely and profoundly mentally retarded population using noise-makers and pure tone audiometry. Of those tested with noise-makers, 83% gave an identifiable response to sound, 7% did not respond, and 10% were considered difficult-to-test. By contrast, 4% passed, 2% failed, and 94% were…

  6. Overall versus individual changes for otoacoustic emissions and audiometry in a noise-exposed cohort

    NARCIS (Netherlands)

    Helleman, Hiske W.; Dreschler, Wouter A.

    2012-01-01

    Objective: For a noise-exposed group of workers, group-averaged and individual changes were compared for pure-tone audiometry, transient-evoked otoacoustic emissions (TEOAEs), and distortion product otoacoustic emissions (DPOAEs) in order to see if they exhibit the same pattern in time. Design:

  7. Diagnostic Hearing Assessment in Schools: Validity and Time Efficiency of Automated Audiometry.

    Science.gov (United States)

    Mahomed-Asmail, Faheema; Swanepoel, De Wet; Eikelboom, Robert H

    2016-01-01

    Poor follow-up compliance typically undermines the efficacy of school-based hearing screening programs. Onsite diagnostic audiometry with automation may reduce false positives and ensure directed referrals. To investigate the validity and time efficiency of automated diagnostic air- and bone-conduction audiometry for children in a natural school environment following hearing screening, a within-subject repeated measures design was employed to compare air- and bone-conduction pure-tone thresholds (0.5-4 kHz) measured by manual and automated pure-tone audiometry. Sixty-two children, 25 males and 37 females, with an average age of 8 yr (standard deviation [SD] = 0.92; range = 6-10 yr) were recruited for this study. The participants included 30 children who failed a hearing screening and 32 children who passed. Threshold comparisons were made for air- and bone-conduction thresholds across ears tested with manual and automated audiometry. To avoid a floor effect, thresholds of 15 dB HL were excluded from the analyses. The Wilcoxon signed-rank test was used to compare threshold correspondence for manual and automated thresholds, and the paired-samples t-test was used to compare test time. Statistical significance was set at p ≤ 0.05. 85.7% of air-conduction thresholds and 44.6% of bone-conduction thresholds corresponded within the normal range (15 dB HL) for manual and automated audiometry. Both manual and automated air- and bone-conduction thresholds exceeded 15 dB HL in 9.9% and 34.0% of thresholds, respectively. For these thresholds, the average absolute differences for air and bone conduction were 6.3 dB (SD = 8.3) and 2.2 dB (SD = 3.6), and they corresponded within 10 dB across frequencies in 87.7% and 100.0% of cases, respectively. There was no significant difference between manual and automated air- and bone-conduction thresholds across frequencies. Using onsite automated diagnostic audiometry

  8. Validity of diagnostic computer-based air and forehead bone conduction audiometry.

    Science.gov (United States)

    Swanepoel, De Wet; Biagio, Leigh

    2011-04-01

    Computer-based audiometry allows for novel applications, including remote testing and automation, that may improve the accessibility and efficiency of hearing assessment in various clinical and occupational health settings. This study describes the validity of computer-based, diagnostic air and forehead bone conduction audiometry when compared with conventional industry standard audiometry in a sound booth environment. A sample of 30 subjects (19 to 77 years of age) was assessed with computer-based (KUDUwave 5000) and industry standard conventional (GSI 61) audiometers to compare air and bone conduction thresholds and test-retest reliability. Air conduction thresholds for the two audiometers corresponded within 5 dB or less in more than 90% of instances, with an average absolute difference of 3.5 dB (SD 3.8) and a 95% confidence interval of 2.6 to 4.5 dB. Bone conduction thresholds for the two audiometers corresponded within 10 dB or less in 92% of instances, with an average absolute difference of 4.9 dB (SD 4.9) and a 95% confidence interval of 3.6 to 6.1 dB. The average absolute test-retest threshold difference for bone conduction was 5.1 dB (SD 5.3) on the industry standard audiometer and 7.1 dB (SD 6.4) on the computer-based audiometer. Computer-based audiometry provided air and bone conduction thresholds within the test-retest reliability limits of industry standard audiometry.

  9. High-frequency Audiometry Hearing on Monitoring of Individuals Exposed to Occupational Noise: A Systematic Review

    Science.gov (United States)

    Antonioli, Cleonice Aparecida Silva; Momensohn-Santos, Teresa Maria; Benaglia, Tatiana Aparecida Silva

    2015-01-01

    Introduction: The literature reports on high-frequency audiometry as one of the examinations used in the hearing monitoring of individuals exposed to high sound pressure in their work environment, owing to the method's greater sensitivity in the early identification of noise-induced hearing loss. The frequencies composing the examination generally lie between 9 kHz and 20 kHz, depending on the equipment. Objective: This study aims to perform a retrospective, secondary systematic review of publications on high-frequency audiometry in the hearing monitoring of individuals exposed to occupational noise. Data Synthesis: This systematic review followed the methodology proposed in the Cochrane Handbook, focusing on the question: "Is high-frequency audiometry more sensitive than conventional audiometry in screening for early hearing loss in individuals exposed to occupational noise?" The search was based on PubMed, Base, Web of Science (Capes), Biblioteca Virtual em Saúde (BVS), and the references cited in the identified and selected articles. The search yielded 6059 articles in total; of these, only six studies met the criteria proposed in this study. Conclusion: The meta-analysis performed does not definitively answer the study's question. It indicates that the 16 kHz frequency in high-frequency audiometry (HFA) is sensitive in the early identification of hearing loss relative to the control group (mean difference, MD = 8.33), as is the 4 kHz frequency in conventional audiometry (CA), albeit somewhat less markedly (MD = 5.72). Thus, further studies are necessary to confirm the importance of HFA in the early screening of hearing loss in individuals exposed to noise at the workplace. PMID:27413413

  10. High-frequency audiometry reveals high prevalence of aminoglycoside ototoxicity in children with cystic fibrosis.

    Science.gov (United States)

    Al-Malky, Ghada; Dawson, Sally J; Sirimanna, Tony; Bagkeris, Emmanouil; Suri, Ranjan

    2015-03-01

    Intravenous aminoglycoside (IV AG) antibiotics, widely used in patients with cystic fibrosis (CF), are known to have ototoxic complications. Despite this, audiological monitoring is not commonly performed and, if performed, uses only standard pure-tone audiometry (PTA). The aim of this study was to investigate ototoxicity in children with CF, to determine the most appropriate audiological tests, and to identify possible risk factors. Auditory assessment was performed in CF children using standard pure-tone audiometry (PTA), extended high-frequency (EHF) audiometry, and distortion-product otoacoustic emissions (DPOAE). 70 CF children, mean (SD) age 10.7 (3.5) years, were recruited. Of the 63 children who received IV AG, 15 (24%) had ototoxicity detected by EHF audiometry and DPOAE; standard PTA detected ototoxicity in only 13 children. Eleven of these children had received at least 10 courses of IV AG. A 25 to 85 dB HL hearing loss (mean ± SD: 57.5 ± 25.7 dB HL) across all EHF frequencies and a significant drop in DPOAE amplitudes at frequencies of 4 to 8 kHz were detected. However, standard PTA detected a significant hearing loss (>20 dB HL) only at 8 kHz in 5 of these 15 children, and none in 2 subjects who had significantly elevated EHF thresholds. The number of courses of IV AG received, age, and lower lung function were shown to be risk factors for ototoxicity. CF children who had received at least 10 courses of IV AG had a higher risk of ototoxicity. EHF audiometry identified 2 more children with ototoxicity than standard PTA and, depending on the facilities available, should be the test of choice for detecting ototoxicity in children with CF receiving IV AG. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  11. High-frequency Audiometry Hearing on Monitoring of Individuals Exposed to Occupational Noise: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Antonioli, Cleonice Aparecida Silva

    2015-12-01

    Introduction: The literature reports on high-frequency audiometry as one of the examinations used in the hearing monitoring of individuals exposed to high sound pressure in their work environment, owing to the method's greater sensitivity in the early identification of noise-induced hearing loss. The frequencies composing the examination generally lie between 9 kHz and 20 kHz, depending on the equipment. Objective: This study aims to perform a retrospective, secondary systematic review of publications on high-frequency audiometry in the hearing monitoring of individuals exposed to occupational noise. Data Synthesis: This systematic review followed the methodology proposed in the Cochrane Handbook, focusing on the question: "Is high-frequency audiometry more sensitive than conventional audiometry in screening for early hearing loss in individuals exposed to occupational noise?" The search was based on PubMed, Base, Web of Science (Capes), Biblioteca Virtual em Saúde (BVS), and the references cited in the identified and selected articles. The search yielded 6059 articles in total; of these, only six studies met the criteria proposed in this study. Conclusion: The meta-analysis performed does not definitively answer the study's question. It indicates that the 16 kHz frequency in high-frequency audiometry (HFA) is sensitive in the early identification of hearing loss relative to the control group (mean difference, MD = 8.33), as is the 4 kHz frequency in conventional audiometry (CA), albeit somewhat less markedly (MD = 5.72). Thus, further studies are necessary to confirm the importance of HFA in the early screening of hearing loss in individuals exposed to noise at the workplace.

  12. High-frequency Audiometry Hearing on Monitoring of Individuals Exposed to Occupational Noise: A Systematic Review.

    Science.gov (United States)

    Antonioli, Cleonice Aparecida Silva; Momensohn-Santos, Teresa Maria; Benaglia, Tatiana Aparecida Silva

    2016-07-01

    The literature reports on high-frequency audiometry as one of the examinations used in the hearing monitoring of individuals exposed to high sound pressure in their work environment, owing to the method's greater sensitivity in the early identification of noise-induced hearing loss. The frequencies composing the examination generally lie between 9 kHz and 20 kHz, depending on the equipment. This study aims to perform a retrospective, secondary systematic review of publications on high-frequency audiometry in the hearing monitoring of individuals exposed to occupational noise. This systematic review followed the methodology proposed in the Cochrane Handbook, focusing on the question: "Is high-frequency audiometry more sensitive than conventional audiometry in screening for early hearing loss in individuals exposed to occupational noise?" The search was based on PubMed, Base, Web of Science (Capes), Biblioteca Virtual em Saúde (BVS), and the references cited in the identified and selected articles. The search yielded 6059 articles in total; of these, only six studies met the criteria proposed in this study. The meta-analysis performed does not definitively answer the study's question. It indicates that the 16 kHz frequency in high-frequency audiometry (HFA) is sensitive in the early identification of hearing loss relative to the control group (mean difference, MD = 8.33), as is the 4 kHz frequency in conventional audiometry (CA), albeit somewhat less markedly (MD = 5.72). Thus, further studies are necessary to confirm the importance of HFA in the early screening of hearing loss in individuals exposed to noise at the workplace.

  13. [Speech perception with hearing aids in comparison to pure-tone hearing loss].

    Science.gov (United States)

    Hoppe, U; Hast, A; Hocke, T

    2014-06-01

    Speech perception is the most important social task of the auditory system; consequently, speech audiometry is essential for evaluating hearing aid benefit. The aim of the study was to describe the correlation between pure-tone hearing loss and speech perception. In particular, the pure-tone audiogram, the speech audiogram, and speech perception with hearing aids were compared. In a retrospective study, 102 hearing aid users with bilateral sensorineural hearing loss were included. Pure-tone average hearing loss (PTA) was correlated with monosyllabic word recognition at 65 dB with hearing aids and with maximum monosyllabic recognition under headphones. Speech perception as a function of hearing loss can be represented by a sigmoid function; however, for higher degrees of hearing loss, substantial deviations are observed. Maximum monosyllabic recognition under headphones is usually not achieved with hearing aids at the standard speech level of 65 dB. For larger groups, average pure-tone hearing loss and speech perception correlate significantly. However, prediction for individual patients is not possible, and in particular for higher degrees of hearing loss substantial deviations were observed. Speech performance with hearing aids cannot be predicted sufficiently from speech audiograms. Above the age of 80, speech perception is significantly worse.

  14. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal becoming corrupted by noise, cross-talk and distortion; long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. The end-to-end performance of a digital link therefore becomes essentially independent of the length and operating frequency bands of the link, so from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
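
    As a concrete instance of waveform coding in the sense described above, the sketch below implements mu-law companding, the compression characteristic used in ITU-T G.711 PCM telephony. It is a minimal illustration of the idea and omits the 8-bit quantization step of the actual standard.

```python
import numpy as np

MU = 255.0  # mu-law parameter used in G.711 (North American/Japanese PCM)

def mulaw_encode(x):
    """Compress a signal in [-1, 1] with the mu-law characteristic."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mulaw_decode(y):
    """Invert the mu-law characteristic."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

x = np.linspace(-1.0, 1.0, 5)
y = mulaw_encode(x)
print(np.allclose(mulaw_decode(y), x))  # True: companding is invertible
# In G.711 the compressed value is additionally quantized to 8 bits;
# companding spends those bits on low amplitudes, where speech energy lies.
```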

  15. Pure tone audiometry and impedance screening of school entrant children by nurses: evaluation in a practical setting.

    Science.gov (United States)

    Holtby, I; Forster, D P; Kumar, U

    1997-01-01

    BACKGROUND: Screening for hearing loss in English children at entry to school (age 5-6 years) is usually by a pure tone audiometry sweep undertaken by school nurses. This study aimed to compare the validity and screening rates of pure tone audiometry with impedance screening in these children. METHODS: Two stage pure tone audiometry and impedance methods of screening were compared in 610 school entry children from 19 infant schools in north east England. Both procedures were completed by school nurses. The results of screening were validated against subsequent clinical assessment, including otological examination and actions taken by an independent assessor. RESULTS: Both methods produced broadly similar validation indices after two stages of screening: sensitivity was 74.4% for both methods; specificity was 92.1% and 90.0%; and the predictive values of a positive test were 43.2% and 37.6%, respectively, for the pure tone audiometry and impedance methods. Single stage screening with both methods produced higher sensitivity but lower specificity and predictive values of a positive test than two stage screening. Screening rates were appreciably higher with the impedance method than with pure tone audiometry. CONCLUSIONS: In choosing the method to be used, it must be borne in mind that the impedance method is technically more efficient but takes longer than pure tone audiometry screening. However, the latter method allows an opportunity for other health inquiries in these children. PMID:9519138

  16. The new age of play audiometry: prospective validation testing of an iPad-based play audiometer.

    Science.gov (United States)

    Yeung, Jeffrey; Javidnia, Hedyeh; Heley, Sophie; Beauregard, Yves; Champagne, Sandra; Bromwich, Matthew

    2013-03-11

    The timely diagnosis of hearing loss in the pediatric population has significant implications for a child's development. However, audiological evaluation in this population poses unique challenges due to difficulties with patient cooperation. Though specialized adaptations exist (such as conditioned play audiometry), these methods can be time consuming and costly. The objective of this study was to validate an iPad-based play audiometer that addresses the shortcomings of existing audiometry. We designed a novel, interactive game for the Apple® iPad® that tests pure tone thresholds. In a prospective, randomized study, the efficacy of this tool was compared to standard play audiometry. 85 consecutive patients presenting to the Audiology Clinic at the Children's Hospital of Eastern Ontario (ages 3 and older) were recruited into this study. Their hearing was evaluated using both tablet and traditional play audiometry. Warble-tone thresholds obtained by both tablet and traditional audiometry. The majority of children in this age group were capable of completing an audiologic assessment using the tablet computer. The data demonstrate no statistically significant difference between warble-tone thresholds obtained by tablet and traditional audiometry (p=0.29). Moreover, the tablet audiometer demonstrates strong sensitivity (93.3%), specificity (94.5%) and negative predictive value (98.1%). The tablet audiometer is a valid and sensitive instrument for screening and assessment of warble-tone thresholds in children.

  17. Extended high frequency audiometry can diagnose sub-clinic involvement in a seemingly normal hearing systemic lupus erythematosus population.

    Science.gov (United States)

    Lasso de la Vega, Mar; Villarreal, Ithzel María; López Moya, Julio; García-Berrocal, José Ramón

    2017-02-01

    Sensorineural hearing loss must be considered within the clinical picture of systemic lupus erythematosus. The results confirm the usefulness of extended high-frequency audiometry in the audiologic testing of these patients, enabling the possibility of modifying or applying a preventive treatment for possible hearing loss. Hearing involvement is usually under-diagnosed with routine auditory examination; this study therefore proposes the use of extended high-frequency audiometry to detect possible asymptomatic hypoacusis in the early stages of the disease. The aim of this study is to analyze hearing levels at extended high frequencies in these patients and to correlate the hearing loss with the severity of the disease and with immunological parameters. A descriptive cross-sectional study was performed. Fifty-five patients with systemic lupus erythematosus were included in the study. The control group consisted of 71 patients paired by age and sex with the study population. Both pure tone audiometry and extended high-frequency audiometry (8-18 kHz) were performed. In total, 70% were diagnosed with sensorineural hearing loss by extended high-frequency audiometry, exceeding the results obtained with pure tone audiometry (30.9%). Statistically significant correlations were found within the patients regarding sensorineural hearing loss related to age, disease activity and cryoglobulinemia.

  18. Cisplatin-based chemotherapy: Add high-frequency audiometry in the regimen.

    Science.gov (United States)

    Arora, R; Thakur, J S; Azad, R K; Mohindroo, N K; Sharma, D R; Seam, R K

    2009-01-01

    Cisplatin-induced ototoxicity shows high interindividual variability and is often accompanied by transient or permanent tinnitus. It is not possible to identify susceptible individuals before commencement of treatment. We conducted a prospective, randomized, observational study in a tertiary care centre and evaluated the effects of different doses of cisplatin on hearing. Fifty-seven patients scheduled for cisplatin-based chemotherapy were included in the study. All patients were divided into three groups depending on the dose of cisplatin infused over 3 weeks. Subjective hearing loss was found in seven patients, while six patients had tinnitus during chemotherapy. The hearing loss was sensorineural, dose dependent, symmetrical, bilateral and irreversible, and the higher frequencies were the first to be affected. As the use of high-frequency audiometry is still largely limited to research, a strict protocol for adding high-frequency audiometry to the cisplatin-based chemotherapy regimen is needed.

  19. High-frequency audiometry in normal hearing military firemen exposed to noise.

    Science.gov (United States)

    Rocha, Rita Leniza Oliveira da; Atherino, Ciríaco Cristóvão Tavares; Frota, Silvana Maria Monte Coelho

    2010-01-01

    The study of high frequencies has proven importance for detecting inner ear damage; in some cases, conventional frequencies are not sensitive enough to pick up early changes to the inner ear. The aim was to analyze the high-frequency threshold results of individuals exposed to noise who have normal conventional audiometry. This was a retrospective cross-sectional cohort study of 47 firefighters of the Fire Department of Rio de Janeiro, based at Santos Dumont airport, and 33 military men without noise exposure. They were divided into two age groups: 30-39 years and 40-49 years. The high frequencies were tested immediately after conventional audiometry. The results were most significant in the 40-49 years age range, where the exposed group showed significantly higher threshold values than the control group at 14,000 Hz (p = 0.008) and 16,000 Hz (p = 0.0001). We concluded that noise affected the high-frequency thresholds: all the mean values found in the exposed group were higher than those in the control group. These data reinforce the importance of testing high frequencies for the early detection of noise-induced hearing loss, even when conventional audiometry is normal.

  20. Neural entrainment to speech modulates speech intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Başkent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  1. Development of Mandarin monosyllabic speech test materials in China.

    Science.gov (United States)

    Han, Demin; Wang, Shuo; Zhang, Hua; Chen, Jing; Jiang, Wenbo; Mannell, Robert; Newall, Philip; Zhang, Luo

    2009-05-01

    In this study, monosyllabic Mandarin speech test materials (MSTMs) were developed for use in word recognition tests for speech audiometry in Chinese audiology clinics. Mandarin monosyllabic materials with high familiarity were designed with regard to phonological balance and recorded digitally with a male voice. Inter-list equivalence of difficulty was evaluated for a group of 60 subjects (aged 18-25 years) with normal hearing. Seven lists with 50 words each were found to be equivalent. These seven equivalent lists were used to measure performance-intensity (PI) functions for a group of 32 subjects with normal hearing and a group of 40 subjects with mild to moderate sensorineural hearing loss. The mean slope of PI function was found to be 4.1%/dB and 2.7%/dB, respectively. The seven lists of Mandarin monosyllabic materials were found to have sufficient reliability and validity to be used in clinical situations.

  2. Hearing Tests Based on Biologically Calibrated Mobile Devices: Comparison With Pure-Tone Audiometry.

    Science.gov (United States)

    Masalski, Marcin; Grysiński, Tomasz; Kręcicki, Tomasz

    2018-01-10

    Hearing screening tests based on pure-tone audiometry may be conducted on mobile devices, provided that the devices are specially calibrated for the purpose. Calibration consists of determining the reference sound level and can be performed in relation to the hearing threshold of normal-hearing persons. In the case of devices provided by the manufacturer together with bundled headphones, the reference sound level can be calculated once for all devices of the same model. This study aimed to compare the hearing threshold measured by a mobile device that was calibrated using a model-specific, biologically determined reference sound level with the hearing threshold obtained in pure-tone audiometry. Trial participants were recruited offline using face-to-face prompting from among Otolaryngology Clinic patients who owned Android-based mobile devices with bundled headphones. The hearing threshold was obtained on a mobile device by means of an open access app, Hearing Test, with incorporated model-specific reference sound levels. These reference sound levels were previously determined in uncontrolled conditions in relation to the hearing threshold of normal-hearing persons. An audiologist-assisted self-measurement was conducted by the participants in a sound booth; it involved determining the lowest audible sound generated by the device within the frequency range of 250 Hz to 8 kHz. The results were compared with pure-tone audiometry. A total of 70 subjects, 34 men and 36 women, aged 18-71 years (mean 36, standard deviation [SD] 11), participated in the trial. The hearing threshold obtained on mobile devices was significantly different from the one determined by pure-tone audiometry, with a mean difference of 2.6 dB (95% CI 2.0-3.1) and an SD of 8.3 dB (95% CI 7.9-8.7). The number of differences not greater than 10 dB reached 89% (95% CI 88-91), whereas the mean absolute difference was 6.5 dB (95% CI 6.2-6.9). Sensitivity and specificity for a mobile
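
    The biological calibration described above can be sketched in a few lines: the model-specific reference sound level is derived from the raw device levels at which normal-hearing listeners just detect each tone, and later measurements are expressed relative to that reference. All names and data values below are hypothetical; this is not the Hearing Test app's actual code.

```python
import numpy as np

# Hypothetical raw device output levels (dB re full scale) at which
# normal-hearing listeners just detected the tone, per frequency (Hz)
normal_hearing_raw = {
    1000: [-82, -85, -80, -84, -83],
    4000: [-78, -80, -76, -79, -81],
}

# Biological reference level: central tendency of the normal-hearing group
reference = {f: float(np.median(v)) for f, v in normal_hearing_raw.items()}

def to_db_hl(freq_hz, raw_level_db):
    """Express a measured raw device level relative to the biological
    reference, approximating dB HL for this device model."""
    return raw_level_db - reference[freq_hz]

print(to_db_hl(1000, -58.0))  # -> 25.0, i.e. ~25 dB HL re: the reference
```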

  3. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50-74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  4. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  5. Active noise reduction audiometry: a prospective analysis of a new approach to noise management in audiometric testing.

    Science.gov (United States)

    Bromwich, Matthew A; Parsa, Vijay; Lanthier, Nicole; Yoo, John; Parnes, Lorne S

    2008-01-01

    To develop a new method of screening audiometry that reduces the adverse effects of low-frequency background noise by using active noise reduction (ANR) headphone technology. Prospective testing within an anechoic chamber evaluated the physical properties of ANR headphones, and a prospective clinical crossover study compared standard audiometry with ANR headphone audiometry. Bose Aviation X circumaural ANR headphones were tested for both active and passive attenuation properties in a hemi-anechoic chamber using a head and torso simulator. Thirty-seven otology clinic patients then underwent standard audiometry and ANR audiometry, which was performed in a 30- and/or 40-dB sound field. Objective ANR headphone attenuation of up to 12 dB was achieved at frequencies below 2,000 Hz. In standard audiometric testing, 40 dB of narrow-band background noise shifted patient pure-tone thresholds by 24 dB at 250 Hz. The use of ANR technology provided 12 dB of additional attenuation, which yielded a significant improvement in test results despite the 40 dB of background noise; with ANR, results were identical to those obtained in a quiet sound booth. Despite a 30-dB sound field, ANR audiometry can produce an audiogram identical to that obtained in a double-walled sound booth. ANR headphone audiometry improves the sensitivity of audiometric screening for mild low-frequency hearing loss. This technology may have important applications for screening in schools, industry, and community practices.

  6. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the "Eurabia" conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory "the Crusade" in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance. The aim of the article is to contribute to a more thorough understanding of hate speech's nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience. The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  7. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red

  8. Automated screening audiometry in the digital age: exploring uhear™ and its use in a resource-stricken developing country.

    Science.gov (United States)

    Khoza-Shangase, Katijah; Kassner, Lisa

    2013-01-01

    The current study aimed to determine the accuracy of UHear™, an audiometer downloadable onto an iPod Touch©, when compared with conventional audiometry. Participants were primary school pupils. A total of eighty-six participants (172 ears) were included; of these, forty-four were female and forty-two were male, with ages ranging from 8 to 10 years (mean age, 9.0 years). Each participant underwent two audiological screening evaluations, one by means of conventional audiometry and the other by means of UHear™. Otoscopy and tympanometry were performed on each participant to determine the status of the outer and middle ear before each participant underwent pure-tone air-conduction screening by means of a conventional audiometer and UHear™. The lowest audible hearing thresholds from each participant were obtained at conventional frequencies. Using the paired t-test, it was determined that there was a statistically significant difference between the hearing screening thresholds obtained from conventional audiometry and UHear™: the screening thresholds obtained from UHear™ were significantly elevated (worse) in comparison to conventional audiometry. The difference in thresholds may be attributed to differences in the transducers used, ambient noise levels, and lack of calibration of UHear™. UHear™ is not as accurate as conventional audiometry in determining hearing thresholds during screening of school-aged children. Caution needs to be exercised when using such measures, and research evidence needs to be established before they can be endorsed and used with the general public.
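
    The paired comparison reported above can be reproduced in outline with a few lines of code. The sketch below runs a paired t-test on hypothetical per-ear screening thresholds; the data are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical paired screening thresholds (dB HL) for the same ears
conventional = np.array([10, 15, 10, 20, 15, 10, 25, 15], dtype=float)
app_based = np.array([20, 25, 15, 30, 25, 20, 30, 25], dtype=float)

t, p = stats.ttest_rel(app_based, conventional)
mean_diff = (app_based - conventional).mean()
print(f"mean elevation = {mean_diff:.1f} dB, t = {t:.2f}, p = {p:.4f}")
# A significant positive mean difference mirrors the finding that the
# app-based thresholds were elevated relative to conventional audiometry.
```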

  9. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering and diagnostics in order to quantify very different phenomena, such as the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these.

  10. Speech dynamics

    NARCIS (Netherlands)

    Pols, L.C.W.

    2011-01-01

    In order for speech to be informative and communicative, segmental and suprasegmental variation is mandatory. Only this leads to meaningful words and sentences. The building blocks are no stable entities put next to each other (like beads on a string or like printed text), but there are gradual

  11. Speech Enhancement

    DEFF Research Database (Denmark)

    Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    and their performance bounded and assessed in terms of noise reduction and speech distortion. The book shows how various filter designs can be obtained in this framework, including the maximum SNR, Wiener, LCMV, and MVDR filters, and how these can be applied in various contexts, like in single-channel and multichannel...

  12. Diagnostic pure-tone audiometry in schools: mobile testing without a sound-treated environment.

    Science.gov (United States)

    Swanepoel, De Wet; Maclennan-Smith, Felicity; Hall, James W

    2013-01-01

    To validate diagnostic pure-tone audiometry in schools without a sound-treated environment using an audiometer that incorporates insert earphones covered by circumaural earcups and real-time environmental noise monitoring. A within-subject repeated measures design was employed to compare air (250 to 8000 Hz) and bone (250 to 4000 Hz) conduction pure-tone thresholds measured in natural school environments with thresholds measured in a sound-treated booth. 149 children (54% female) with an average age of 6.9 yr (SD = 0.6; range = 5-8). Average difference between the booth and natural environment thresholds was 0.0 dB (SD = 3.6) for air conduction and 0.1 dB (SD = 3.1) for bone conduction. Average absolute difference between the booth and natural environment was 2.1 dB (SD = 2.9) for air conduction and 1.6 dB (SD = 2.7) for bone conduction. Almost all air- (96%) and bone-conduction (97%) threshold comparisons between the natural and booth test environments were within 0 to 5 dB. No statistically significant differences between thresholds recorded in the natural and booth environments for air- and bone-conduction audiometry were found (p > 0.01). Diagnostic air- and bone-conduction audiometry in schools, without a sound-treated room, is possible with sufficient earphone attenuation and real-time monitoring of environmental noise. Audiological diagnosis on-site for school screening may address concerns of false-positive referrals and poor follow-up compliance and allow for direct referral to audiological and/or medical intervention. American Academy of Audiology.
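
    Real-time environmental noise monitoring of the kind described above amounts to comparing measured octave-band ambient levels against maximum permissible ambient noise levels (MPANLs) for the transducer in use. The sketch below shows only the comparison logic; the limit values are placeholders, not those of ANSI S3.1 or of the audiometer used in the study.

```python
# Octave-band maximum permissible ambient noise levels (dB SPL).
# Placeholder values only -- a real implementation would take the limits
# for the earphone in use, raised by the attenuation of the earcups.
MPANL = {250: 40, 500: 40, 1000: 40, 2000: 45, 4000: 50, 8000: 55}

def ambient_violations(measured_spl_by_band):
    """Return the bands whose ambient level exceeds its limit (empty = OK)."""
    return {f: spl for f, spl in measured_spl_by_band.items()
            if spl > MPANL.get(f, float("inf"))}

ambient = {250: 38, 500: 42, 1000: 35, 2000: 33, 4000: 30, 8000: 29}
violations = ambient_violations(ambient)
print(violations or "ambient noise acceptable; thresholds remain valid")
# -> {500: 42}: testing at 500 Hz should pause until the room quiets down
```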

  13. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    Science.gov (United States)

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

    Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on the group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB, with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, by frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to the decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.
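
    If the error components reported at 250 Hz are treated as independent, their standard deviations combine in quadrature. The sketch below makes that independence assumption (ours, not the authors'), so it is not expected to match the study's pooled overall SD exactly:

        import math

        # Reported SDs at 250 Hz (dB): threshold-identification procedure,
        # calibration, and frequency-nonlinearity errors.
        components_db = [6.64, 6.19, 7.28]

        total_sd = math.sqrt(sum(sd ** 2 for sd in components_db))
        print(f"combined SD = {total_sd:.2f} dB")  # about 11.6 dB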

  14. [Subjective intraoperative hearing self-assessment in patients after stapedotomy comparing to postoperative pure-tone audiometry].

    Science.gov (United States)

    Jankowski, Andrzej; Durko, Tomasz; Pajor, Anna; Durko, Marcin

    2010-01-01

    In otosclerosis patients, the most common procedure followed at the Otosurgical Department, Medical University of Lodz, is stapedotomy with insertion of a Teflon piston prosthesis. When surgery is finished, a whispered speech hearing test is done from a distance of 1 meter for a brief intraoperative assessment of hearing improvement. There are a number of patients who report subjective intraoperative hearing improvement that is not confirmed by postoperative pure-tone audiometry (2nd-3rd post-op day). The aim of the study was the analysis of factors influencing stapedotomy (Teflon piston procedure) patients in whom intraoperative hearing improvement was not confirmed by postoperative pure-tone audiometry. Retrospective analysis of postoperative hearing results in patients who underwent stapedotomy (Teflon piston operation) at the Otosurgical Department, Medical University of Lodz, from 2005 to 2009. A total of 142 stapedotomies were analyzed. In 27 ears (19.1%) no hearing improvement was reported. Among them, 18 reported intraoperative hearing improvement not confirmed on postoperative pure-tone audiometry, and in 9 cases no intraoperative hearing improvement was reported. Patients were divided into Group A (hearing improvement 1-2 months post stapedotomy), 12 cases (44.4%), with hearing improvement confirmed by pure-tone audiometry, and Group B, 5 cases (55.6%), in which no sign of hearing improvement was seen on pure-tone audiometry. In patients who intraoperatively reported hearing improvement not supported by pure-tone audiometry, the following factors seem to play a vital role: (a) strong suggestion and willingness of improvement after surgical treatment, (b) the specific conditions of the whispered speech test in the operating room environment, (c) the patient's stress during surgery and strong fear of possible revision surgery.

  15. Hearing Handicap and Speech Recognition Correlate With Self-Reported Listening Effort and Fatigue.

    Science.gov (United States)

    Alhanbali, Sara; Dawes, Piers; Lloyd, Simon; Munro, Kevin J

    2017-10-31

    To investigate the correlations between hearing handicap, speech recognition, listening effort, and fatigue. Eighty-four adults with hearing loss (65 to 85 years) completed three self-report questionnaires: the Fatigue Assessment Scale, the Effort Assessment Scale, and the Hearing Handicap Inventory for the Elderly. Audiometric assessment included pure-tone audiometry and speech recognition in noise. There was a significant positive correlation between handicap and fatigue (r = 0.39) and between handicap and effort (r = 0.73). Hearing handicap and speech recognition both correlate with self-reported listening effort and fatigue, which is consistent with a model of listening effort and fatigue in which perceived difficulty is related to sustained effort and fatigue for unrewarding tasks over which the listener has low control. A clinical implication is that encouraging clients to recognize and focus on the pleasure and positive experiences of listening may result in greater satisfaction and benefit from hearing aid use.

  16. Follow-up audiometry after bilateral myringotomy and tympanostomy tube insertion.

    Science.gov (United States)

    Hu, Shirley; Patel, Neha A; Shinhar, Shai

    2015-12-01

    There are no evidence-based guidelines regarding the timing of postoperative audiometric follow-up for children undergoing tympanostomy tube insertion. Given the variability of follow-up among physicians, we attempt to guide the timing of postoperative audiograms using objective data. Retrospective chart review. All pediatric patients undergoing primary bilateral myringotomy and tympanostomy tube insertion for otitis media with effusion who had audiometric data available at two follow-up times were identified from 2014. Patients were classified according to the type of audiometry performed and were further categorized into those who had tympanostomy tube insertion only and those who had concurrent adenotonsillectomies. Thirty-four patients were included in the study. Among patients assessed by sound field audiometry, the mean sound field threshold was 29.2 dB preoperatively and improved to 21 dB 2 weeks postoperatively and 17.9 dB 6 to 10 weeks postoperatively; the difference between the two postoperative means was significant. Among patients assessed by conventional audiometry, the mean preoperative air-bone gap was 20.1 dB; this improved to 10 dB at the first postoperative visit and 7.3 dB at the second visit, and the difference between the two means was significant. Early postoperative audiometry therefore appears to underestimate the degree of hearing improvement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
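
    The air-bone gap referred to above is simply the difference between air- and bone-conduction thresholds, usually averaged across the test frequencies; a minimal sketch with hypothetical values:

        def mean_air_bone_gap(air_db, bone_db):
            # Matched lists of air- and bone-conduction thresholds (dB)
            # at the same frequencies; returns the mean per-frequency gap.
            gaps = [a - b for a, b in zip(air_db, bone_db)]
            return sum(gaps) / len(gaps)

        # Hypothetical thresholds at 500, 1000, 2000, and 4000 Hz:
        print(mean_air_bone_gap([35, 40, 30, 25], [15, 20, 15, 10]))  # 17.5 dB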

  17. Comparative study between pure tone audiometry and auditory steady-state responses in normal hearing subjects.

    Science.gov (United States)

    Beck, Roberto Miquelino de Oliveira; Ramos, Bernardo Faria; Grasel, Signe Schuster; Ramos, Henrique Faria; Moraes, Maria Flávia Bonadia B de; Almeida, Edigar Rezende de; Bento, Ricardo Ferreira

    2014-01-01

    Auditory steady-state responses (ASSR) are an important tool for the objective detection of frequency-specific hearing thresholds. Pure-tone audiometry is the gold standard for hearing evaluation, although it may sometimes be inconclusive, especially in children and uncooperative adults. To compare pure-tone thresholds (PT) with ASSR thresholds in normal-hearing subjects. In this prospective cross-sectional study, we included 26 adults (n = 52 ears) of both genders, without any hearing complaints or otologic diseases, and with normal pure-tone thresholds. All subjects underwent clinical history, otomicroscopy, audiometry, and immittance measurements. This evaluation was followed by the ASSR test. The mean pure-tone and ASSR thresholds for each frequency were calculated. The mean difference between PT and ASSR thresholds was 7.12 dB at 500 Hz, 7.6 dB at 1000 Hz, 8.27 dB at 2000 Hz, and 9.71 dB at 4000 Hz. There was no difference between PT and ASSR means at any frequency. ASSR thresholds were comparable to pure-tone thresholds in normal-hearing adults. Nevertheless, ASSR should not be used as the only method of hearing evaluation.

  18. Pure-tone audiometry outside a sound booth using earphone attenuation, integrated noise monitoring, and automation.

    Science.gov (United States)

    Swanepoel, De Wet; Matthysen, Cornelia; Eikelboom, Robert H; Clark, Jackie L; Hall, James W

    2015-01-01

    Accessibility of audiometry is hindered by the cost of sound booths and a shortage of hearing health personnel. This study investigated the validity of an automated mobile diagnostic audiometer with increased attenuation and real-time noise monitoring for clinical testing outside a sound booth. Attenuation characteristics and reference ambient noise levels for the computer-based audiometer (KUDUwave) were evaluated, alongside the validity of environmental noise monitoring. Clinical validity was determined by comparing air- and bone-conduction thresholds obtained inside and outside the sound booth (23 subjects). Twenty-three normal-hearing subjects participated (age range, 20-75 years; average age, 35.5), with a subgroup of 11 subjects retested to establish test-retest reliability. Improved passive attenuation and valid environmental noise monitoring were demonstrated. Clinically, air-conduction thresholds inside and outside the sound booth corresponded within 5 dB or less in more than 90% of instances (mean absolute difference 3.3 ± 3.2 SD). Bone-conduction thresholds corresponded within 5 dB or less in 80% of comparisons between test environments, with a mean absolute difference of 4.6 dB (3.7 SD). Threshold differences were not statistically significant. Mean absolute test-retest differences outside the sound booth were similar to those in the booth. Diagnostic pure-tone audiometry outside a sound booth, using automated testing, improved passive attenuation, and real-time environmental noise monitoring, demonstrated reliable hearing assessments.

  19. Speech comprehension difficulties in chronic tinnitus and its relation to hyperacusis

    Directory of Open Access Journals (Sweden)

    Veronika Vielsmeier

    2016-12-01

    Objective: Many tinnitus patients complain about difficulties with speech comprehension. In spite of the high clinical relevance, little is known about the underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, (2) compare subjective reports of speech comprehension difficulties with objective measurements in a standardized speech comprehension test, and (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure-tone audiogram), as well as co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessment (pure-tone audiometry, tinnitus pitch and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments ("How would you rate your ability to understand speech?"; "How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?"). Results: Subjectively reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments ("cocktail party" situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both in general and in noisy environments) were correlated with hearing level and with audiologically assessed speech comprehension ability. In contrast, co-morbid hyperacusis was only correlated

  20. Speech impairment (adult)

    Science.gov (United States)

    Language impairment; Impairment of speech; Inability to speak; Aphasia; Dysarthria; Slurred speech; Dysphonia voice disorders ... but anyone can develop a speech and language impairment suddenly, usually in a trauma. APHASIA Alzheimer disease ...

  1. Speech and Swallowing

    Science.gov (United States)

    Speech and Swallowing Problems. People with Parkinson's may notice ... How do I know if I have a speech or voice problem? My voice makes it difficult ...

  2. Speech and Language Impairments

    Science.gov (United States)

    Speech and Language Impairments. Jun 16, 2010. A legacy disability fact ... Development of Speech and Language Skills in Childhood: Speech and language skills develop ...

  3. 78 FR 49717 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ... COMMISSION 47 CFR Part 64 Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With ...

  4. Measuring Sound-Processor Threshold Levels for Pediatric Cochlear Implant Recipients Using Conditioned Play Audiometry via Telepractice

    Science.gov (United States)

    Goehring, Jenny L.; Hughes, Michelle L.

    2017-01-01

    Purpose: This study evaluated the use of telepractice for measuring cochlear implant (CI) behavioral threshold (T) levels in children using conditioned play audiometry (CPA). The goals were to determine whether (a) T levels measured via telepractice were not significantly different from those obtained in person, (b) response probability differed…

  5. National Survey of State Identification Audiometry Programs and Special Educational Services for Hearing Impaired Children and Youth United States: 1972.

    Science.gov (United States)

    Gallaudet Coll., Washington, DC. Office of Demographic Studies.

    Reported were descriptive data concerning identification audiometry (hearing screening) and special educational programs for the hearing impaired. Data were provided in tabular format for each state in the country and the District of Columbia. Hearing screening program data included extent of coverage, grade or ages covered annually, year and…

  6. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    This paper presents a method of speech recognition using pattern recognition techniques. Learning consists in determining the unique characteristics of a word (cepstral coefficients) by eliminating those characteristics that differ from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
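
    The cepstral step outlined above (window, Fourier transform, log magnitude, inverse transform) can be sketched in a few lines of Python; the function name, frame length, and coefficient count are illustrative choices, not the paper's implementation:

        import numpy as np

        def cepstral_coefficients(frame, num_coeffs=13):
            # One analysis frame -> real cepstrum: Hamming window, FFT,
            # log magnitude, inverse FFT, keep the low-quefrency coefficients.
            windowed = frame * np.hamming(len(frame))
            magnitude = np.abs(np.fft.rfft(windowed))
            log_magnitude = np.log(magnitude + 1e-10)  # avoid log(0)
            cepstrum = np.fft.irfft(log_magnitude)
            return cepstrum[:num_coeffs]

        # e.g. a 25 ms frame of 16 kHz audio (random noise stands in here):
        print(cepstral_coefficients(np.random.randn(400)))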

  7. Relationship between high-resolution computed tomography densitometry and audiometry in otosclerosis.

    Science.gov (United States)

    Zhu, Mei-mei; Sha, Yan; Zhuang, Pei-yun; Olszewski, Aleksandra E; Jiang, Jia-qi; Xu, Jiang-hong; Xu, Chen-mei; Chen, Bing

    2010-12-01

    The aim of this study is to evaluate the usefulness of high-resolution computed tomography (HRCT) densitometry in the diagnosis of otosclerosis and to investigate the relationship between CT densitometry and audiometry. HRCT findings and audiometry were compared among 34 patients (34 ears, the otosclerosis group) with surgically confirmed otosclerosis between January 2007 and December 2007 and 33 patients (33 opposite normal ears, the control group) with facial paralysis diagnosed during the same period. Seven regions of interest (ROI) were set manually around the otic capsule on the axial slice of the 0.75-mm-thick CT image. The mean CT values of these seven regions were measured. In each ROI, the mean CT value of the otosclerosis group and that of the control group were compared. Based on the CT findings, the ears with otosclerosis were classified into two groups: Group A showed no pathological CT findings; Group B showed low density around the cochlea. In the otosclerosis group, the relationship between the CT findings and the results of audiometry was analyzed. The mean CT values in the areas posterior and anterior to the oval window were significantly lower for the otosclerosis group compared with the control group (the former t=-2.030, p=0.046; the latter Z=-4.979, p<0.01). Group A consisted of 30 patients, 7 of whom (23.33%) exhibited conductive hearing loss and 23 of whom (76.67%) exhibited mixed hearing loss; Group B had 4 patients, all with mixed hearing loss. For the otosclerosis group, the mean CT value in the area posterior to the oval window was positively correlated with the mean air-conduction threshold (r=0.4273, p=0.0117) and with the mean air-bone gap (r=0.3995, p=0.0192). Quantitative evaluation of CT with slices less than 1 mm in thickness may provide important information for the diagnosis and assessment of otosclerosis that is unattainable through other methods. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  8. Accuracy of Mobile-Based Audiometry in the Evaluation of Hearing Loss in Quiet and Noisy Environments.

    Science.gov (United States)

    Saliba, Joe; Al-Reefi, Mahmoud; Carriere, Junie S; Verma, Neil; Provencal, Christiane; Rappaport, Jamie M

    2017-04-01

    Objectives (1) To compare the accuracy of 2 previously validated mobile-based hearing tests in determining pure tone thresholds and screening for hearing loss. (2) To determine the accuracy of mobile audiometry in noisy environments through noise reduction strategies. Study Design Prospective clinical study. Setting Tertiary hospital. Subjects and Methods Thirty-three adults with or without hearing loss were tested (mean age, 49.7 years; women, 42.4%). Air conduction thresholds measured as pure tone average and at individual frequencies were assessed by conventional audiogram and by 2 audiometric applications (consumer and professional) on a tablet device. Mobile audiometry was performed in a quiet sound booth and in a noisy sound booth (50 dB of background noise) through active and passive noise reduction strategies. Results On average, 91.1% (95% confidence interval [95% CI], 89.1%-93.2%) and 95.8% (95% CI, 93.5%-97.1%) of the threshold values obtained in a quiet sound booth with the consumer and professional applications, respectively, were within 10 dB of the corresponding audiogram thresholds, as compared with 86.5% (95% CI, 82.6%-88.5%) and 91.3% (95% CI, 88.5%-92.8%) in a noisy sound booth through noise cancellation. When screening for at least moderate hearing loss (pure tone average >40 dB HL), the consumer application showed a sensitivity and specificity of 87.5% and 95.9%, respectively, and the professional application, 100% and 95.9%. Overall, patients preferred mobile audiometry over conventional audiograms. Conclusion Mobile audiometry can correctly estimate pure tone thresholds and screen for moderate hearing loss. Noise reduction strategies in mobile audiometry provide a portable effective solution for hearing assessments outside clinical settings.
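
    The sensitivity and specificity quoted above follow from a standard 2x2 screening table; in this sketch the cell counts are hypothetical, chosen only so that the output reproduces the consumer application's reported 87.5% and 95.9%:

        def screening_metrics(tp, fn, tn, fp):
            sensitivity = tp / (tp + fn)  # hearing-loss ears correctly flagged
            specificity = tn / (tn + fp)  # normal ears correctly passed
            return sensitivity, specificity

        print(screening_metrics(tp=7, fn=1, tn=47, fp=2))  # (0.875, 0.959...)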

  9. Tablet Audiometry in Canada's North: A Portable and Efficient Method for Hearing Screening.

    Science.gov (United States)

    Rourke, Ryan; Kong, David Chan Chun; Bromwich, Matthew

    2016-09-01

    Access to hearing health care is limited in many parts of the world, creating a lack of prompt diagnosis, which further complicates treatment. The use of portable audiometry for hearing loss testing can improve access to diagnostics in marginalized populations. Our study objectives were twofold: (1) to determine the prevalence of hearing loss in children aged 4 to 11 years in Iqaluit, Nunavut, and (2) to test and demonstrate the use of our tablet audiometer as a portable hearing-testing device in a remote location. Prospective, cross-sectional, observational study. Remote elementary schools in 3 Canadian Northern communities. Tablet audiometers were used to test hearing in 218 children. Air-conduction pure-tone thresholds were obtained at 500, 1000, 2000, and 4000 Hz. Children with hearing loss ≥30 dB in either ear were referred for audiology services. Tablet audiometry screening revealed abnormal results in 14.8% of the study participants. No significant difference in the rate of hearing loss was seen by sex; however, the rate of hearing loss decreased significantly with increasing age. The median duration of the hearing test was 5 minutes 30 seconds. Of the study population, 14.8% tested positive for hearing loss based on our interactive tablet audiometer. In this setting, the tablet audiometer was both time efficient and largely language independent. This type of testing is valuable for providing much-needed hearing health care for high-risk populations in rural and remote areas where audiology services are often unavailable. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  10. Can routine office-based audiometry predict cochlear implant evaluation results?

    Science.gov (United States)

    Gubbels, Samuel P; Gartrell, Brian C; Ploch, Jennifer L; Hanson, Kevin D

    2017-01-01

    Determining cochlear implant candidacy requires a specific sentence-level testing paradigm in best-aided conditions. Our objective was to determine whether findings on routine audiometry could predict the results of a formal cochlear implant candidacy evaluation. We hypothesized that findings on routine audiometry would accurately predict cochlear implant evaluation results in the majority of candidates. Retrospective, observational, diagnostic study. The charts of all adult patients who were evaluated for implant candidacy at a tertiary care center from June 2008 through June 2013 were included. Routine, unaided audiologic measures (pure-tone hearing thresholds and recorded monosyllabic word recognition testing) were then correlated with best-aided sentence-level discrimination testing (using either the Hearing in Noise Test or the AzBio sentence test). The degree of hearing loss at 250 to 4,000 Hz and monosyllabic word recognition scores significantly correlated with sentence-level word discrimination test results. Extrapolating from this association, we found that 86% of patients with monosyllabic word recognition scores at or below 32% (or 44% for patients with private insurance) would meet candidacy requirements for cochlear implantation. Routine audiometric findings can be used to identify patients who are likely to meet cochlear implant candidacy criteria upon formal testing. For example, patients with pure-tone thresholds (250, 500, 1,000 Hz) of ≥75 dB and/or a monosyllabic word recognition score of ≤40% have a high likelihood of meeting candidacy criteria. Utilization of these predictive patterns during routine audiometric evaluation may assist hearing health professionals in deciding when to refer patients for a formal cochlear implant evaluation. Level of evidence: 4. Laryngoscope, 127:216-222, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
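
    The example pattern given in the abstract can be written as a simple screening predicate; this is an illustrative sketch of those cutoffs, not a clinical decision rule:

        def likely_ci_candidate(pta_low_db, word_recognition_pct):
            # pta_low_db: mean pure-tone threshold at 250/500/1000 Hz (dB HL);
            # word_recognition_pct: monosyllabic word recognition score (0-100).
            # Cutoffs taken from the abstract's example pattern.
            return pta_low_db >= 75 or word_recognition_pct <= 40

        print(likely_ci_candidate(80, 55))  # True: thresholds meet the pattern
        print(likely_ci_candidate(60, 38))  # True: word score meets the pattern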

  11. Factors affecting reliability and validity of self-directed automatic in situ audiometry: implications for self-fitting hearing AIDS.

    Science.gov (United States)

    Convery, Elizabeth; Keidser, Gitte; Seeto, Mark; Yeend, Ingrid; Freeston, Katrina

    2015-01-01

    A reliable and valid method for the automatic in situ measurement of hearing thresholds is a prerequisite for the feasibility of a self-fitting hearing aid, whether such a device becomes an automated component of an audiological management program or is fitted by the user independently of a clinician. Issues that must be addressed before implementation of the procedure into a self-fitting hearing aid include the role of real-ear-to-dial difference correction factors in ensuring accurate results and the ability of potential users to successfully self-direct the procedure. The purpose of this study was to evaluate the reliability and validity of an automatic audiometry algorithm that is fully implemented in a wearable hearing aid, to determine to what extent reliability and validity are affected when the procedure is self-directed by the user, and to investigate contributors to a successful outcome. Design was a two-phase correlational study. A total of 60 adults with mild to moderately severe hearing loss participated in both studies: 20 in Study 1 and 40 in Study 2. Twenty-seven participants in Study 2 attended with a partner. Participants in both phases were selected for inclusion if their thresholds were within the output limitations of the test device. In both phases, participants performed automatic audiometry through a receiver-in-canal, behind-the-ear hearing aid coupled to an open dome. In Study 1, the experimenter directed the task. In Study 2, participants followed a set of written, illustrated instructions to perform automatic audiometry independently of the experimenter, with optional assistance from a lay partner. Standardized measures of hearing aid self-efficacy, locus of control, cognitive function, health literacy, and manual dexterity were administered. Statistical analysis examined the repeatability of automatic audiometry; the match between automatically and manually measured thresholds; and contributors to successful, independent completion of

  12. Speech-language pathology findings in patients with mouth breathing: multidisciplinary diagnosis according to etiology.

    Science.gov (United States)

    Junqueira, Patrícia; Marchesan, Irene Queiroz; de Oliveira, Luciana Regina; Ciccone, Emílio; Haddad, Leonardo; Rizzo, Maria Cândida

    2010-11-01

    The purpose of this study was to identify and compare the findings of speech-language pathology evaluations of orofacial function, including tongue and lip rest postures, tonus, articulation and speech, voice and language, chewing, and deglutition, in children with a history of mouth breathing. The diagnoses underlying mouth breathing included allergic rhinitis, adenoidal hypertrophy, allergic rhinitis with adenoidal hypertrophy, and/or functional mouth breathing. This study was conducted on 414 subjects of both genders, from 2 to 16 years old. A team consisting of 3 speech-language pathologists, 1 pediatrician, 1 allergist, and 1 otolaryngologist evaluated the patients. Multidisciplinary clinical examinations were carried out (complete blood count, X-rays, nasofibroscopy, audiometry). The two most commonly found etiologies were allergic rhinitis, followed by functional mouth breathing. Of the 414 patients in the study, 346 received a speech-language pathology evaluation. The most prevalent finding in this group of 346 subjects was the presence of orofacial myofunctional disorders. The most frequently identified orofacial myofunctional disorders in these subjects, who also presented mouth breathing, included habitual open-lips rest posture, low and forward tongue rest posture, and lack of adequate muscle tone. No statistically significant relationship was identified between etiology and speech-language diagnosis. Therefore, the specific etiology of mouth breathing does not appear to contribute to the presence, type, or number of speech-language findings that may result from mouth-breathing behavior.

  13. An overview of changes in pressure values of the middle ear using impedance audiometry among diver candidates in a hyperbaric chamber before and after a pressure test

    Science.gov (United States)

    Anoraga, J. S.; Bramantyo, B.; Bardosono, S.; Simanungkalit, S. H.; Basiruddin, J.

    2017-08-01

    Impedance audiometry is not yet routinely used in pressure tests, especially in Indonesia. Direct exposure to pressure in a hyperbaric chamber sometimes occurs without any assessment of the middle ear or of Eustachian tube function (ETF) for ventilation. Impedance audiometry examinations are important to assess ETF ventilation. This study determined the middle ear pressure value changes associated with the ETF (ventilation) of prospective divers. The study included 29 prospective divers aged 20-40 years without conductive hearing loss. All subjects underwent a modified diving impedance audiometry examination both before and after the pressure test in a double-lock hyperbaric chamber. Using the Toynbee maneuver, the changes in middle ear pressure obtained before and after the pressure test were significant in both the right and left ears. An impedance audiometry examination is therefore necessary for the selection of candidate divers undergoing pressure tests within a hyperbaric chamber.

  14. Effect of hearing aids use on speech stimulus decoding through speech-evoked ABR.

    Science.gov (United States)

    Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Gândara, Mara; Garbi, Sergio; Bento, Ricardo Ferreira; Matas, Carla Gentile

    2016-12-08

    The electrophysiological responses obtained with the complex auditory brainstem response (cABR) provide objective measures of subcortical processing of speech and other complex stimuli. The cABR has also been used to verify plasticity in the subcortical regions of the auditory pathway. To compare the results of cABR obtained in children using hearing aids before and after 9 months of adaptation, as well as to compare the results of these children with those obtained in children with normal hearing. Fourteen children with normal hearing (Control Group, CG) and 18 children with mild to moderate bilateral sensorineural hearing loss (Study Group, SG), aged 7-12 years, were evaluated. The children underwent pure-tone and speech audiometry, acoustic immittance measurements, and ABR with speech stimuli, and were evaluated at three different moments: initial evaluation (M0), 3 months after the initial evaluation (M3), and 9 months after the initial evaluation (M9); at M0, the children in the study group were not yet using hearing aids. When comparing the CG and the SG, it was observed that the SG had a lower median V-A amplitude at M0 and M3, a lower median latency of component V at M9, and a higher median latency of component O at M3 and M9. A reduction in the latency of component A at M9 was observed in the SG. Children with mild to moderate hearing loss showed speech stimulus processing deficits, and the main impairment is related to the decoding of the transient portion of the stimulus spectrum. The use of hearing aids was shown to promote neuronal plasticity of the central auditory nervous system after an extended period of sensory stimulation. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  15. The role of ultrahigh-frequency audiometry in the early detection of systemic drug-induced hearing loss.

    Science.gov (United States)

    Singh Chauhan, Rajeev; Saxena, Ravinder Kumar; Varshey, Saurabh

    2011-05-01

    In monitoring patients for drug-induced hearing loss, most audiometric evaluations are limited to the range of frequencies from 0.25 to 8 kHz. However, such testing would fail to detect ototoxicity in patients who have already experienced hearing loss in the ultrahigh frequencies from 10 to 20 kHz. Awareness of ultrahigh-frequency ototoxicity could lead to changes in a drug regimen to prevent further damage. We conducted a prospective study of 105 patients who were receiving a potentially ototoxic drug (gentamicin, amikacin, or cisplatin) to assess the value of ultrahigh-frequency audiometry in detecting systemic drug-induced hearing loss. We found that expanding audiometry into the ultrahigh-frequency range led to the detection of a substantial number of cases of hearing loss that would otherwise have been missed.

  16. The Role of Audiometry prior to High-Dose Cisplatin in Patients with Head and Neck Cancer.

    Science.gov (United States)

    Caballero, Miguel; Mackers, Paula; Reig, Oscar; Buxo, Elvira; Navarrete, Pilar; Blanch, Jose L; Grau, Juan J

    2017-01-01

    To analyze the role of audiometry in considering a change to a less ototoxic treatment in head and neck cancer (HNC) patients. Consecutive patients prescribed high-dose cisplatin (100 mg/m2) between January 2013 and February 2015 were enrolled. Audiometry was performed at baseline and before cisplatin. A change to a less ototoxic agent or a reduced cisplatin dose was considered with audiometric decreases >25 dB. A total of 103 patients were included; the median age of the patients was 59 years (range 18-75). Cisplatin was intended as curative (58%), adjuvant (32%), or palliative (10%). Forty-two participants (41%) did not commence high-dose cisplatin because of baseline audiometric alterations. Of 61 patients treated with high-dose cisplatin, 40 (66%) showed marked ototoxicity at the end of treatment. Comparison of the initial and final audiometries showed hearing loss at 4 and 8 kHz in both ears (p = 0.002). Thirteen patients switched to carboplatin and 15 to a lower dose of cisplatin. The outcome was not significantly altered when cisplatin was replaced with carboplatin or cetuximab. Audiometric alterations are common in HNC with high-dose cisplatin, and switching to a less ototoxic regimen does not adversely affect outcome. Audiometric examination could help to prevent hearing loss in this population. © 2017 S. Karger AG, Basel.

  17. Speech-Language Pathologists

    Science.gov (United States)

    ... State & Area Data Explore resources for employment and wages by state and area for speech-language pathologists. Similar Occupations Compare the job duties, education, job growth, and pay of speech-language pathologists ...

  18. Speech disorders - children

    Science.gov (United States)

    Speech disorders - children. Speech disorders are different from language disorders in children. Language disorders refer to someone having difficulty with: ...

  19. Apraxia of Speech

    Science.gov (United States)

    ... children, such as in a classroom. Therefore, speech-language therapy is necessary for children with AOS as well ... with AOS. Frequent, intensive, one-on-one speech-language therapy sessions are needed for both children and adults ...

  20. Speech perception as categorization

    National Research Council Canada - National Science Library

    Holt, Lori L; Lotto, Andrew J

    2010-01-01

    Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words...

  1. Is behavioral audiometry achievable in infants younger than 6 months of age?

    Science.gov (United States)

    Delaroche, Monique; Gavilan-Cellié, Isabelle; Maurice-Tison, Sylvie; Kpozehouen, Alphonse; Dauman, René

    2011-12-01

    When carried out in addition to objective tests, behavioral audiometry performed in children with the so-called "Delaroche protocol" [IJORL 68 (2004) 1233-1243] makes it possible to determine hearing thresholds by air and bone conduction over the whole auditory frequency range. In the present report, seventy-three hearing-impaired infants with different levels of motor and cognitive development were tested behaviorally before 6 months of age. The reliability of these early behavioral thresholds was then analyzed in (a) a cross-sectional study and (b) a longitudinal study. The cross-sectional study compared click-evoked ABR thresholds in the better ear with binaural high-frequency hearing thresholds. In the longitudinal study, early measured binaural hearing thresholds from 500 through 4000 Hz were reassessed at 18 months. In 13% of babies, behavioral testing was not fully completed by 6 months of age. Nevertheless, both the cross-sectional and longitudinal studies yielded intraclass correlation coefficients above 0.80, suggesting that behavioral testing is applicable to this very young population. Assessment of hearing after newborn screening should not be restricted to objective tests before 5½ months. It should also include bone- and air-conduction behavioral tests adjusted to developmental stage and performed in the presence of parents. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. The effect of head movement and head positioning on sound field audiometry.

    Science.gov (United States)

    Shaw, Paul; Greenwood, Hannah

    2012-06-01

    Positioning and maintaining the subject's head at the calibration point (CP) of the sound field (SF) during SF assessment remains a challenge. The purpose of this study was to investigate the sound pressure level (SPL) at head positions likely to be encountered in routine audiological practice. Eight National Health Service SF clinics were used to obtain SPL measurements. Part 1 of the study investigated SPL variability at positions around the CP (0.15 m and 0.30 m). Parts 2 and 3 of the study investigated the SPL at two typical head heights of the infant population. Only sound field measures were obtained. Part 1: 32% and 40% of measurements of SPL around the CP were >2 dB different from the SPL at the CP (at 0.15 m and 0.30 m, respectively). Parts 2 and 3: 55% and 38% of measurements of SPL at the two infant head heights were >2 dB from the SPL at the CP. Variability in SPL due to head movement is to be expected when performing SF audiometry. Furthermore, the typical head heights of infants will introduce additional variability unless the position of the CP is chosen carefully.

  3. Rapid Systematic Review of Normal Audiometry Results as a Predictor for Benign Paroxysmal Positional Vertigo.

    Science.gov (United States)

    Dorresteijn, Paul M; Ipenburg, Norbertus A; Murphy, Kathryn J; Smit, Michelle; van Vulpen, Jonna K; Wegner, Inge; Stegeman, Inge; Grolman, Wilko

    2014-06-01

    To evaluate whether absence of hearing loss on pure-tone audiometry (PTA) is reliable as a diagnostic test for predicting benign paroxysmal positional vertigo (BPPV) in adult patients with vertigo. PubMed, Embase, and the Cochrane Library. A systematic literature search was conducted on December 10, 2013. Relevant publications were selected based on title, abstract, and full text. Selected articles were assessed for relevance and risk of bias using predetermined criteria. Prevalence and the positive and negative predictive value (PPV and NPV) were extracted. Of 603 retrieved publications, 1 article with high relevance and moderate risk of bias was included. In this study, the prevalence of BPPV was 28%. The PPV of hearing loss assessed by PTA was 31% (95% CI, 17-49) and the NPV was 73% (95% CI, 61-83). The absence of hearing loss on PTA decreased the risk of BPPV by 1%. There is insufficient high-quality evidence regarding the diagnostic value of the absence of hearing loss, assessed by PTA, for predicting BPPV in adult patients with vertigo. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
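
    The 1% figure follows directly from the reported prevalence and negative predictive value: the post-test probability of BPPV given a normal PTA is 1 minus the NPV, one percentage point below the pre-test probability:

        prevalence = 0.28  # pre-test probability of BPPV in the included study
        npv = 0.73         # negative predictive value of a normal PTA

        post_test = 1 - npv                          # 0.27
        print(f"{prevalence - post_test:.2f}")       # 0.01, i.e. the 1% decrease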

  4. Efficacy of earphones for 12- to 24-month-old children during visual reinforcement audiometry.

    Science.gov (United States)

    Weiss, Allyson D; Karzon, Roanne K; Ead, Banan; Lieu, Judith E C

    2016-01-01

    Efficacy of insert and supra-aural earphones during visual reinforcement audiometry (VRA) was investigated for 12- to 24-month-old children. VRA testing began in the soundfield and transitioned to either insert or supra-aural earphones. Audiologists recorded threshold estimates, participant behaviors, and an overall subjective rating of earphone acceptance. One hundred and eighty-six 12- to 24-month-old children referred to the Department of Audiology at St. Louis Children's Hospital for a variety of reasons. Subjective ratings indicated high acceptance of insert earphones (84%) and supra-aural earphones (80%) despite negative behaviors. There was no significant difference in the number of threshold estimates based on earphone type for 12- to 17-month-old participants. Participants in the 18- to 24-month-old age group provided significantly more threshold estimates with insert earphones (mean = 5.3 threshold estimates, SD = 3.5) than with supra-aural earphones (mean = 2.9 threshold estimates, SD = 2.9). All seven participants who rejected earphone placement were successfully reconditioned for soundfield testing. Data support the use of insert earphones during VRA, especially with 18-to 24-month-old children, to obtain ear-specific information.

  5. Contrast sensitivity test and conventional and high frequency audiometry: information beyond that required to prescribe lenses and headsets

    Science.gov (United States)

    Comastri, S. A.; Martin, G.; Simon, J. M.; Angarano, C.; Dominguez, S.; Luzzi, F.; Lanusse, M.; Ranieri, M. V.; Boccio, C. M.

    2008-04-01

    In Optometry and in Audiology, the routine tests used to prescribe correction lenses and headsets are, respectively, the visual acuity test (the first chart with letters was developed by Snellen in 1862) and conventional pure-tone audiometry (the first audiometer with electrical current was devised by Hartmann in 1878). At present there are psychophysical, non-invasive tests that, besides evaluating visual and auditory performance globally, even in cases catalogued as normal according to routine tests, supply early information regarding diseases such as diabetes, hypertension, renal failure, cardiovascular problems, etc. In Optometry, one of these tests is the achromatic luminance contrast sensitivity test (introduced by Schade in 1956). In Audiology, one of these tests is high-frequency pure-tone audiometry (introduced a few decades ago), which yields information on pathologies affecting the basal cochlea and complements data resulting from conventional audiometry. These capabilities of the contrast sensitivity test and of pure-tone audiometry derive from the facts that Fourier components constitute the basis for synthesizing the stimuli present at the entrance of the visual and auditory systems, that the responses of these systems depend on frequency, and that the patient's psychophysical state affects frequency processing. The frequency of interest in the former test is the effective spatial frequency (the inverse of the angle subtended at the eye by one cycle of a sinusoidal grating, measured in cycles/degree) and, in the latter, the temporal frequency (measured in cycles/sec). Both tests are of similar duration and consist of determining the patient's threshold (corresponding to the multiplicative inverse of the contrast, or to the additive inverse of the sound intensity level) for each harmonic stimulus present at the system entrance (sinusoidal grating or pure-tone sound). In this article the frequencies, standard normality curves and abnormal threshold shifts
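
    The effective spatial frequency defined above (cycles per degree of visual angle) can be computed from the grating cycle width and viewing distance; a small illustrative helper:

        import math

        def spatial_frequency_cpd(cycle_width_m, viewing_distance_m):
            # Visual angle subtended by one grating cycle, then cycles/degree.
            angle_deg = math.degrees(
                2 * math.atan(cycle_width_m / (2 * viewing_distance_m)))
            return 1.0 / angle_deg

        # A 5 mm cycle viewed from 2 m is roughly 7 cycles/degree:
        print(spatial_frequency_cpd(0.005, 2.0))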

  6. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  8. Examination of Hearing in a Rheumatoid Arthritis Population: Role of Extended-High-Frequency Audiometry in the Diagnosis of Subclinical Involvement

    Directory of Open Access Journals (Sweden)

    Mar Lasso de la Vega

    2016-01-01

    Objective. The aim of this study is to analyze the high-frequency hearing levels in patients with rheumatoid arthritis and to determine the relationship between hearing loss, disease duration, and immunological parameters. Materials and Methods. A descriptive cross-sectional study including fifty-three patients with rheumatoid arthritis was performed. The control group consisted of 71 age- and sex-matched patients from the study population (consecutively recruited in Madrid "Area 9," from January 2010 to February 2011). Both a pure tone audiometry and an extended-high-frequency audiometry were performed. Results. Extended-high-frequency audiometry diagnosed sensorineural hearing loss in 69.8% of the patients, which exceeded the results obtained with pure tone audiometry (43% of the patients). This study found significant correlations in patients with sensorineural hearing loss related to age, sex, and serum anti-cardiolipin (aCL) antibody levels. Conclusion. Sensorineural hearing loss must be considered within the clinical context of rheumatoid arthritis. Our results demonstrated that an extended-high-frequency audiometry is a useful audiological test that must be performed within the diagnostic and follow-up testing of patients with rheumatoid arthritis, providing further insight into a disease-modifying treatment or a hearing loss preventive treatment.

  11. An evaluation of the cross-check principle using visual reinforcement audiometry, otoacoustic emissions, and tympanometry.

    Science.gov (United States)

    Baldwin, Stacey M; Gajewski, Byron J; Widen, Judith E

    2010-03-01

    Early intervention to reduce the effects of congenital hearing loss requires accurate description of the hearing loss. In pediatric audiology, a cross-check principle is used to compare behavioral and physiological tests. The purpose of this study was to investigate the correspondence among visual reinforcement audiometry (VRA) minimal response levels (MRLs), otoacoustic emissions (OAEs), tympanometry, and VRA test reliability, to determine the odds of obtaining the expected cross-check results. We hypothesized that (1) when MRLs were within normal limits (WNL), OAEs would be present; (2) in the event of normal MRLs and absent OAEs, tympanograms would be abnormal; and (3) in the event of elevated MRLs and present OAEs, the tester's confidence in the MRLs would be judged to be only fair, rather than good. This was a retrospective study. A previous study provided data from 993 infants who had diagnostic audiologic evaluations at 8-12 mo of age. The data were analyzed to compare VRA MRLs with OAE signal-to-noise ratios at 1, 2, and 4 kHz. Odds ratios and 95% confidence intervals were calculated to test the three hypotheses relating MRLs, OAEs, tympanometry, and the reliability of MRLs. The odds that OAEs would be present when MRLs were WNL varied from 12:1 to 26:1, depending on the test frequency. When OAEs were absent in the presence of normal MRLs, the odds of abnormal tympanometry varied from 5:1 to 10:1, depending on the test frequency. When MRLs were elevated (>20 dB HL), the odds suggested that examiners judged the MRLs at 1 and 2 kHz to lack reliability. The results suggest that the cross-check principle is effective when employing VRA, OAEs, and tympanometry to rule out or determine the degree, type, and configuration of hearing loss in infants. American Academy of Audiology.
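
    The odds ratios and 95% confidence intervals mentioned above are conventionally obtained from a 2x2 table via the standard error of the log odds ratio; a sketch with hypothetical counts, since the abstract does not give the cell counts:

        import math

        def odds_ratio_with_ci(a, b, c, d, z=1.96):
            # 2x2 table: a, b = outcome present/absent in group 1;
            #            c, d = outcome present/absent in group 2.
            odds_ratio = (a * d) / (b * c)
            se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lower = math.exp(math.log(odds_ratio) - z * se_log_or)
            upper = math.exp(math.log(odds_ratio) + z * se_log_or)
            return odds_ratio, (lower, upper)

        # e.g. OAEs present/absent for ears with normal vs elevated MRLs:
        print(odds_ratio_with_ci(120, 10, 30, 25))  # OR = 10.0 and its 95% CI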

  12. Behavioral audiometry: protocols for measuring hearing thresholds in babies aged 4-18 months.

    Science.gov (United States)

    Delaroche, Monique; Thiebaut, Rodolphe; Dauman, René

    2004-10-01

    This paper provides the first report in English of original behavioral audiometry protocols for measuring hearing thresholds in very young children, including the multiply handicapped. Based on reactions to one or two well-calibrated acoustic stimulations delivered in the sound field, the protocol first involves the use of a vibrator to measure hearing levels by bone conduction. This measurement technique, which is not affected by middle ear infections, is the key diagnostic step. Moreover, in children with profound hearing loss, it triggers reactions through vibratory stimulation and sets the scene for the conditioning of responses. Next, hearing levels are assessed by air conduction with the aid of headphones, in order to measure hearing levels in each ear as early as possible. A unique set-up is used to facilitate the emergence of reliable "surprise reactions", which may be interpreted by a sole examiner. Classical visual reinforcement is replaced by a highly interactive, dynamic and playful exchange between child and examiner, which gives meaning to the perception of stimuli and heralds the learning of hearing. The results concern 105 babies suffering from bilateral sensorineural hearing loss, aged 4-18 months at the first behavioral test. Group 1 comprised 91 babies with no other handicap, in whom full bilateral air-conduction results were obtained in 82.4% before 12 months and in 98.9% before 18 months. In this group, air conduction in each ear was obtained in 47.0% before 12 months and in 70.3% before 18 months. In Group 2, which included 14 multiply handicapped babies, full bilateral air conduction was obtained in 37.5% before 12 months and in 78.6% before 18 months. Air conduction in both ears was obtained in 28.6% before 18 months. The protocols described make it possible, in a minimum number of sessions, to measure hearing thresholds early over the whole range of hearing frequencies, even in multiply handicapped babies and those suffering from developmental

  13. Assessment of hearing loss by pure-tone audiometry in patients with mucopolysaccharidoses.

    Science.gov (United States)

    Lin, Hsiang-Yu; Shih, Shou-Chuan; Chuang, Chih-Kuang; Lee, Kuo-Sheng; Chen, Ming-Ren; Lin, Hung-Ching; Chiu, Pao Chin; Niu, Dau-Ming; Lin, Shuan-Pei

    2014-04-01

    Patients with mucopolysaccharidoses (MPS) often have hearing loss. However, the characterization of hearing loss by pure-tone audiometry (PTA) in this rare disease population and its relationship to age and treatment is limited. PTA was performed in 39 patients with MPS (29 males and 10 females; 3 with MPS I, 21 with MPS II, 9 with MPS IVA, and 6 with MPS VI; median age, 11.9 years; age range, 4.4-34.2 years). The degree of hearing loss was classified by the age-independent World Health Organization (WHO) clinical guidelines. Hearing loss by PTA was present in 85% (33/39) of patients and was categorized as mild (26-40 dB) in 18%, moderate (41-60 dB) in 36%, severe (61-80 dB) in 23%, and profound (≥81 dB) in 5%. Among the patients with hearing loss, 33% were classified as mixed type (conductive and sensorineural), 30% as pure conductive type, 27% as pure sensorineural type, and 9% were undefined. The means of the right and left ear hearing thresholds at 2000 and 4000 Hz by air conduction (AC) and at 500, 1000, 2000, and 4000 Hz by bone conduction (BC) were all positively correlated with age (p < 0.05). In the 6 patients with MPS II or VI who underwent follow-up PTA after ventilation tube insertion and enzyme replacement therapy for 1.9 to 8.5 years, all showed improvements in AC and BC of the better ear, as well as in the air-bone gap. Hearing impairment is common in MPS. Early otolaryngological evaluation and intervention are recommended. These findings and the follow-up data can be used to develop quality of care strategies for patients with MPS. Copyright © 2014 Elsevier Inc. All rights reserved.
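
    The WHO-style grading used above maps a hearing threshold in dB to a severity category; a minimal sketch of that classification:

        def who_grade(threshold_db):
            # Cutoffs as reported in the abstract: mild 26-40, moderate 41-60,
            # severe 61-80, profound >= 81 dB.
            if threshold_db <= 25:
                return "normal"
            if threshold_db <= 40:
                return "mild"
            if threshold_db <= 60:
                return "moderate"
            if threshold_db <= 80:
                return "severe"
            return "profound"

        print([who_grade(t) for t in (20, 35, 55, 70, 90)])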

  14. Assessment of nasal-noise masking audiometry as a diagnostic test for patulous Eustachian tube.

    Science.gov (United States)

    Paradis, Justin; Bance, Manohar

    2015-02-01

    The primary objective is to assess the validity of nasal-noise masking audiometry (NNMA) as a clinical diagnostic tool in our patient population. Retrospective case review. Tertiary ambulatory referral center. Patients with patulous Eustachian tube (PET) were identified from referrals to our Eustachian tube disorders clinic, primarily with symptoms including autophony, aural fullness, and hearing their own breathing. The healthy subjects had no history of ear disease. NNMA was measured in 20 ears of 10 healthy subjects as well as in 42 ears of 21 patients with suspected PET. NNMA mean auditory thresholds were measured at frequencies ranging from 250 to 8,000 Hz. When stratified as definitive or probable PET based on observed tympanic membrane movement with breathing, both the Definitive and Probable PET groups had significantly higher NNMA mean auditory thresholds compared to Normal ears at 250 Hz (p = 0.001, p = 0.003), 1,000 Hz (p = 0.019, p = 0.001), and 6,000 Hz (p = 0.4, p = 0.001). When stratified based on symptoms on the day of testing, both Symptomatic Ears and Non-Symptomatic Ears had significantly higher mean auditory thresholds compared to Normal ears at 250 Hz (p = 0.001, p = 0.015) and at 1,000 Hz (p = 0.002, p = 0.004). Our results demonstrate a larger masking effect in patients with PET compared to normal subjects in the low-frequency region. In clinical practice, the relatively small effect and the wide variability of results between patients have made this test of little clinical value in our patient population.

  15. Managing the reaction effects of speech disorders on speech ...

    African Journals Online (AJOL)

    Managing the reaction effects of speech disorders on speech defectives. ... DOWNLOAD FULL TEXT Open Access DOWNLOAD FULL TEXT Subscription or Fee Access ... Unfortunately, it is the speech defectives that bear the consequences resulting from penalizing speech disorders. Consequences for punishing speech ...

  16. A comparison of conventional and in-situ audiometry on participants with varying levels of sensorineural hearing loss.

    Science.gov (United States)

    Kiessling, Jürgen; Leifholz, Melanie; Unkel, Steffen; Pons-Kühnemann, Jörn; Jespersen, Charlotte Thunberg; Pedersen, Jenny Nesgaard

    2015-01-01

    In-situ audiometry is a hearing aid feature that enables the measurement of hearing threshold levels through the hearing instrument using the built-in sound generator and the hearing aid receiver. This feature can be used in hearing aid fittings instead of conventional pure-tone audiometry (PTA), particularly in places where no standard audiometric equipment is available. Differences between conventional and in-situ thresholds are described and discussed for some particular hearing aids. No previous investigation has measured and compared these differences for a number of current hearing aid models by various manufacturers across a wide range of hearing losses. The purpose of this study was to perform a model-based comparison of conventionally and in-situ measured hearing thresholds. Data were collected for a range of hearing aid devices to study and generalize the effects that may occur under clinical conditions. Research design was an experimental and regression study. A total of 30 adults with sensorineural hearing loss served as test persons. They were assigned to three subgroups of 10 subjects with mild (M), moderate to severe (MS), and severe (S) sensorineural hearing loss. All 30 test persons underwent both conventional PTA and in-situ audiometry with four hearing aid models by various manufacturers. The differences between conventionally and in-situ measured hearing threshold levels were calculated and evaluated by an exploratory data analysis followed by a sophisticated statistical modeling process. At 500 and 1500 Hz, almost all threshold differences (conventional PTA minus in-situ data) were negative, i.e., in the low to mid frequencies, hearing loss was overestimated by most devices relative to PTA. At 4000 Hz, the majority of differences (7 of 12) were positive, i.e., in the frequency range above 1500 Hz, hearing loss was frequently underestimated. As hearing loss increased (M→MS→S), the effect of the underestimation decreased. At 500 and 1500 Hz

  17. Comparison of pure tone audiometry and auditory steady-state responses in subjects with normal hearing and hearing loss.

    Science.gov (United States)

    Ozdek, Ali; Karacay, Mahmut; Saylam, Guleser; Tatar, Emel; Aygener, Nurdan; Korkmaz, Mehmet Hakan

    2010-01-01

    The objective of this study is to compare pure tone audiometry and auditory steady-state response (ASSR) thresholds in normal hearing (NH) subjects and subjects with hearing loss. This study involved 23 NH adults and 38 adults with hearing loss (HI). After detection of behavioral thresholds (BHT) with pure tone audiometry, each subject was tested for ASSR responses on the same day. Only one ear was tested for each subject. The mean pure tone average was 9 ± 4 dB for the NH group and 57 ± 14 dB for the HI group. There was a very strong correlation between BHT and ASSR measurements in the HI group. However, the correlation was weaker in the NH group. The mean differences between the pure tone average of four frequencies (0.5, 1, 2, and 4 kHz) and the ASSR threshold average at the same frequencies were 13 ± 6 dB in the NH group and 7 ± 5 dB in the HI group, and the difference was significant (P = 0.01). It was found that 86% of threshold difference values were less than 20 dB in the NH group and 92% of threshold difference values were less than 20 dB in the HI group. In conclusion, ASSR thresholds can be used to predict the configuration of pure tone audiometry. Results are more accurate in the HI group than the NH group. Although ASSR can be used in the cochlear implant decision-making process, the findings do not permit the utilization of the test for medico-legal purposes.
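
    A minimal sketch of the kind of comparison reported here, correlating behavioral and ASSR thresholds; the paired threshold values are hypothetical, not study data:

        import numpy as np

        # Hypothetical paired thresholds (dB HL) at one frequency.
        bht  = np.array([55, 60, 45, 70, 65, 50])   # behavioral thresholds
        assr = np.array([60, 70, 50, 75, 70, 60])   # ASSR thresholds, same ears

        r = np.corrcoef(bht, assr)[0, 1]            # Pearson correlation
        mean_diff = np.mean(assr - bht)             # mean ASSR-behavioral offset
        print(f"r = {r:.2f}, mean difference = {mean_diff:.1f} dB")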

  18. Articulatory speech synthesis and speech production modelling

    Science.gov (United States)

    Huang, Jun

    This dissertation addresses the problem of speech synthesis and speech production modelling based on the fundamental principles of human speech production. Unlike the conventional source-filter model, which assumes the independence of the excitation and the acoustic filter, we treat the entire vocal apparatus as one system consisting of a fluid dynamic aspect and a mechanical part. We model the vocal tract by a three-dimensional moving geometry. We also model the sound propagation inside the vocal apparatus as a three-dimensional nonplane-wave propagation inside a viscous fluid described by Navier-Stokes equations. In our work, we first propose a combined minimum energy and minimum jerk criterion to estimate the dynamic vocal tract movements during speech production. Both theoretical error bound analysis and experimental results show that this method can achieve a very close match at the target points and avoid abrupt changes in the articulatory trajectory at the same time. Second, a mechanical vocal fold model is used to compute the excitation signal of the vocal tract. The advantage of this model is that it is closely coupled with the vocal tract system based on fundamental aerodynamics. As a result, we can obtain an excitation signal with much more detail than the conventional parametric vocal fold excitation model. Furthermore, strong evidence of source-tract interaction is observed. Finally, we propose a computational model of the fricative and stop types of sounds based on the physical principles of speech production. The advantage of this model is that it uses an exogenous process to model the additional nonsteady and nonlinear effects due to the flow mode, which are ignored by the conventional source-filter speech production model. A recursive algorithm is used to estimate the model parameters. Experimental results show that this model is able to synthesize good quality fricative and stop types of sounds. Based on our dissertation work, we carefully argue
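
    The combined minimum-energy and minimum-jerk criterion can be illustrated with a discrete one-dimensional cost; the weights, finite-difference scheme, and trajectories below are illustrative assumptions, not the dissertation's actual formulation:

        import numpy as np

        def trajectory_cost(x, dt, w_energy=1.0, w_jerk=1.0):
            # Weighted sum of squared velocity (energy term) and squared jerk
            # (smoothness term) for a sampled articulator trajectory x.
            v = np.diff(x) / dt              # first difference: velocity
            jerk = np.diff(x, n=3) / dt**3   # third difference: jerk
            return w_energy * np.sum(v**2) * dt + w_jerk * np.sum(jerk**2) * dt

        # A smooth gesture scores lower than an abrupt articulator jump.
        t = np.linspace(0.0, 1.0, 101)
        smooth = np.sin(np.pi * t)
        abrupt = np.where(t < 0.5, 0.0, 1.0)
        print(trajectory_cost(smooth, 0.01) < trajectory_cost(abrupt, 0.01))  # True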

  19. Speech intelligibility with and without noise in individuals exposed to electronic music.

    Science.gov (United States)

    Kuchar, Jéssica; Junqueira, Cássia Menin Cabrini

    2010-01-01

    Audiometry is the main method for evaluating hearing, because it is a universal and standardized test. Speech tests are difficult to standardize owing to the variables involved, so their performance in the presence of competing noise is of great importance. The aim was to characterize speech intelligibility in silence and in competing noise in individuals exposed to electronically amplified music. The study was performed with 20 university students with normal hearing thresholds. The speech recognition rate (SRR) was measured following exposure to electronically amplified music and again after fourteen hours of sound rest, in three conditions: without competing noise, and in the presence of babble-type competing noise in monotic listening at signal-to-noise ratios of +5 dB and −5 dB. The SRR was poorer after exposure to music and in competing noise, and as the signal-to-noise ratio decreased, performance on the test also decreased. Including competing noise in speech tests in the audiological routine is important, because it represents the real disadvantage experienced by individuals in everyday listening.

  20. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal, and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  1. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networksOffering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  2. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Speech-language therapy has moved from a medical focus toward a preventive focus. However, difficulties are evident in carrying out this preventive task, because more space is devoted to the correction of language disorders. Since speech disorders are the most frequently occurring dysfunction, the preventive work carried out to avoid their appearance takes on special importance. Speech education from early childhood makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  3. Decreased Speech-In-Noise Understanding in Young Adults with Tinnitus

    Science.gov (United States)

    Gilles, Annick; Schlee, Winny; Rabau, Sarah; Wouters, Kristien; Fransen, Erik; Van de Heyning, Paul

    2016-01-01

    Objectives: Young people are often exposed to high music levels which put them at risk of developing noise-induced symptoms such as hearing loss, hyperacusis, and tinnitus, of which the latter is the symptom most often perceived by young adults. Although subclinical neural damage has been demonstrated in animal experiments, the human correlate remains under debate. Controversy exists on the underlying condition of young adults with normal hearing thresholds and noise-induced tinnitus (NIT) due to leisure noise. The present study aimed to assess differences in audiological characteristics between noise-exposed adolescents with and without NIT. Methods: A group of 87 young adults with a history of recreational noise exposure was investigated by use of the following tests: otoscopy, impedance measurements, pure-tone audiometry including high frequencies, transient and distortion product otoacoustic emissions, speech-in-noise testing with continuous and modulated noise (amplitude-modulated by 15 Hz), auditory brainstem responses (ABR), and questionnaires. Nineteen students reported NIT due to recreational noise exposure, and their measures were compared to those of the non-tinnitus subjects. Results: No significant differences between tinnitus and non-tinnitus subjects could be found for hearing thresholds, otoacoustic emissions, and ABR results. Tinnitus subjects had significantly worse speech reception in noise compared to non-tinnitus subjects for sentences embedded in steady-state noise (mean speech reception threshold (SRT) scores, respectively −5.77 and −6.90 dB SNR; p = 0.025) as well as for sentences embedded in 15 Hz AM-noise (mean SRT scores, respectively −13.04 and −15.17 dB SNR; p = 0.013). In both groups speech reception was significantly improved during AM-15 Hz noise compared to the steady-state noise condition. No group differences emerged for audiometry, OAE, and ABR. However, tinnitus patients showed decreased speech-in-noise reception. The results are discussed in the light of previous

  4. Valproate-induced reversible sensorineural hearing loss: a case report with serial audiometry and pharmacokinetic modelling during a valproate rechallenge.

    Science.gov (United States)

    Yeap, Li-Ling; Lim, Kheng-Seang; Lo, Yoke-Lin; Bakar, Mohd Zukiflee Abu; Tan, Chong-Tin

    2014-09-01

    Hearing loss has been reported with valproic acid (VPA) use. However, this is the first case of VPA-induced hearing loss that was tested and confirmed with a VPA rechallenge, supported by serial audiometry and pharmacokinetic modelling. A 39-year-old truck driver with temporal lobe epilepsy was treated with VPA at 400 mg, twice daily, and developed hearing loss after each dose, but recovered within three hours. Hearing loss fully resolved after VPA discontinuation. Audiometry performed five hours after VPA rechallenge showed significant improvement in hearing thresholds. Pharmacokinetic modelling during the VPA rechallenge showed that hearing loss occurred at a level below the therapeutic range. Brainstem auditory evoked potential at three months after VPA discontinuation showed bilateral conduction defect between the cochlear and superior olivary nucleus, supporting a pre-existing auditory deficit. VPA may cause temporary hearing threshold shift. Pre-existing auditory defect may be a risk factor for VPA-induced hearing loss. Caution should be taken while prescribing VPA to patients with pre-existing auditory deficit.

  5. Frequency-specific electric response audiometry (ERA) and its clinical application in the diagnosis of hearing defects in the dog.

    Science.gov (United States)

    Schacks, S; Rohn, K; Hauschild, G

    2006-03-01

    Reference values were established for frequency-specific electric response audiometry (ERA) in dogs on the basis of the results of ERA examinations of 200 animals with normal hearing. Air-conducting acoustic tubes with foam stoppers were used in the determination of the following: the latencies of waves I, III and V; interpeak latencies (IPL) I-III, III-V and I-V; amplitudes I and V; and the amplitude difference I-V. A frequency-specific stimulus (tone pip) was used for frequency-specific examination (1 to 4 kHz) over the entire frequency range indicated. These reference values were then used for the clinical examination of 50 dogs with hearing defects. A frequency-specific ERA was conducted and the results evaluated. These findings made it possible to draw objective conclusions about the degree, type and site of the hearing defects. Frequency-specific electric response audiometry was shown to be an important diagnostic tool for the detection of partial high- and low-frequency hearing loss and for the characterisation of hearing defects of otological, otoneurological and neurological origin.

  6. Physics and Speech Therapy.

    Science.gov (United States)

    Duckworth, M.; Lowe, T. L.

    1986-01-01

    Describes development and content of a speech science course taught to speech therapists for two years, modified by feedback from those two classes. Presents basic topics and concepts covered. Evaluates a team teaching approach as well as the efficacy of teaching physics relevant to vocational interests. (JM)

  7. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  8. Tracking Speech Sound Acquisition

    Science.gov (United States)

    Powell, Thomas W.

    2011-01-01

    This article describes a procedure to aid in the clinical appraisal of child speech. The approach, based on the work by Dinnsen, Chin, Elbert, and Powell (1990; Some constraints on functionally disordered phonologies: Phonetic inventories and phonotactics. "Journal of Speech and Hearing Research", 33, 28-37), uses a railway idiom to track gains in…

  9. Transitions as Speech Acts.

    Science.gov (United States)

    Fesmire, Alice Ann

    1993-01-01

    Reviews speech act theory to explain the function of writing transitions in terms of the illocutionary and perlocutionary effect of explicit performatives. Identifies explicit performatives in samples of professional writing in technical and academic areas. Suggest ways to revise textbooks to include the findings from speech act theory. (SR)

  10. Personality and Simultaneous Speech.

    Science.gov (United States)

    Feldstein, Stanley; And Others

    The purpose of the study was to examine the relationship of the limitation and outcome of simultaneous speech to those dimensions of personality indexes by Cattell's 16PF Questionnaire. More than 500 conversations of 24 female college students were computer-analyzed for instances of simultaneous speech, and the frequencies with which they…

  11. Automatic speech recognition

    Science.gov (United States)

    Espy-Wilson, Carol

    2005-04-01

    Great strides have been made in the development of automatic speech recognition (ASR) technology over the past thirty years. Most of this effort has been centered around the extension and improvement of Hidden Markov Model (HMM) approaches to ASR. Current commercially available and industry systems based on HMMs can perform well for certain situational tasks that restrict variability, such as phone dialing or limited voice commands. However, the holy grail of ASR systems is performance comparable to humans; in other words, the ability to automatically transcribe unrestricted conversational speech spoken by an infinite number of speakers under varying acoustic environments. This goal is far from being reached. Key to the success of ASR is effective modeling of variability in the speech signal. This tutorial will review the basics of ASR and the various ways in which our current knowledge of speech production, speech perception and prosody can be exploited to improve robustness at every level of the system.
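
    Since the abstract centers on HMM approaches, a toy forward-algorithm sketch may help fix ideas; the two-state model and all probabilities below are made up for illustration and are not from the tutorial:

        import numpy as np

        pi = np.array([0.6, 0.4])            # initial state probabilities
        A = np.array([[0.7, 0.3],            # state transition matrix
                      [0.4, 0.6]])
        B = np.array([[0.5, 0.4, 0.1],       # emission probabilities for
                      [0.1, 0.3, 0.6]])      # 3 discrete observation symbols

        def forward_likelihood(obs):
            # P(observation sequence | model) via the forward recursion.
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            return alpha.sum()

        print(forward_likelihood([0, 1, 2]))  # likelihood of a short sequence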

  12. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  13. Age-related hearing loss in dogs : Diagnosis with Brainstem-Evoked Response Audiometry and Treatment with Vibrant Soundbridge Middle Ear Implant.

    NARCIS (Netherlands)

    ter Haar, G.|info:eu-repo/dai/nl/304828750

    2009-01-01

    Age-related hearing loss (ARHL) is the most common cause of acquired hearing impairment in dogs. Diagnosis requires objective electrophysiological tests (brainstem evoked response audiometry [BERA]) evaluating the entire audible frequency range in dogs. In our laboratory a method was developed to

  14. Brainstem response audiometry in the determination of low-frequency hearing loss : a study of various methods for frequency-specific ABR-threshold assessment

    NARCIS (Netherlands)

    E.A.G.J. Conijn

    1992-01-01

    Brainstem Electric Response Audiometry (BERA) is a method to visualize some of the electric activity generated in the auditory nerve and the brainstem during the processing of sound. The amplitude of the Auditory Brainstem Response (ABR) is very small (0.05-0.5 µV). The potentials

  15. Operant Audiometry Manual for Difficult-to-Test Children. Institute on Mental Retardation and Intellectual Development; Papers and Reports, Volume V, Number 19.

    Science.gov (United States)

    Bricker, Diane D.; And Others

    To facilitate the use of operant audiometry with low functioning children (psychotic, severely retarded, or multiply handicapped), a procedures manual was developed containing definitions of terms, instructions for determining reinforcers, physical facilities and equipment needs, diagrams, component lists, and technical descriptions. Development…

  16. A user-operated audiometry method based on the maximum likelihood principle and the two-alternative forced-choice paradigm

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben

    2014-01-01

    Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm with high test-retest reliability without the need of an external operator and with minimal influence of subjects' fluctuating res...
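
    A minimal sketch of maximum-likelihood threshold estimation under a 2AFC paradigm, assuming a logistic psychometric function with a 50% guessing floor; the slope, candidate grid, and update scheme are illustrative choices, not necessarily those of the published method:

        import numpy as np

        def p_correct(level_db, threshold_db, slope=5.0):
            # 2AFC psychometric function: 50% guessing floor rising toward
            # 100% above threshold; the 5 dB slope is an assumption.
            return 0.5 + 0.5 / (1.0 + np.exp(-(level_db - threshold_db) / slope))

        candidates = np.arange(-10.0, 81.0, 1.0)   # candidate thresholds (dB HL)
        log_like = np.zeros_like(candidates)

        def update(level_db, correct):
            # Accumulate each candidate's log-likelihood after one trial;
            # the next test level is the current maximum-likelihood estimate.
            global log_like
            p = p_correct(level_db, candidates)
            log_like = log_like + np.log(p if correct else 1.0 - p)
            return candidates[np.argmax(log_like)]

        level = update(40.0, correct=True)     # listener chose the right interval
        level = update(level, correct=False)   # listener guessed wrong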

  17. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship between inner speech and overt naming (r = .95), whereas correlations between inner speech and the language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  18. Sperry Univac speech communications technology

    Science.gov (United States)

    Medress, Mark F.

    1977-01-01

    Technology and systems for effective verbal communication with computers were developed. A continuous speech recognition system for verbal input, a word spotting system to locate key words in conversational speech, prosodic tools to aid speech analysis, and a prerecorded voice response system for speech output are described.

  19. Speech segmentation in aphasia.

    Science.gov (United States)

    Peñaloza, Claudia; Benetello, Annalisa; Tuomiranta, Leena; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria Carmen; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2015-01-01

    Speech segmentation is one of the initial and mandatory phases of language learning. Although some people with aphasia have shown a preserved ability to learn novel words, their speech segmentation abilities have not been explored. We examined the ability of individuals with chronic aphasia to segment words from running speech via statistical learning. We also explored the relationships between speech segmentation and aphasia severity, and short-term memory capacity. We further examined the role of lesion location in speech segmentation and short-term memory performance. The experimental task was first validated with a group of young adults (n = 120). Participants with chronic aphasia (n = 14) were exposed to an artificial language and were evaluated in their ability to segment words using a speech segmentation test. Their performance was contrasted against chance level and compared to that of a group of elderly matched controls (n = 14) using group and case-by-case analyses. As a group, participants with aphasia were significantly above chance level in their ability to segment words from the novel language and did not significantly differ from the group of elderly controls. Speech segmentation ability in the aphasic participants was not associated with aphasia severity although it significantly correlated with word pointing span, a measure of verbal short-term memory. Case-by-case analyses identified four individuals with aphasia who performed above chance level on the speech segmentation task, all with predominantly posterior lesions and mild fluent aphasia. Their short-term memory capacity was also better preserved than in the rest of the group. Our findings indicate that speech segmentation via statistical learning can remain functional in people with chronic aphasia and suggest that this initial language learning mechanism is associated with the functionality of the verbal short-term memory system and the integrity of the left inferior frontal region.
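
    The abstract does not spell out the computation, but statistical-learning segmentation is commonly formalized through transitional probabilities between adjacent syllables; a sketch over an invented syllable stream (word forms borrowed from classic segmentation studies):

        from collections import Counter

        # Invented continuous stream over the "words" bidaku, padoti, golabu.
        stream = "bidaku" "padoti" "golabu" "bidaku" "golabu" "padoti"
        syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])

        def transitional_probability(a, b):
            # TP(a -> b) = P(b | a): high inside a word, lower at a boundary.
            return pair_counts[(a, b)] / first_counts[a]

        print(transitional_probability("bi", "da"))   # within word: 1.0
        print(transitional_probability("ku", "pa"))   # across boundary: 0.5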

  20. Clinical validation of automated audiometry with continuous noise-monitoring in a clinically heterogeneous population outside a sound-treated environment.

    Science.gov (United States)

    Brennan-Jones, Christopher G; Eikelboom, Robert H; Swanepoel, De Wet; Friedland, Peter L; Atlas, Marcus D

    2016-09-01

    Examine the accuracy of automated audiometry in a clinically heterogeneous population of adults using the KUDUwave automated audiometer. Prospective accuracy study. Manual audiometry was performed in a sound-treated room; automated audiometry was not conducted in a sound-treated environment. 42 consecutively recruited participants from a tertiary otolaryngology department in Western Australia. Absolute mean differences ranged between 5.12-9.68 dB (air-conduction) and 8.26-15 dB (bone-conduction). A total of 86.5% of manual and automated four-frequency averages (4FAs) were within 10 dB (i.e. ±5 dB); 94.8% were within 15 dB. However, there were significant differences between manual and automated audiometry at 250, 500, 1000, and 2000 Hz (air-conduction) and 500 and 1000 Hz (bone-conduction). The effect of age (≥55 years) on accuracy (p = 0.014) was not significant on linear regression (p > 0.05; R² = 0.11). The presence of a hearing loss (better ear ≥26 dB) did not significantly affect accuracy (p = 0.604; air-conduction), (p = 0.218; bone-conduction). This study provides clinical validation of automated audiometry using the KUDUwave in a clinically heterogeneous population, without the use of a sound-treated environment. Whilst threshold variations were statistically significant, future research is needed to ascertain the clinical significance of such variation.
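
    The agreement statistics reported here (absolute mean differences and the proportion of four-frequency averages within a given window) are straightforward to compute; a sketch over hypothetical paired values, not the study's data:

        import numpy as np

        manual    = np.array([25, 40, 55, 30, 60, 45])   # dB HL, hypothetical
        automated = np.array([30, 45, 50, 30, 70, 50])

        diff = automated - manual
        print("mean absolute difference:", np.mean(np.abs(diff)), "dB")
        print("within 10 dB (i.e. +/-5):", 100 * np.mean(np.abs(diff) <= 5), "%")
        print("within 15 dB           :", 100 * np.mean(np.abs(diff) <= 7.5), "%")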

  1. Speech processing in mobile environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book focuses on speech processing in the presence of low-bit-rate coding and varying background environments. The methods presented in the book exploit speech events which are robust in noisy environments. Accurate estimation of these crucial events will be useful for carrying out various speech tasks such as speech recognition, speaker recognition, and speech rate modification in mobile environments. The authors provide insights into designing and developing robust methods to process speech in mobile environments, covering temporal and spectral enhancement methods that minimize the effect of noise, and examining methods and models for speech and speaker recognition applications in mobile environments.

  2. APPRECIATING SPEECH THROUGH GAMING

    Directory of Open Access Journals (Sweden)

    Mario T Carreon

    2014-06-01

    Full Text Available This paper discusses the Speech and Phoneme Recognition as an Educational Aid for the Deaf and Hearing Impaired (SPREAD application and the ongoing research on its deployment as a tool for motivating deaf and hearing impaired students to learn and appreciate speech. This application uses the Sphinx-4 voice recognition system to analyze the vocalization of the student and provide prompt feedback on their pronunciation. The packaging of the application as an interactive game aims to provide additional motivation for the deaf and hearing impaired student through visual motivation for them to learn and appreciate speech.

  3. A controlled comparison of auditory steady-state responses and pure-tone audiometry in patients with hearing loss.

    Science.gov (United States)

    Wadhera, Raman; Hernot, Sharad; Gulati, Sat Paul; Kalra, Vijay

    2017-01-01

    We performed a prospective interventional study to evaluate correlations between hearing thresholds determined by pure-tone audiometry (PTA) and auditory steady-state response (ASSR) testing in two types of patients with hearing loss and a control group of persons with normal hearing. The study was conducted on 240 ears: 80 ears with conductive hearing loss, 80 ears with sensorineural hearing loss, and 80 normal-hearing ears. We found that mean threshold differences between PTA results and ASSR testing at different frequencies did not exceed 15 dB in any group. Using Pearson correlation coefficient calculations, we determined that the two responses correlated better in patients with sensorineural hearing loss than in those with conductive hearing loss. We conclude that measuring ASSRs can be an excellent complement to other diagnostic methods in determining hearing thresholds.

  4. SPEECH PROCESSING –AN OVERVIEW

    OpenAIRE

    A.INDUMATHI; Dr.E.CHANDRA

    2012-01-01

    One of the earliest goals of speech processing was coding speech for efficient transmission. Later, research spread into various areas such as Automatic Speech Recognition (ASR), Speech Synthesis (TTS), Speech Enhancement, and Automatic Language Translation (ALT). Initially, ASR was used to recognize single words from a small vocabulary; later, many products were developed for continuous speech with large vocabularies. Speech synthesis is used for synthesizing the speech corresponding to a given text...

  5. Speech and Communication Disorders

    Science.gov (United States)

    Many disorders can affect our ability to speak and communicate. They range from saying sounds incorrectly to being completely ... to speak or understand speech. Causes include Hearing disorders and deafness Voice problems, such as dysphonia or ...

  6. Speech and Swallowing

    Science.gov (United States)


  7. Differential Diagnosis of Speech Sound Disorder (Phonological Disorder): Audiological Assessment beyond the Pure-tone Audiogram.

    Science.gov (United States)

    Iliadou, Vasiliki Vivian; Chermak, Gail D; Bamiou, Doris-Eva

    2015-04-01

    According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child's SSD. Central auditory processing disorder clinic pediatric case reports. Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech-language pathologists as a result of slower than expected progress in therapy. Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient's speech sound (phonological) disorder. Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD. American Academy of Audiology.

  8. Comparison of middle latency responses in presbycusis patients with two different speech recognition scores.

    Science.gov (United States)

    Kirkim, Gunay; Madanoglu, Nevma; Akdas, Ferda; Serbetcioglu, M Bulent

    2007-12-01

    The purpose of this study is to evaluate whether the middle latency responses (MLR) can be used for an objective differentiation of patients with presbycusis having relatively good (Group I) and relatively poor (Group II) speech recognition scores. All the participants of these groups had high-frequency down-sloping hearing loss with an average of 26-60 dB HL. Data were collected from the two study groups and a control group using pure tone audiometry, monosyllabic phonetically balanced word and synthetic sentence identification, as well as MLR. The study groups were compared with the control group. When patients in Group I were compared with the control group, only the ipsilateral Na latency of the middle latency evoked response in the right ear was statistically significant, whereas in Group II the ipsilateral Na latency in the right ear and both the ipsilateral and contralateral Na latencies in the left ear were statistically significant. Thus, as an objective complementary tool for the evaluation of the speech perception ability of patients with presbycusis, the Na latency of the MLR may be used in combination with speech discrimination tests.

  9. Information based speech transduction

    OpenAIRE

    Juel Henrichsen, Peter

    2011-01-01

    Modern hearing aids use a variety of advanced digital signal processing methods in order to improve speech intelligibility. These methods are based on knowledge about the acoustics outside the ear as well as psychoacoustics. We present a novel observation based on the fact that acoustic prominence is not equal to information prominence for time intervals at the syllabic and sub-syllabic levels. The idea is that speech elements with a high degree of information can be robustly identified based...

  10. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    to business speeches. Consistent with the public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences...... between Steve Jobs and Mark Zuckerberg and the investor- and customer-related sections of their speeches support the modern understanding of charisma as a gradual, multiparametric, and context-sensitive concept....

  11. Yaounde French Speech Corpus

    Science.gov (United States)

    2017-03-01

    The Yaounde French Speech (YFS) corpus contains speech... Elicitation prompts used in the collection include questions such as "Have you ever had any broken bones?", "What bones have you broken?", "Do you have worms?", and "Do you have malaria?"

  12. Speech perception and production

    Science.gov (United States)

    Casserly, Elizabeth D.; Pisoni, David B.

    2012-01-01

    Until recently, research in speech perception and speech production has largely focused on the search for psychological and phonetic evidence of discrete, abstract, context-free symbolic units corresponding to phonological segments or phonemes. Despite this common conceptual goal and intimately related objects of study, however, research in these two domains of speech communication has progressed more or less independently for more than 60 years. In this article, we present an overview of the foundational works and current trends in the two fields, specifically discussing the progress made in both lines of inquiry as well as the basic fundamental issues that neither has been able to resolve satisfactorily so far. We then discuss theoretical models and recent experimental evidence that point to the deep, pervasive connections between speech perception and production. We conclude that although research focusing on each domain individually has been vital in increasing our basic understanding of spoken language processing, the human capacity for speech communication is so complex that gaining a full understanding will not be possible until speech perception and production are conceptually reunited in a joint approach to problems shared by both modes. PMID:23946864

  13. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many children with ANSD. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014

  14. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating
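
    As a rough single-band illustration of the SNRenv idea (the sEPSM itself operates per modulation filter after auditory preprocessing), one can compare the envelope power of the noisy speech with that of the noise alone; this simplification is ours, not the authors':

        import numpy as np
        from scipy.signal import hilbert

        def envelope_power(x):
            # AC power of the Hilbert envelope, normalized by its DC power --
            # a crude stand-in for per-modulation-band envelope power.
            env = np.abs(hilbert(x))
            return np.var(env) / np.mean(env) ** 2

        def snr_env_db(noisy_speech, noise):
            # SNRenv: envelope power of the noisy speech in excess of the
            # noise-alone envelope power, relative to the latter.
            p_sn = envelope_power(noisy_speech)
            p_n = envelope_power(noise)
            return 10.0 * np.log10(max(p_sn - p_n, 1e-10) / p_n)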

  15. Musician advantage for speech-on-speech perception.

    Science.gov (United States)

    Başkent, Deniz; Gaudrain, Etienne

    2016-03-01

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level auditory cognitive functions, such as attention. Indeed, despite the few non-musicians who performed as well as musicians, on a group level, there was a strong musician benefit for speech perception in a speech masker. This benefit does not seem to result from better voice processing and could instead be related to better stream segregation or enhanced cognitive functions.

  16. Computer-based speech therapy for childhood speech sound disorders.

    Science.gov (United States)

    Furlong, Lisa; Erickson, Shane; Morris, Meg E

    2017-07-01

    With the current worldwide workforce shortage of Speech-Language Pathologists, new and innovative ways of delivering therapy to children with speech sound disorders are needed. Computer-based speech therapy may be an effective and viable means of addressing service access issues for children with speech sound disorders. To evaluate the efficacy of computer-based speech therapy programs for children with speech sound disorders. Studies reporting the efficacy of computer-based speech therapy programs were identified via a systematic, computerised database search. Key study characteristics, results, main findings and details of computer-based speech therapy programs were extracted. The methodological quality was evaluated using a structured critical appraisal tool. 14 studies were identified and a total of 11 computer-based speech therapy programs were evaluated. The results showed that computer-based speech therapy is associated with positive clinical changes for some children with speech sound disorders. There is a need for collaborative research between computer engineers and clinicians, particularly during the design and development of computer-based speech therapy programs. Evaluation using rigorous experimental designs is required to understand the benefits of computer-based speech therapy. The reader will be able to 1) discuss how computer-based speech therapy has the potential to improve service access for children with speech sound disorders, 2) explain the ways in which computer-based speech therapy programs may enhance traditional tabletop therapy and 3) compare the features of computer-based speech therapy programs designed for different client populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  18. Audiometria de alta freqüência em adultos jovens e mais velhos quando a audiometria convencional é normal High-frequency audiometry in young and older adults when conventional audiometry is normal

    Directory of Open Access Journals (Sweden)

    Isabella Monteiro de Castro Silva

    2006-10-01

    Full Text Available High-frequency audiometry can detect early changes in auditory sensitivity resulting from processes such as aging. Nonetheless its use is still limited, and additional studies are required to establish its performance, particularly among older adults. AIM: To compare pure-tone thresholds for frequencies from 250 Hz to 16 kHz in normal-hearing young and older adults, with or without audiologic complaints. METHOD: Pure-tone sensitivity from 250 Hz to 16 kHz was assessed with an AC-40 audiometer in 64 adults, evenly distributed between young (25 to 35 years old) and older (45 to 55 years old) adults of both sexes, in a cross-sectional study. RESULTS: Older adults showed higher thresholds at all frequencies, most markedly at the higher frequencies (8 to 16 kHz), when compared with young adults. Men showed higher thresholds than women between 3 and 10 kHz. CONCLUSION: The auditory aging process, involving loss of sensitivity to high frequencies, can be detected at earlier ages than those typically investigated; high-frequency audiometry proved to be an important instrument for distinguishing auditory sensitivity between audiologically normal young and older adults.

  19. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  20. Hearing performance in single-sided deaf cochlear implant users after upgrade to a single-unit speech processor.

    Science.gov (United States)

    Mertens, Griet; Hofkens, Anouk; Punte, Andrea Kleine; De Bodt, Marc; Van de Heyning, Paul

    2015-01-01

    Single-sided deaf (SSD) patients report multiple benefits after cochlear implantation (CI), such as tinnitus suppression, speech perception, and sound localization. The first single-unit speech processor, the RONDO, was launched recently. Both the RONDO and the well-known behind-the-ear (BTE) speech processor work on the same audio processor platform. However, in contrast to the BTE, the microphone placement on the RONDO is different. The aim of this study was to evaluate hearing performance using the BTE speech processor versus the single-unit speech processor. Subjective and objective outcomes in SSD CI patients with a BTE speech processor and a single-unit speech processor, with particular focus on spatial hearing, were compared. Ten adults with unilateral incapacitating tinnitus resulting from ipsilateral sensorineural deafness were enrolled in the study. The mean age at enrollment in the study was 56 (standard deviation, 13) years. The subjects were cochlear implanted at a mean age of 48 (standard deviation, 14) years and had on average 8 years' experience with their CI (range, 4-11 yr). At the first test interval (T0), testing was conducted using the subject's BTE speech processor, with which they were already familiar. Aided free-field audiometry, speech reception in noise, and sound localization testing were performed. Self-administered questionnaires on subjective evaluation consisted of HISQUI-NL, SSQ5, SHQ, and a Visual Analogue Scale to assess tinnitus loudness and disturbance. All 10 subjects were upgraded to the single-unit processor and retested after 28 days (T28) with the same fitting map. At T28, an additional single-unit questionnaire was administered to determine qualitative experiences and the effect of the position of the microphone on the new speech processor. Equal hearing outcomes were found between the single-unit and BTE speech processors: median PTA(single-unit) (0.5, 1, 2 kHz) = 40 (range, 33-48) dB HL; median Speech Reception

  1. Automatic speech recognition An evaluation of Google Speech

    OpenAIRE

    Stenman, Magnus

    2015-01-01

    The use of speech recognition is increasing rapidly and is now available in smart TVs, desktop computers, every new smart phone, etc. allowing us to talk to computers naturally. With the use in home appliances, education and even in surgical procedures accuracy and speed becomes very important. This thesis aims to give an introduction to speech recognition and discuss its use in robotics. An evaluation of Google Speech, using Google’s speech API, in regards to word error rate and translation ...
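
    Word error rate, the headline metric in such evaluations, is conventionally computed as a word-level Levenshtein distance normalized by the reference length; a self-contained sketch:

        def word_error_rate(reference, hypothesis):
            # WER = (substitutions + deletions + insertions) / reference words,
            # computed with the classic Levenshtein dynamic program.
            ref, hyp = reference.split(), hypothesis.split()
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                  d[i][j - 1] + 1,         # insertion
                                  d[i - 1][j - 1] + cost)  # substitution/match
            return d[len(ref)][len(hyp)] / len(ref)

        print(word_error_rate("turn on the light", "turn of the light"))  # 0.25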

  2. The Galker test of speech reception in noise; associations with background variables, middle ear status, hearing, and language in Danish preschool children.

    Science.gov (United States)

    Lauritsen, Maj-Britt Glenn; Söderström, Margareta; Kreiner, Svend; Dørup, Jens; Lous, Jørgen

    2016-01-01

    We tested "the Galker test", a speech reception in noise test developed for primary care for Danish preschool children, to explore whether the children's ability to hear and understand speech was associated with gender, age, middle ear status, and the level of background noise. The Galker test is a 35-item audio-visual, computerized word discrimination test in background noise. Included were 370 normally developed children attending day care centers. The children were examined with the Galker test, tympanometry, audiometry, and the Reynell test of verbal comprehension. Parents and daycare teachers completed questionnaires on the children's ability to hear and understand speech. As most of the variables were not assessed using interval scales, non-parametric statistics (Goodman-Kruskal's gamma) were used for analyzing associations with the Galker test score. For comparisons, analysis of variance (ANOVA) was used. Interrelations were adjusted for using a non-parametric graphic model. In unadjusted analyses, the Galker test was associated with gender, age group, language development (Reynell revised scale), audiometry, and tympanometry. The Galker score was also associated with the parents' and day care teachers' reports on the children's vocabulary, sentence construction, and pronunciation. Type B tympanograms were associated with a mean hearing level 5-6 dB below that of type A, C1, or C2. In the graphic analysis, Galker scores were closely and significantly related to Reynell test scores (gamma (G) = 0.35), the children's age group (G = 0.33), and the day care teachers' assessment of the children's vocabulary (G = 0.26). The Galker test of speech reception in noise appears promising as an easy and quick tool for evaluating preschool children's understanding of spoken words in noise, and it correlated well with the day care teachers' reports and less with the parents' reports. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
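
    Goodman-Kruskal's gamma, the association measure used here, is the excess of concordant over discordant pairs relative to their total; a small sketch with invented ordinal scores:

        from itertools import combinations

        def goodman_kruskal_gamma(x, y):
            # gamma = (C - D) / (C + D); tied pairs are ignored, as in the
            # standard definition of the statistic.
            concordant = discordant = 0
            for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
                s = (x1 - x2) * (y1 - y2)
                if s > 0:
                    concordant += 1
                elif s < 0:
                    discordant += 1
            return (concordant - discordant) / (concordant + discordant)

        # Invented ordinal scores: Galker category vs. Reynell category.
        print(goodman_kruskal_gamma([1, 2, 2, 3, 4], [1, 1, 2, 3, 3]))  # 1.0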

  3. Differential Diagnosis of Severe Speech Disorders Using Speech Gestures

    Science.gov (United States)

    Bahr, Ruth Huntley

    2005-01-01

    The differentiation of childhood apraxia of speech from severe phonological disorder is a common clinical problem. This article reports on an attempt to describe speech errors in children with childhood apraxia of speech on the basis of gesture use and acoustic analyses of articulatory gestures. The focus was on the movement of articulators and…

  4. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  5. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  6. Maria Montessori on Speech Education

    Science.gov (United States)

    Stern, David A.

    1973-01-01

    Montessori's theory of education, as related to speech communication skills learning, is explored for insights into speech and language acquisition, pedagogical procedure for teaching spoken vocabulary, and the educational environment which encourages children's free interaction and confidence in communication. (CH)

  7. Development of a speech autocuer

    Science.gov (United States)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; Mccartney, M. L.

    1980-01-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  8. Silent Speech and Silent Reading

    Science.gov (United States)

    Chopra, Pran

    1971-01-01

    Examines the possibility of a relationship between the level of intelligence of the reader and the amount of silent speech as well as between the occurrence of silent speech and the level of comprehension. Charts, tables, statistics. (Author/RB)

  9. Why Go to Speech Therapy?

    Science.gov (United States)


  10. Automatic speech recognition systems

    Science.gov (United States)

    Catariov, Alexandru

    2005-02-01

    This paper presents an analysis of automatic speech recognition (ASR) to establish the state of the art in the field and, eventually, to serve as a starting point for the implementation of a real ASR system. The second chapter describes the structure of a typical speech recognition system and the methods used at each step of the recognition process; in particular, two kinds of speech recognition algorithms are described, namely Dynamic Time Warping (DTW) and Hidden Markov Models (HMM). The work continues with some ASR results, in order to draw conclusions about what needs to be improved and which approach is more suitable for implementing an ASR system.
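
    Of the two algorithms named, DTW is compact enough to sketch; a minimal Python implementation with Euclidean local cost follows (the toy feature frames are illustrative assumptions, not taken from the paper):

        import numpy as np

        def dtw_distance(a, b):
            """Classic dynamic time warping distance between two feature
            sequences a (n x d) and b (m x d)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j],      # insertion
                                         D[i, j - 1],      # deletion
                                         D[i - 1, j - 1])  # match
            return D[n, m]

        # toy 2-D feature frames standing in for, e.g., MFCC vectors
        ref = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
        test = np.array([[0.1, 0.9], [0.9, 1.1], [1.1, 1.0], [2.0, 0.1]])
        print(dtw_distance(ref, test))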

  11. RECOGNISING SPEECH ACTS

    Directory of Open Access Journals (Sweden)

    Phyllis Kaburise

    2012-09-01

    Full Text Available Speech Act Theory (SAT, a theory in pragmatics, is an attempt to describe what happens during linguistic interactions. Inherent within SAT is the idea that language forms and intentions are relatively formulaic and that there is a direct correspondence between sentence forms (for example, in terms of structure and lexicon and the function or meaning of an utterance. The contention offered in this paper is that when such a correspondence does not exist, as in indirect speech utterances, this creates challenges for English second language speakers and may result in miscommunication. This arises because indirect speech acts allow speakers to employ various pragmatic devices such as inference, implicature, presuppositions and context clues to transmit their messages. Such devices, operating within the non-literal level of language competence, may pose challenges for ESL learners.

  12. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Information is carried in changes of a signal. The paper starts with revisiting Dudley's concept of the carrier nature of speech. It points to its close connection to modulation spectra of speech and argues against short-term spectral envelopes as dominant carriers of the linguistic information in speech. The history of spectral ...

  13. "Zero Tolerance" for Free Speech.

    Science.gov (United States)

    Hils, Lynda

    2001-01-01

    Argues that school policies of "zero tolerance" of threatening speech may violate a student's First Amendment right to freedom of expression if speech is less than a "true threat." Suggests a two-step analysis to determine if student speech is a "true threat." (PKP)

  14. The University and Free Speech

    OpenAIRE

    Grcic, Joseph

    2014-01-01

    Free speech is a necessary condition for the growth of knowledge and the implementation of real and rational democracy. Educational institutions play a central role in socializing individuals to function within their society. Academic freedom is the right to free speech in the context of the university and tenure, properly interpreted, is a necessary component of protecting academic freedom and free speech.

  15. Speech Communication and Signal Processing

    Indian Academy of Sciences (India)

    Communicating with a machine in a natural mode such as speech brings out not only several technological challenges, but also limitations in our understanding of how people communicate so effortlessly. The key is to understand the distinction between speech processing (as is done in human communication) and speech ...

  16. Speech Acts and Rhetorical Action.

    Science.gov (United States)

    Steinmann, Martin, Jr.

    Some authorities wish to define rhetoric persuasively as formal or public speeches, while others hold to a more traditional and broader definition, pointing to the relevance of speech-act theory to support them. Central to speech-act theory is the illocutionary act (uttering at least one sentence of some language under certain conditions). Two…

  17. Speech Acts and Conversational Interaction.

    Science.gov (United States)

    Geis, Michael L.

    This book unites speech act theory and conversation analysis to advance a theory of conversational competence, called the Dynamic Speech Act Theory (DSAT). In contrast to traditional speech act theory that focuses almost exclusively on intuitive assessments of isolated, constructed examples, this theory is predicated on the assumption that speech…

  18. If Speech is Your Problem. . . .

    Science.gov (United States)

    Community and Junior College Journal, 1975

    1975-01-01

    Chabot College has developed a new speech therapy program including individual and group remedial instruction for students with speech problems as well as course work for those students seeking information about the field of communicative disorders. The program operates in a newly developed speech and hearing center. (DC)

  19. Active Duty-U.S. Army Noise Induced Hearing Injury Quarterly Surveillance Q3 2011 thru Q4 2013

    Science.gov (United States)

    2014-06-30

    analysis. Source: Defense Medical Surveillance System (DMSS). Prepared by Armed Forces Health Surveillance Center (AFHSC). RESULTS: Results are shown... CPT codes used in the data summaries: 92552, pure tone audiometry (threshold), air only; 92555, speech audiometry threshold; 92556, speech audiometry threshold with speech recognition; 92557, comprehensive audiometry threshold evaluation and speech...

  20. Speech & Language Therapy for Children and Adolescents with Down Syndrome

    Science.gov (United States)

    Speech and language development can be challenging for children with Down syndrome; this NDSS resource describes how speech-language therapy helps children and adolescents progress in speech and language.

  1. ONT: Speech Communication.

    Science.gov (United States)

    1987-02-27

    nasality accent by which all important speech sounds are characterized as nasal or non-nasal (4). In some languages, such as Hindi or Gujarati (5), some... so far by Professor Stevens and his group are Hindi, Gujarati, Bengali, Portuguese, English, and French. French listeners in the perceptual study

  2. Speech therapy after thyroidectomy.

    Science.gov (United States)

    Yu, Wing-Hei Viola; Wu, Che-Wei

    2017-10-01

    Common complaints of patients who have received thyroidectomy include dysphonia (voice dysfunction) and dysphagia (difficulty swallowing). One cause of these surgical outcomes is recurrent laryngeal nerve paralysis. Many studies have discussed the effectiveness of speech therapy (e.g., voice therapy and dysphagia therapy) for improving dysphonia and dysphagia, but not specifically in patients who have received thyroidectomy. Therefore, the aim of this paper was to discuss issues regarding speech therapy such as voice therapy and dysphagia for patients after thyroidectomy. Another aim was to review the literature on speech therapy for patients with recurrent laryngeal nerve paralysis after thyroidectomy. Databases used for the literature review in this study included, PubMed, MEDLINE, Academic Search Primer, ERIC, CINAHL Plus, and EBSCO. The articles retrieved by database searches were classified and screened for relevance by using EndNote. Of the 936 articles retrieved, 18 discussed "voice assessment and thyroidectomy", 3 discussed "voice therapy and thyroidectomy", and 11 discussed "surgical interventions for voice restoration after thyroidectomy". Only 3 studies discussed topics related to "swallowing function assessment/treatment and thyroidectomy". Although many studies have investigated voice changes and assessment methods in thyroidectomy patients, few recent studies have investigated speech therapy after thyroidectomy. Additionally, some studies have addressed dysphagia after thyroidectomy, but few have discussed assessment and treatment of dysphagia after thyroidectomy.

  3. Expectations and speech intelligibility.

    Science.gov (United States)

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.

  4. Speech After Banquet

    Science.gov (United States)

    Yang, Chen Ning

    2013-05-01

    I am usually not so short of words, but the previous speeches have rendered me really speechless. I have known and admired the eloquence of Freeman Dyson, but I did not know that there is a hidden eloquence in my colleague George Sterman...

  5. Hearing speech in music

    Directory of Open Access Journals (Sweden)

    Seth-Reino Ekström

    2011-01-01

    Full Text Available The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  6. Speech Understanding Systems

    Science.gov (United States)

    1975-08-01

    word may occur. In addition, it negates the need to store separately a parameterization for each entry in the lexicon. Time normalization is done... "Minimum Prediction Residual Principle Applied to Speech Recognition," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. ASSP-23. Schwartz, R

  7. Speech is Golden

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter

    2014-01-01

    Most of the Danish municipalities are ready to begin to adopt automatic speech recognition, but at the same time remain nervous following a long series of bad business cases in the recent past. Complaints are voiced over costly licences and low service levels, typical effects of a de facto monopoly...

  8. Affordable headphones for accessible screening audiometry: An evaluation of the Sennheiser HD202 II supra-aural headphone.

    Science.gov (United States)

    Van der Aerschot, Mathieu; Swanepoel, De Wet; Mahomed-Asmail, Faheema; Myburgh, Herman Carel; Eikelboom, Robert Henry

    2016-11-01

    Evaluation of the Sennheiser HD 202 II supra-aural headphones as an alternative headphone to enable more affordable hearing screening. Study 1 measured the equivalent threshold sound pressure levels (ETSPLs) of the Sennheiser HD 202 II. Study 2 evaluated the attenuation of the headphones. Study 3 determined headphone characteristics by analyzing the total harmonic distortion (THD), frequency response, and force of the headband. Twenty-five participants were included in study 1 and 15 in study 2, with ages ranging between 18 and 25 years. No participants were involved in study 3. The Sennheiser HD 202 II ETSPLs (250-16000 Hz) showed no significant effects of ear laterality, gender, or age. Attenuation was not significantly different (p > 0.01) from that of the TDH 39, except at 8000 Hz (p < 0.01), and THD was < 3%. Sennheiser HD 202 II supra-aural headphones can be used as an affordable headphone for screening audiometry, provided reported MPANLs, maximum intensities, and ETSPL values are employed.
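
    The THD figure above can be read off an FFT of a recorded test tone; the following is a minimal sketch, where the window, harmonic count, and synthetic check signal are assumptions rather than the study's measurement chain:

        import numpy as np

        def thd_percent(signal, fs, f0, n_harmonics=5):
            """Total harmonic distortion (%) of a test tone: RMS of the
            harmonic amplitudes relative to the fundamental amplitude."""
            win = np.hanning(len(signal))
            spectrum = np.abs(np.fft.rfft(signal * win))
            freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
            amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
            harmonics = [amp(k * f0) for k in range(2, n_harmonics + 2)]
            return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / amp(f0)

        # sanity check: 1 kHz tone with a 2% second harmonic
        fs = 44100
        t = np.arange(fs) / fs
        tone = np.sin(2 * np.pi * 1000 * t) + 0.02 * np.sin(2 * np.pi * 2000 * t)
        print(round(thd_percent(tone, fs, 1000), 2))   # ~2.0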

  9. Relationship between pure tone audiometry and tone burst auditory brainstem response at low frequencies gated with Blackman window.

    Science.gov (United States)

    Canale, Andrea; Dagna, Federico; Lacilla, Michelangelo; Piumetto, Elena; Albera, Roberto

    2012-03-01

    To assess the reliability of Blackman-windowed tone burst auditory brainstem response (ABR) as a predictor of hearing threshold at low frequencies. Fifty-six subjects were divided into three groups (normal hearing, conductive hearing loss, sensorineural hearing loss) after pure tone audiometry (PTA) testing. They then underwent tone burst ABR using Blackman-windowed stimuli at 0.5 kHz and 1 kHz. Results were compared with the PTA threshold. Mean threshold differences between PTA and ABR ranged between 11 dB at 0.5 kHz and 14 dB at 1 kHz. The ABR threshold was worse than PTA in all but 2 cases. The mean discrepancy between the two thresholds was about 20 dB in normal hearing, decreasing in the presence of hearing loss, with no difference between conductive and sensorineural cases. Tone burst ABR is a good predictor of hearing threshold at low frequencies in cases of suspected hearing loss. Further studies are recommended to evaluate ipsilateral masking such as notched noise to ensure greater frequency specificity.
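
    For orientation, a Blackman-gated tone burst like the stimuli referred to above can be generated in a few lines; the sampling rate and cycle count below are illustrative assumptions:

        import numpy as np

        def blackman_tone_burst(f0, n_cycles, fs):
            """Tone burst gated by a full Blackman window, a common way to
            obtain frequency-specific low-frequency ABR stimuli."""
            n = int(round(n_cycles * fs / f0))   # burst duration in samples
            t = np.arange(n) / fs
            return np.blackman(n) * np.sin(2 * np.pi * f0 * t)

        # e.g. a 5-cycle 500 Hz burst at a 48 kHz sampling rate
        burst = blackman_tone_burst(500, 5, 48000)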

  10. Reconstruction of speech from whispers.

    Science.gov (United States)

    Morris, Robert W; Clements, Mark A

    2002-01-01

    This paper investigates a method for the real-time reconstruction of normal speech from whispers. This system could be used by aphonic individuals as a voice prosthesis. It could also provide improved verbal communication when normal speech is not appropriate. The normal speech is synthesized using the mixed excitation linear prediction model. Differences between whispered and phonated speech are discussed, and methods for estimating the parameters of this model from whispered speech for real-time synthesis are proposed. These include smoothing the noisy linear prediction spectra, modifying the formants, and synthesizing the excitation signal. Trade-offs between computational complexity, delay, and accuracy of the different methods are discussed.
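
    The "linear prediction spectra" mentioned above come from LPC analysis; below is a minimal sketch of the standard autocorrelation method with the Levinson-Durbin recursion (the windowing and order are illustrative, and this is not the paper's MELP implementation):

        import numpy as np

        def lpc_coefficients(frame, order=12):
            """LPC coefficients of one (nonzero) speech frame via the
            autocorrelation method and the Levinson-Durbin recursion."""
            frame = frame * np.hamming(len(frame))
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
                k = -acc / err               # reflection coefficient
                a_prev = a.copy()
                for j in range(1, i):
                    a[j] = a_prev[j] + k * a_prev[i - j]
                a[i] = k
                err *= (1.0 - k * k)         # residual prediction error
            return a, err

    The polynomial a describes the spectral envelope (the formants) that a whisper-to-speech system of this kind smooths and modifies before resynthesis.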

  11. THE PRESENCE OF ADENOID VEGETATIONS AND NASAL SPEECH, AND HEARING LOSS IN RELATION TO SECRETORY OTITIS MEDIA

    Directory of Open Access Journals (Sweden)

    Gabriela KOPACHEVA

    2004-12-01

    Full Text Available This study presents the treatment of 68 children with secretory otitis media. The children presented with adenoid vegetations, nasal speech, conductive hearing loss, and ventilation disturbance of the Eustachian tube. In all children adenoidectomy was indicated. 38 boys and 30 girls at the age of 3-17 were divided into two main groups: * 29 children without hypertrophic (enlarged) adenoids, * 39 children with enlarged (hypertrophic) adenoids. The surgical treatment included insertion of ventilation tubes, and adenoidectomy where there were hypertrophic adenoids. The clinical material was analyzed according to hearing threshold, hearing level, and middle ear condition estimated by pure tone audiometry and tympanometry before and after treatment. Data concerning both groups were compared. The results indicated that adenoidectomy combined with ventilation tubes facilitates the healing of secretory otitis media as well as a decrease in hearing impairment. This enables prompt restoration of hearing function, an important precondition for children's language, social, emotional and academic development.

  12. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300-some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  13. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  14. Conversation, speech acts, and memory.

    Science.gov (United States)

    Holtgraves, Thomas

    2008-03-01

    Speakers frequently have specific intentions that they want others to recognize (Grice, 1957). These specific intentions can be viewed as speech acts (Searle, 1969), and I argue that they play a role in long-term memory for conversation utterances. Five experiments were conducted to examine this idea. Participants in all experiments read scenarios ending with either a target utterance that performed a specific speech act (brag, beg, etc.) or a carefully matched control. Participants were more likely to falsely recall and recognize speech act verbs after having read the speech act version than after having read the control version, and the speech act verbs served as better recall cues for the speech act utterances than for the controls. Experiment 5 documented individual differences in the encoding of speech act verbs. The results suggest that people recognize and retain the actions that people perform with their utterances and that this is one of the organizing principles of conversation memory.

  15. Relationship between speech motor control and speech intelligibility in children with speech sound disorders.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Pukonen, Margit; Goshulak, Debra; Yu, Vickie Y; Kadis, Darren S; Kroll, Robert; Pang, Elizabeth W; De Nil, Luc F

    2013-01-01

    The current study was undertaken to investigate the impact of speech motor issues on the speech intelligibility of children with moderate to severe speech sound disorders (SSD) within the context of the PROMPT intervention approach. The word-level Children's Speech Intelligibility Measure (CSIM), the sentence-level Beginner's Intelligibility Test (BIT) and tests of speech motor control and articulation proficiency were administered to 12 children (3:11 to 6:7 years) before and after PROMPT therapy. PROMPT treatment was provided for 45 min twice a week for 8 weeks. Twenty-four naïve adult listeners aged 22-46 years judged the intelligibility of the words and sentences. For CSIM, each time a recorded word was played to the listeners they were asked to look at a list of 12 words (multiple-choice format) and circle the word while for BIT sentences, the listeners were asked to write down everything they heard. Words correctly circled (CSIM) or transcribed (BIT) were averaged across three naïve judges to calculate percentage speech intelligibility. Speech intelligibility at both the word and sentence level was significantly correlated with speech motor control, but not articulatory proficiency. Further, the severity of speech motor planning and sequencing issues may potentially be a limiting factor in connected speech intelligibility and highlights the need to target these issues early and directly in treatment. The reader will be able to: (1) outline the advantages and disadvantages of using word- and sentence-level speech intelligibility tests; (2) describe the impact of speech motor control and articulatory proficiency on speech intelligibility; and (3) describe how speech motor control and speech intelligibility data may provide critical information to aid treatment planning. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. [Hearing Loss and Speech Recognition in the Elderly].

    Science.gov (United States)

    von Gablenz, Petra; Holube, Inga

    2017-11-01

    Elderly people often complain about poor speech understanding in noisy environments. In clinical practice, speech tests in noise are used to examine hearing ability. The HÖRSTAT study, conducted on a population-based random sample of 1903 adults, used the Goettingen sentence test (GÖSA) in noise along with pure-tone audiometry. Hearing impairment was defined as a pure-tone average at 0.5, 1, 2 and 4 kHz (PTA-4) greater than 25 dB HL in the better ear (WHO criterion). As expected, pure-tone thresholds and speech recognition thresholds (SRT) in the GÖSA worsened steadily with age. For a comparison of PTA-4, SRT(GÖSA) and self-reported hearing, the analysis was limited to 553 adults aged 60-85 years with a PTA-4 below 50 dB HL and SRTs measured at a constant 65 dB SPL noise level. The percentage of hearing-impaired persons increased from 13% among the 60-65-year-olds to 60% among those aged 80-85 years. Overall, 68% of the 60-85-year-old adults had normal hearing according to the WHO criterion. The SRT(GÖSA) of 66% of the elderly adults with normal hearing, however, did not lie within the reference range established with young normal-hearing subjects in the HÖRSTAT study (4.8 ± 1.8 dB SNR; mean ± 2 standard deviations). Among the 553 elderly participants, only 24% reached this reference range. PTA-4 and SRT(GÖSA) results showed moderate to good correlations (Pearson r = 0.562; within 5-year bands: 0.372-0.514). From PTA-4 ≥ 30 dB HL and SRT(GÖSA) ≥ -2 dB SNR, respectively, more than half of the subjects reported hearing difficulties. Despite the continuous decline of PTA-4 and SRT(GÖSA) with age, the proportion of self-reported hearing difficulties as well as the self-rated hearing ability score stagnated. From the age of 70 years onwards, the elderly in the HÖRSTAT sample tend to overestimate their hearing abilities and to underestimate their difficulties. Georg Thieme Verlag KG Stuttgart · New York.
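
    The PTA-4 and the WHO-style cutoff used above are simple arithmetic; a minimal sketch with hypothetical thresholds:

        def pta4(t500, t1k, t2k, t4k):
            """Pure-tone average (dB HL) at 0.5, 1, 2 and 4 kHz."""
            return (t500 + t1k + t2k + t4k) / 4.0

        def who_hearing_impaired(better_ear_pta4):
            """WHO-style criterion used above: PTA-4 > 25 dB HL, better ear."""
            return better_ear_pta4 > 25.0

        # hypothetical better-ear thresholds in dB HL
        avg = pta4(20, 25, 30, 45)               # 30.0
        print(avg, who_hearing_impaired(avg))    # 30.0 True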

  17. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    Science.gov (United States)

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
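
    A linear forward mapping of the kind described can be estimated by regressing the EEG on time-lagged copies of the speech envelope; here is a minimal single-channel ridge-regression sketch, where the lag window and regularization are assumptions rather than the authors' exact pipeline:

        import numpy as np

        def forward_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, ridge=1.0):
            """Estimate a forward temporal response function mapping a speech
            envelope onto one EEG channel via ridge regression."""
            lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
            # design matrix: one column per time-lagged envelope copy
            # (np.roll wraps at the edges, acceptable for a sketch)
            X = np.column_stack([np.roll(envelope, lag) for lag in lags])
            w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)),
                                X.T @ eeg)
            return lags / fs, w   # TRF weights as a function of lag (s)

        # usage sketch: lags_s, weights = forward_trf(env, eeg, fs=64.0)

    The latency at which the envelope is represented in the cortex can then be read off the lag of the strongest TRF weight.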

  18. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  19. Sensorimotor Interactions in Speech Learning

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    2011-10-01

    Full Text Available Auditory input is essential for normal speech development and plays a key role in speech production throughout the life span. In traditional models, auditory input plays two critical roles: (1) establishing the acoustic correlates of speech sounds that serve, in part, as the targets of speech production, and (2) as a source of feedback about a talker's own speech outcomes. This talk will focus on both of these roles, describing a series of studies that examine the capacity of children and adults to adapt to real-time manipulations of auditory feedback during speech production. In one study, we examined sensory and motor adaptation to a manipulation of auditory feedback during production of the fricative “s”. In contrast to prior accounts, adaptive changes were observed not only in speech motor output but also in subjects' perception of the sound. In a second study, speech adaptation was examined following a period of auditory–perceptual training targeting the perception of vowels. The perceptual training was found to systematically improve subjects' motor adaptation response to altered auditory feedback during speech production. The results of both studies support the idea that perceptual and motor processes are tightly coupled in speech production learning, and that the degree and nature of this coupling may change with development.

  20. Headphone localization of speech

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1993-01-01

    Three-dimensional acoustic display systems have recently been developed that synthesize virtual sound sources over headphones based on filtering by head-related transfer functions (HRTFs), the direction-dependent spectral changes caused primarily by the pinnae. In this study, 11 inexperienced subjects judged the apparent spatial location of headphone-presented speech stimuli filtered with nonindividualized HRTFs. About half of the subjects 'pulled' their judgments toward either the median or the lateral-vertical planes, and estimates were almost always elevated. Individual differences were pronounced for the distance judgments; 15 to 46 percent of stimuli were heard inside the head, with the shortest estimates near the median plane. The results suggest that most listeners can obtain useful azimuth information from speech stimuli filtered by nonindividualized HRTFs. Measurements of localization error and reversal rates are comparable with a previous study that used broadband noise stimuli.

  1. Variation and Synthetic Speech

    CERN Document Server

    Miller, C; Massey, N; Miller, Corey; Karaali, Orhan; Massey, Noel

    1997-01-01

    We describe the approach to linguistic variation taken by the Motorola speech synthesizer. A pan-dialectal pronunciation dictionary is described, which serves as the training data for a neural network based letter-to-sound converter. Subsequent to dictionary retrieval or letter-to-sound generation, pronunciations are submitted to a neural network based postlexical module. The postlexical module has been trained on aligned dictionary pronunciations and hand-labeled narrow phonetic transcriptions. This architecture permits the learning of individual postlexical variation, and can be retrained for each speaker whose voice is being modeled for synthesis. Learning variation in this way can result in greater naturalness for the synthetic speech that is produced by the system.

  2. Annotating Speech Corpus for Prosody Modeling in Indian Language Text to Speech Systems

    OpenAIRE

    Kiruthiga S; Krishnamoorthy K

    2012-01-01

    A spoken language system, whether a speech synthesis or a speech recognition system, starts with building a speech corpus. We give a detailed survey of issues and a methodology for selecting the appropriate speech unit when building a speech corpus for Indian language Text to Speech systems. The paper ultimately aims to improve the intelligibility of the synthesized speech in Text to Speech synthesis systems. To begin with, an appropriate text file should be selected for building the s...

  3. Trainable Videorealistic Speech Animation

    Science.gov (United States)

    2006-01-01

    in movies; virtual avatars in chatrooms; very low bitrate coding schemes (such as MPEG4); and studies of visual speech production and perception. The... audiovisual corpus of a human subject uttering various utterances was recorded. Recording was performed at a TV studio against a blue "chroma-key"... lighting conditions, and 3) changes in viewpoint. All these limitations can be alleviated by extending our approach from 2D to 3D. It is possible to

  4. Hate Speech: Power in the Marketplace.

    Science.gov (United States)

    Harrison, Jack B.

    1994-01-01

    A discussion of hate speech and freedom of speech on college campuses examines what distinguishes hate speech from normal, objectionable interpersonal comments and looks at Supreme Court decisions on the limits of student free speech. Two cases specifically concerning regulation of hate speech on campus are considered: Chaplinsky v. New…

  5. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  6. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    An overview for parents of speech-language therapy for children with speech and/or language disorders, covering speech disorders, language disorders, and feeding disorders.

  7. IBM MASTOR SYSTEM: Multilingual Automatic Speech-to-speech Translator

    National Research Council Canada - National Science Library

    Gao, Yuqing; Gu, Liang; Zhou, Bowen; Sarikaya, Ruhi; Afify, Mohamed; Kuo, Hong-Kwang; Zhu, Wei-zhong; Deng, Yonggang; Prosser, Charles; Zhang, Wei

    2006-01-01

    .... Challenges include speech recognition and machine translation in adverse environments, lack of training data and linguistic resources for under-studied languages, and the need to rapidly develop...

  8. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who, with their current speech processor, scored at least 80% correct on the Oldenburg sentence test (OLSA) in quiet conditions and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in the signal-to-noise ratio for speech comprehension thresholds (i.e., the signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg...

  9. Neurophysiology of Speech Differences in Childhood Apraxia of Speech

    Science.gov (United States)

    Preston, Jonathan L.; Molfese, Peter J.; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes. PMID:25090016

  10. Speech rhythm: a metaphor?

    Science.gov (United States)

    Nolan, Francis; Jeon, Hae-Sung

    2014-12-19

    Is speech rhythmic? In the absence of evidence for a traditional view that languages strive to coordinate either syllables or stress-feet with regular time intervals, we consider the alternative that languages exhibit contrastive rhythm subsisting merely in the alternation of stronger and weaker elements. This is initially plausible, particularly for languages with a steep 'prominence gradient', i.e. a large disparity between stronger and weaker elements; but we point out that alternation is poorly achieved even by a 'stress-timed' language such as English, and, historically, languages have conspicuously failed to adopt simple phonological remedies that would ensure alternation. Languages seem more concerned to allow 'syntagmatic contrast' between successive units and to use durational effects to support linguistic functions than to facilitate rhythm. Furthermore, some languages (e.g. Tamil, Korean) lack the lexical prominence which would most straightforwardly underpin prominence of alternation. We conclude that speech is not incontestably rhythmic, and may even be antirhythmic. However, its linguistic structure and patterning allow the metaphorical extension of rhythm in varying degrees and in different ways depending on the language, and it is this analogical process which allows speech to be matched to external rhythms. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. An internet-based hearing test for simple audiometry in nonclinical settings: preliminary validation and proof of principle.

    Science.gov (United States)

    Honeth, Louise; Bexelius, Christin; Eriksson, Mikael; Sandin, Sven; Litton, Jan-Eric; Rosenhall, Ulf; Nyrén, Olof; Bagger-Sjöbäck, Dan

    2010-07-01

    To investigate the validity and reproducibility of a newly developed internet-based self-administered hearing test using clinical pure-tone air-conducted audiometry as gold standard. Cross-sectional intrasubject comparative study. Karolinska University Hospital, Solna, Sweden. Seventy-two participants (79% women) with mean age of 45 years (range, 19-71 yr). Twenty participants had impaired hearing according to the gold standard test. Hearing tests. The Pearson correlation coefficient between the results of the studied Internet-based hearing test and the gold standard test, the greatest mean differences in decibel between the 2 tests over tested frequencies, sensitivity and specificity to diagnose hearing loss defined by Heibel-Lidén, and test-retest reproducibility with the Pearson correlation coefficient. The Pearson correlation coefficient was 0.94 (p < 0.0001) for the right ear and 0.93 for the left (p = 0.0001). The greatest mean differences were seen for the frequencies 2 and 4 kHz, with -5.6 dB (standard deviation, 8.29), and -5.1 dB (standard deviation, 6.9), respectively. The 75th percentiles of intraindividual test-gold standard differences did not exceed -10 dB for any of the frequencies. The sensitivity for hearing loss was 75% (95% confidence interval, 51%-90%), and the specificity was 96% (95% confidence interval, 86%-99%). The test-retest reproducibility was excellent, with a Pearson correlation coefficient of 0.99 (p < 0.0001) for both ears. It is possible to assess hearing with reasonable accuracy using an Internet-based hearing test on a personal computer with headphones. The practical viability of self-administration in participants' homes needs further evaluation.
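
    The sensitivity and specificity reported above are ratios over the 2x2 table against the gold standard; the sketch below uses cell counts inferred to match the reported percentages (they are not stated explicitly in the record):

        def sensitivity_specificity(tp, fn, tn, fp):
            """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # 20 impaired and 52 normal-hearing participants; counts inferred
        sens, spec = sensitivity_specificity(tp=15, fn=5, tn=50, fp=2)
        print(round(sens, 2), round(spec, 2))   # 0.75 0.96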

  12. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden-Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.

  13. Speech and the Right Hemisphere

    Directory of Open Access Journals (Sweden)

    E. M. R. Critchley

    1991-01-01

    Full Text Available Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension even more so than of speech production—identifying the voice, its affective components, gestural interpretation and monitoring one's own speech—may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  14. Sensitivity of cortical auditory evoked potential detection for hearing-impaired infants in response to short speech sounds

    Directory of Open Access Journals (Sweden)

    Bram Van Dun

    2012-01-01

    Full Text Available

    Background: Cortical auditory evoked potentials (CAEPs are an emerging tool for hearing aid fitting evaluation in young children who cannot provide reliable behavioral feedback. It is therefore useful to determine the relationship between the sensation level of speech sounds and the detection sensitivity of CAEPs.

    Design and methods: Twenty-five sensorineurally hearing impaired infants with an age range of 8 to 30 months were tested once, 18 aided and 7 unaided. First, behavioral thresholds of speech stimuli /m/, /g/, and /t/ were determined using visual reinforcement orientation audiometry (VROA. Afterwards, the same speech stimuli were presented at 55, 65, and 75 dB SPL, and CAEP recordings were made. An automatic statistical detection paradigm was used for CAEP detection.

    Results: For sensation levels above 0, 10, and 20 dB respectively, detection sensitivities were equal to 72 ± 10, 75 ± 10, and 78 ± 12%. In 79% of the cases, automatic detection p-values became smaller when the sensation level was increased by 10 dB.

    Conclusions: The results of this study suggest that the presence or absence of CAEPs can provide some indication of the audibility of a speech sound for infants with sensorineural hearing loss. The detection of a CAEP provides confidence, to a degree commensurate with the detection probability, that the infant is detecting that sound at the level presented. When testing infants where the audibility of speech sounds has not been established behaviorally, the lack of a cortical response indicates the possibility, but by no means a certainty, that the sensation level is 10 dB or less.

  15. Sensitivity of cortical auditory evoked potential detection for hearing-impaired infants in response to short speech sounds

    Directory of Open Access Journals (Sweden)

    Bram Van Dun

    2012-08-01

    Full Text Available Cortical auditory evoked potentials (CAEPs are an emerging tool for hearing aid fitting evaluation in young children who cannot provide reliable behavioral feedback. It is therefore useful to determine the relationship between the sensation level of speech sounds and the detection sensitivity of CAEPs, which is the ratio between the number of detections and the sum of detections and non-detections. Twenty-five sensorineurally hearing impaired infants with an age range of 8 to 30 months were tested once, 18 aided and 7 unaided. First, behavioral thresholds of speech stimuli /m/, /g/, and /t/ were determined using visual reinforcement orientation audiometry. Afterwards, the same speech stimuli were presented at 55, 65, and 75 dB sound pressure level, and CAEPs were recorded. An automatic statistical detection paradigm was used for CAEP detection. For sensation levels above 0, 10, and 20 dB respectively, detection sensitivities were equal to 72±10, 75±10, and 78±12%. In 79% of the cases, automatic detection P-values became smaller when the sensation level was increased by 10 dB. The results of this study suggest that the presence or absence of CAEPs can provide some indication of the audibility of a speech sound for infants with sensorineural hearing loss. The detection of a CAEP might provide confidence, to a degree commensurate with the detection probability, that the infant is detecting that sound at the level presented. When testing infants where the audibility of speech sounds has not been established behaviorally, the lack of a cortical response indicates the possibility, but by no means a certainty, that the sensation level is 10 dB or less.
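
    Detection sensitivity as defined in this record is a plain ratio of counts; a minimal sketch with hypothetical numbers:

        def detection_sensitivity(detections, non_detections):
            """Ratio of detections to detections plus non-detections."""
            return detections / (detections + non_detections)

        # e.g. 75 CAEPs detected in 100 presentations above 10 dB sensation level
        print(detection_sensitivity(75, 25))   # 0.75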

  16. The intelligibility of pointillistic speech

    OpenAIRE

    Kidd, Gerald; Timothy M. Streeter; Ihlefeld, Antje; Maddox, Ross K.; Mason, Christine R.

    2009-01-01

    A form of processed speech is described that is highly discriminable in a closed-set identification format. The processing renders speech into a set of sinusoidal pulses played synchronously across frequency. The processing and results from several experiments are described. The number and width of frequency analysis channels and tone-pulse duration were variables. In one condition, various proportions of the tones were randomly removed. The processed speech was remarkably resilient to these ...

  17. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  18. Phonetic Consequences of Speech Disfluency

    National Research Council Canada - National Science Library

    Shriberg, Elizabeth E

    1999-01-01

    .... Analyses of American English show that disfluency affects a variety of phonetic aspects of speech, including segment durations, intonation, voice quality, vowel quality, and coarticulation patterns...

  19. Psychotic speech: a neurolinguistic perspective.

    Science.gov (United States)

    Anand, A; Wales, R J

    1994-06-01

    The existence of an aphasia-like language disorder in psychotic speech has been the subject of much debate. This paper argues that a discrete language disorder could be an important cause of the disturbance seen in psychotic speech. A review is presented of classical clinical descriptions and experimental studies that have explored the similarities between psychotic language impairment and aphasic speech. The paper proposes neurolinguistic tasks which may be used in future studies to elicit subtle language impairments in psychotic speech. The usefulness of a neurolinguistic model for further research in the aetiology and treatment of psychosis is discussed.

  20. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR in the mobile context, covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within command and control, text entry and search are presented with an emphasis on mobile text entry.

  1. The use of high-frequency audiometry increases the diagnosis of asymptomatic hearing loss in pediatric patients treated with cisplatin-based chemotherapy.

    Science.gov (United States)

    Abujamra, Ana Lucia; Escosteguy, Juliana Ribas; Dall'Igna, Celso; Manica, Denise; Cigana, Luciana Facchini; Coradini, Patrícia; Brunetto, André; Gregianin, Lauro José

    2013-03-01

    Cisplatin may cause permanent cochlear damage by changing cochlear frequency selectivity and can lead to irreversible sensorineural hearing loss. High-frequency audiometry (HFA) is able to assess hearing frequencies above 8,000 Hz; hence, it has been considered a high-quality method to monitor and diagnose early and asymptomatic signs of ototoxicity in patients receiving cisplatin. Forty-two pediatric patients were evaluated for hearing loss induced by cisplatin utilizing HFA, and its diagnostic efficacy was compared to that of standard pure-tone audiometry and distortion-product otoacoustic emissions (DPOAEs). The patient population consisted of those who signed an informed consent form and had received cisplatin chemotherapy between 1991 and 2008 at the Hospital de Clínicas de Porto Alegre Pediatric Unit, Brazil. Forty-two patients were evaluated. The median age at study assessment was 14.5 years (range 4-37 years). Hearing loss was detected in 24 patients (57%) at conventional frequencies. Alterations of DPOAEs were found in 64% of evaluated patients, and hearing loss was observed in 36 patients (86%) when the high-frequency test was added. The mean cisplatin dose was significantly higher (P = 0.046) for patients with hearing impairment at conventional frequencies. The results suggest that HFA is more effective than pure-tone audiometry and DPOAEs in detecting hearing loss, particularly at higher frequencies. It may be a useful tool for testing new otoprotective agents, besides serving as an early diagnostic method for detecting hearing impairment. Copyright © 2012 Wiley Periodicals, Inc.

  2. The motor theory of speech perception revisited

    National Research Council Canada - National Science Library

    Massaro, Dominic W; Chen, Trevor H

    2008-01-01

    .... We make the counter argument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described...

  3. Speech Recognition: How Do We Teach It?

    Science.gov (United States)

    Barksdale, Karl

    2002-01-01

    States that growing use of speech recognition software has made voice writing an essential computer skill. Describes how to present the topic, develop basic speech recognition skills, and teach speech recognition outlining, writing, proofreading, and editing. (Contains 14 references.) (SK)

  4. Speech and Language Problems in Children

    Science.gov (United States)

    Children vary in their development of speech and language skills. Health care professionals have lists of milestones ... it may be due to a speech or language disorder. Children who have speech disorders may have ...

  5. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Measurement of speech parameters in casual speech of dementia patients. Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). (1) CLCG, University of Groningen (NL); (2) HU University of Applied Sciences Utrecht (NL); (3) St. Marienhospital - Vechta, Geriatric Clinic Vechta

  6. [A method of speech donorship and speech discourse for the speech restoration in aphasia].

    Science.gov (United States)

    Rudnev, V A; Shteĭnerdt, V V

    2012-01-01

    The objective of the study was to evaluate the effectiveness of speech restoration in aphasia in outpatients using audiovisual samples of the speech of first-degree relatives of the patient, with subsequent transformation of the restoration into feedback with the patient's own audiovisual material (a method of speech donorship and speech discourse). We studied 53 outpatients with aphasia of different severity (28 patients with moderate, 12 with mild, and 13 with marked severity) that was pathogenetically associated with stroke or brain injury. We used the following algorithm of speech restoration: 1) work in the regime of biological feedback with the audiovisual sample of the speech of the close relative (days 7-14); 2) DVD recording of the patient's own speech and work with the patient's own audiovisual sample (days 14-21). Sessions were carried out twice a day. After the rehabilitation, there was a significant improvement (p < …) in speech function, including a decrease in the frequency of literal and verbal paraphasias and literal perseverations, as well as improvement in speech initiation and the nonverbal speech component (intonation and kinesthetic appearances). The results of the restoration were worse in patients with severe aphasia than in those with moderate and mild aphasia; for the latter patients the method was very effective.

  7. Speech production of preschoolers with cleft palate

    National Research Council Canada - National Science Library

    Hardin-Jones, Mary A; Jones, David L

    2005-01-01

    The present investigation was conducted to examine the prevalence of preschoolers with cleft palate who require speech therapy, demonstrate significant nasalization of speech, and produce compensatory articulations...

  8. Teaching Speech Acts

    Directory of Open Access Journals (Sweden)

    Teaching Speech Acts

    2007-01-01

    Full Text Available In this paper I argue that pragmatic ability must become part of what we teach in the classroom if we are to realize the goals of communicative competence for our students. I review the research on pragmatics, especially those articles that point to the effectiveness of teaching pragmatics in an explicit manner, and those that posit methods for teaching. I also note two areas of scholarship that address classroom needs—the use of authentic data and appropriate assessment tools. The essay concludes with a summary of my own experience teaching speech acts in an advanced-level Portuguese class.

  9. Methods of Teaching Speech Recognition

    Science.gov (United States)

    Rader, Martha H.; Bailey, Glenn A.

    2010-01-01

    Objective: This article introduces the history and development of speech recognition, addresses its role in the business curriculum, outlines related national and state standards, describes instructional strategies, and discusses the assessment of student achievement in speech recognition classes. Methods: Research methods included a synthesis of…

  10. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    vowels and consonants, and which converts the speech energy into visual particles that form complex visual structures, provides us with a means to present the expressiveness of speech in a visual mode. This system is presented in an artwork whose scenario is inspired by the reasons of language...

  11. Indirect speech acts in English

    OpenAIRE

    Василина, Владимир Николаевич

    2013-01-01

    The article deals with indirect speech acts in English-speaking discourse. Different approaches to their analysis and the reasons for their use are discussed. It is argued that the choice of the form of speech acts depends on the parameters of the communicative partners.

  12. Speech Prosody in Cerebellar Ataxia

    Science.gov (United States)

    Casper, Maureen A.; Raphael, Lawrence J.; Harris, Katherine S.; Geibel, Jennifer M.

    2007-01-01

    Persons with cerebellar ataxia exhibit changes in physical coordination and speech and voice production. Previously, these alterations of speech and voice production were described primarily via perceptual coordinates. In this study, the spatial-temporal properties of syllable production were examined in 12 speakers, six of whom were healthy…

  13. Confessions of a Speech Pathologist.

    Science.gov (United States)

    Alpern, Ramona Lenny

    1984-01-01

    A speech therapist discusses an approach that combines articulation and language skills beginning with sounds and progressing through multisensory activities to words, phrases, sentences, controlled conversation, and free-flowing conversation. The approach uses therapy based in speech therapy's historical foundations. (CL)

  14. Speech Restoration: An Interactive Process

    Science.gov (United States)

    Grataloup, Claire; Hoen, Michael; Veuillet, Evelyne; Collet, Lionel; Pellegrino, Francois; Meunier, Fanny

    2009-01-01

    Purpose: This study investigates the ability to understand degraded speech signals and explores the correlation between this capacity and the functional characteristics of the peripheral auditory system. Method: The authors evaluated the capability of 50 normal-hearing native French speakers to restore time-reversed speech. The task required them…

  15. Creating speech-synchronized animation.

    Science.gov (United States)

    King, Scott A; Parent, Richard E

    2005-01-01

    We present a facial model designed primarily to support animated speech. Our facial model takes facial geometry as input and transforms it into a parametric deformable model. The facial model uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. Our facial model has a highly deformable lip model that is grafted onto the input facial geometry to provide the necessary geometric complexity needed for creating lip shapes and high-quality renderings. Our facial model also includes a highly deformable tongue model that can represent the shapes the tongue undergoes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we hierarchically deform the facial surface. We also present a method to animate the facial model over time to create animated speech using a model of coarticulation that blends visemes together using dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing visemes as curves instead of keyframes. We show the utility of the techniques described in this paper by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized. The system then automatically creates accurate real-time animated speech from the input text. It is capable of cheaply producing tremendous amounts of animated speech with very low resource requirements.
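
    The coarticulation scheme described above (visemes blended over time by dominance functions) can be illustrated with a minimal sketch. The exponential dominance shape and all parameter values here are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def blend_visemes(t, centers, targets, tau=0.08):
        """Dominance-function blending: each viseme exerts an exponentially
        decaying influence around its center time; the articulatory parameter
        at time t is the dominance-weighted average of the viseme targets."""
        w = np.exp(-np.abs(t - np.asarray(centers)) / tau)  # dominance weights
        return np.sum(w * np.asarray(targets)) / np.sum(w)

    # Hypothetical lip-opening parameter halfway between an open /a/ (1.0)
    # and a closed /b/ (0.0); equal dominance gives the midpoint value.
    print(blend_visemes(0.15, centers=[0.10, 0.20], targets=[1.0, 0.0]))  # ~0.5
    ```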

  16. Perceptual Learning of Interrupted Speech

    NARCIS (Netherlands)

    Benard, Michel Ruben; Başkent, Deniz

    2013-01-01

    The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps intelligibility of degraded speech in daily life. Two hypotheses were investigated

  17. Non-speech oral motor treatment for developmental speech sound disorders in children (Review)

    OpenAIRE

    Lee, Alice S.; Gibbon, Fiona E.

    2015-01-01

    Background: Children with developmental speech sound disorders have difficulties in producing the speech sounds of their native language. These speech difficulties could be due to structural, sensory or neurophysiological causes (e.g. hearing impairment), but more often the cause of the problem is unknown. One treatment approach used by speech-language therapists/pathologists is non-speech oral motor treatment (NSOMT). NSOMTs are non-speech activities that aim to stimulate or improve speech pr...

  18. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia record... interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition accuracy. Finally, the last part of the thesis looks at the acceptance and success of a speech recognition system introduced in a Danish hospital to produce patient records...

  19. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of the mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modification in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was a precursor of a more general system reciprocally relating gesture to word.

  20. Development of a novel Italian speech-in-noise test using a roving-level adaptive method: adult population-based normative data.

    Science.gov (United States)

    Canzi, P; Manfrin, M; Locatelli, G; Nopp, P; Perotti, M; Benazzo, M

    2016-12-01

    In recent years the increasing development of hearing devices has led to a critical analysis of the standard methods employed to evaluate hearing function. Conventional investigation of hearing loss, based on pure-tone threshold audiometry and on mono-/disyllabic word lists presented in quiet conditions, is too far removed from everyday listening and has been shown to be inadequate. A speech-in-noise test using a roving-level adaptive method employs target and competing signals varying in level in order to reproduce everyday speaking conditions and explore a more complete sound range. Up to now, only a few roving-level adaptive tests have been published in the literature. We conducted a roving-level adaptive test in healthy Italian adults to produce new normative data for a language of Latin origin. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.
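
    To make the roving-level adaptive idea concrete, the sketch below roves the overall presentation level at random on every trial while a one-up/one-down rule adapts the signal-to-noise ratio toward the speech reception threshold. The step size, rove range, and trial count are assumptions, not the published procedure.

    ```python
    import random

    def roving_level_track(trial_correct, n_trials=20, start_snr=0.0,
                           step=2.0, rove=(55.0, 75.0)):
        """Adaptive SNR track with a roving presentation level.
        trial_correct(snr, level) -> bool is the listener's response."""
        snr = start_snr
        for _ in range(n_trials):
            level = random.uniform(*rove)   # rove the overall level (dB SPL)
            if trial_correct(snr, level):
                snr -= step                 # correct -> make the task harder
            else:
                snr += step                 # wrong -> make the task easier
        return snr  # converges near the 50%-correct point (the SRT)

    # Simulated listener whose true SRT is -4 dB SNR, independent of level
    print(roving_level_track(lambda snr, level: snr > -4.0))
    ```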

  1. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  2. Freedom of Speech Newsletter, February 1976.

    Science.gov (United States)

    Allen, Winfred G., Jr., Ed.

    The "Freedom of Speech Newsletter" is the communication medium, published four times each academic year, of the Freedom of Speech Interest Group, Western Speech Communication Association. Articles included in this issue are "What Is Academic Freedom For?" by Ralph Ross, "A Sociology of Free Speech" by Ray Heidt,…

  3. Coevolution of Human Speech and Trade

    NARCIS (Netherlands)

    Horan, R.D.; Bulte, E.H.; Shogren, J.F.

    2008-01-01

    We propose a paleoeconomic coevolutionary explanation for the origin of speech in modern humans. The coevolutionary process, in which trade facilitates speech and speech facilitates trade, gives rise to multiple stable trajectories. While a 'trade-speech' equilibrium is not an inevitable outcome for

  4. The acquisition of speech and language.

    Science.gov (United States)

    Woodfield, T A

    1999-01-01

    There are many theories behind speech and language acquisition. The role of parents in social interaction with their infant to facilitate speech and language acquisition is of paramount importance. Several pathological influences may hinder speech and language acquisition. Children's nurses need knowledge and understanding of how speech and language are acquired.

  5. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  6. Infant Perception of Atypical Speech Signals

    Science.gov (United States)

    Vouloumanos, Athena; Gelfand, Hanna M.

    2013-01-01

    The ability to decode atypical and degraded speech signals as intelligible is a hallmark of speech perception. Human adults can perceive sounds as speech even when they are generated by a variety of nonhuman sources including computers and parrots. We examined how infants perceive the speech-like vocalizations of a parrot. Further, we examined how…

  7. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2000-10-19

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  8. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2004-04-20

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  9. Binary Masking & Speech Intelligibility

    DEFF Research Database (Denmark)

    Boldt, Jesper

    experiments under ideal conditions or as experiments under more realistic conditions useful for real-life applications such as hearing aids. In the experiments under ideal conditions, the previously defined ideal binary mask is evaluated using hearing impaired listeners, and a novel binary mask -- the target binary mask -- is introduced. The target binary mask shows the same substantial increase in intelligibility as the ideal binary mask and is proposed as a new reference for binary masking. In the category of real-life applications, two new methods are proposed: a method for estimation of the ideal binary mask using a directional system and a method for correcting errors in the target binary mask. The last part of the thesis proposes a new method for objective evaluation of speech intelligibility...

  10. Steganalysis of recorded speech

    Science.gov (United States)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
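
    The classification stage described (a low-dimensional statistical feature vector fed to a non-linear support vector machine) might look roughly like the sketch below; the basis projection itself is omitted and the feature data are random placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # X: per-clip feature vectors extracted from the linear basis
    # representation (extraction omitted); y: 1 = stego, 0 = clean audio.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)  # placeholders

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)  # non-linear SVM
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```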

  11. Theater, Speech, Light

    Directory of Open Access Journals (Sweden)

    Primož Vitez

    2011-07-01

    Full Text Available This paper considers a medium as a substantial translator: an intermediary between the producers and receivers of a communicational act. A medium is a material support to the spiritual potential of human sources. If the medium is a support to meaning, then the relations between different media can be interpreted as a space for making sense of these meanings, a generator of sense: it means that the interaction of substances creates an intermedial space that conceives of a contextualization of specific meaningful elements in order to combine them into the sense of a communicational intervention. The theater itself is multimedia. A theatrical event is a communicational act based on a combination of several autonomous structures: text, scenography, light design, sound, directing, literary interpretation, speech, and, of course, the one that contains all of these: the actor in a human body. The actor is a physical and symbolic, anatomic, and emblematic figure in the synesthetic theatrical act because he reunites in his body all the essential principles and components of theater itself. The actor is an audio-visual being, made of kinetic energy, speech, and human spirit. The actor’s body, as a source, instrument, and goal of the theater, becomes an intersection of sound and light. However, theater as intermedial art is no intermediate practice; it must be seen as interposing bodies between conceivers and receivers, between authors and auditors. The body is not self-evident; the body in contemporary art forms is being redefined as a privilege. The art needs bodily dimensions to explore the medial qualities of substances: because it is alive, it returns to studying biology. The fact that theater is an archaic art form is also the purest promise of its future.

  12. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  13. Computational neuroanatomy of speech production.

    Science.gov (United States)

    Hickok, Gregory

    2012-01-05

    Speech production has been studied predominantly from within two traditions, psycholinguistics and motor control. These traditions have rarely interacted, and the resulting chasm between these approaches seems to reflect a level of analysis difference: whereas motor control is concerned with lower-level articulatory control, psycholinguistics focuses on higher-level linguistic processing. However, closer examination of both approaches reveals a substantial convergence of ideas. The goal of this article is to integrate psycholinguistic and motor control approaches to speech production. The result of this synthesis is a neuroanatomically grounded, hierarchical state feedback control model of speech production.

  14. Freedom of speech in Rome

    OpenAIRE

    Díaz de Valdés,José Manuel

    2009-01-01

    This paper reflects on the existence and exercise of freedom of speech in Rome. After asserting that Romans considered free speech as part of the liberties provided by the Republican regime, it is affirmed that it was not regarded as a human right but as a political entitlement. As nowadays, freedom of speech was valued not only for its importance to the speaker, but also for its relevance to the political system. The paper states that during the Republic, this right was intensively exercised...

  15. Visual speech influences speech perception immediately but not automatically.

    Science.gov (United States)

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  16. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper provides an interface between the machine translation and speech synthesis systems for converting English speech to Tamil text in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for the integration of speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been considered. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation to investigate the impact of the speech synthesis, the machine translation and the integration of the two components. Here we implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this system investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.

  17. Mapping acoustics to kinematics in speech

    Science.gov (United States)

    Bali, Rohan

    An accurate mapping from speech acoustics to speech articulator movements has many practical applications, as well as theoretical implications for speech planning and perception science. This work can be divided into two parts. In the first part, we show that a simple codebook can be used to map acoustics to speech articulator movements in natural, conversational speech. In the second part, we incorporate cost optimization principles that have been shown to be relevant in motor control tasks into the codebook approach. These cost optimizations are defined as minimization of the integral of the magnitude of velocity, acceleration and jerk of the speech articulators, and are implemented using a dynamic programming technique. Results show that incorporating cost minimization of speech articulator movements can significantly improve the mapping from acoustics to speech articulator movements. This suggests underlying physiological or neural planning principles used by speech articulators during speech production.
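
    A minimal sketch of the second part, assuming a precomputed codebook of paired acoustic and articulator vectors. It keeps only a first-difference (velocity) movement cost; acceleration and jerk terms would be added analogously, and the weight lam is a free parameter.

    ```python
    import numpy as np

    def codebook_dp(acoustics, cb_acoustic, cb_artic, k=8, lam=1.0):
        """Pick, per frame, among the k acoustically nearest codebook entries
        the sequence minimizing acoustic mismatch plus articulator movement
        (a Viterbi-style dynamic program)."""
        T = len(acoustics)
        cand = np.array([np.argsort(np.linalg.norm(cb_acoustic - a, axis=1))[:k]
                         for a in acoustics])            # candidates per frame
        cost = np.full((T, k), np.inf)
        back = np.zeros((T, k), dtype=int)
        cost[0] = [np.linalg.norm(cb_acoustic[c] - acoustics[0]) for c in cand[0]]
        for t in range(1, T):
            for j, c in enumerate(cand[t]):
                acoustic = np.linalg.norm(cb_acoustic[c] - acoustics[t])
                move = lam * np.linalg.norm(cb_artic[cand[t - 1]] - cb_artic[c],
                                            axis=1)      # velocity penalty
                back[t, j] = np.argmin(cost[t - 1] + move)
                cost[t, j] = acoustic + (cost[t - 1] + move)[back[t, j]]
        path = [int(np.argmin(cost[-1]))]                # backtrack best path
        for t in range(T - 1, 0, -1):
            path.append(back[t, path[-1]])
        idx = [cand[t][j] for t, j in enumerate(reversed(path))]
        return cb_artic[idx]                             # articulator trajectory
    ```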

  18. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  19. Perceived Speech Quality Estimation Using DTW Algorithm

    OpenAIRE

    S. Arsenovski; Z. Gacovski; S. Chungurski; I. Kraljevski

    2009-01-01

    In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet switched and mobile networks. The proposed system uses Dynamic Time Warping algorithm for test and received speech comparison. Several tests have been made on a test speech sample of a single speaker with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the used transmission channel and their corre...
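
    The comparison step rests on classic dynamic time warping; a minimal sketch, assuming per-frame feature vectors (e.g. MFCCs) for the reference and the received signal, with the length-normalized DTW distance serving as the quality estimate:

    ```python
    import numpy as np

    def dtw_distance(ref, deg):
        """Accumulated frame distance along the optimal DTW alignment
        between two feature sequences (arrays of shape [frames, dims])."""
        n, m = len(ref), len(deg)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(ref[i - 1] - deg[j - 1])   # local cost
                D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)   # normalize by a bound on path length
    ```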

  20. A speech reception in noise test for preschool children (the Galker-test): Validity, reliability and acceptance.

    Science.gov (United States)

    Lauritsen, Maj-Britt Glenn; Kreiner, Svend; Söderström, Margareta; Dørup, Jens; Lous, Jørgen

    2015-10-01

    This study evaluates the initial validity and reliability of the "Galker test of speech reception in noise", developed for Danish preschool children suspected of having problems with hearing or understanding speech, against strict psychometric standards, and assesses its acceptance by the children. The Galker test is an audio-visual, computerised, word discrimination test in background noise, originally comprising 50 word pairs. Three hundred and eighty-eight children attending ordinary day care centres and aged 3-5 years were included. With multiple regression and the Rasch item response model, it was examined whether the total score of the Galker test validly reflected item responses across subgroups defined by sex, age, bilingualism, tympanometry, audiometry and verbal comprehension. A total of 370 children (95%) accepted testing and 339 (87%) completed all 50 items. The analysis showed that 35 items fitted the Rasch model. Reliability was 0.75 before and after exclusion of the 15 non-fitting items. In the stepwise linear regression model, age group explained 20% of the variation in the Galker-35 score, sex 1%, second language at home 4%, tympanometry in the best ear 2%, and parental education another 2%. Other variables did not reach significance. The Galker-35 was well accepted by children down to the age of 3 years, and the results indicate that the scale provides construct-valid and reliable measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
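
    The stepwise variance figures quoted above come from nested regression models. A sketch of that kind of decomposition, assuming a pandas DataFrame with hypothetical column names and statsmodels available:

    ```python
    import statsmodels.formula.api as smf

    PREDICTORS = ["age_group", "sex", "second_language",
                  "tympanometry_best_ear", "parental_education"]

    def incremental_r2(df):
        """Fit nested OLS models, adding one predictor at a time, and print
        how much each step adds to the explained variance of the score."""
        r2_prev, terms = 0.0, []
        for p in PREDICTORS:
            terms.append(p)
            model = smf.ols("galker35_score ~ " + " + ".join(terms), data=df)
            r2 = model.fit().rsquared
            print(f"{p}: +{r2 - r2_prev:.1%} explained variance")
            r2_prev = r2
    ```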

  1. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Delayed Speech or Language Development

    Science.gov (United States)

    ... huge gains in their child's speech. A toddler's vocabulary should increase (to too many words to count) ... and language skills within the context of total development. The pathologist will do standardized tests and look ...

  3. Hearing and Speech at Seven

    Science.gov (United States)

    Sheridan, Mary D.; Peckham, Catherine S.

    1973-01-01

    Evaluated for social and educational aspects at 7 years of age were 133 children with moderate hearing loss, 46 children with severe unilateral hearing loss, and 215 children with normal hearing but with unintelligible speech. (DB)

  4. Why Go to Speech Therapy?

    Science.gov (United States)

    ... to change over time or for emotions and attitudes about your speech to change as you have new experiences. It is important for you to have a clear idea about your motivation for going to therapy because your reasons for ...

  5. Intercultural Communication and Speech Style

    OpenAIRE

    Haase, Fee-Alexandra

    2005-01-01

    In her article, "Intercultural Communication and Speech Style," Fee-Alexandra Haase discusses intercultural communication as a concept for the production and analysis of speeches and written texts. Starting with a theoretical and historical perspective, Haase exemplifies selected intercultural patterns found in different cultures. Further, based on definitions of style in rhetoric from different cultural backgrounds from the ancient Greek culture up to modern approaches of rhetoricians, Haase...

  6. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    This study asks how speakers adjust their speech to their addressees, focusing on the potential roles of cognitive representations such as partner models, automatic processes such as interactive alignment, and social processes such as interactional negotiation. The nature of addressee orientation... psycholinguistics and conversation analysis, and offers both overviews of child-directed, foreigner-directed and robot-directed speech and in-depth analyses of the processes involved in adjusting to a communication partner.

  7. Research of speech recognition methods

    OpenAIRE

    Prokopovič, Valerij

    2005-01-01

    Two speech recognition methods, Dynamic Time Warping and Hidden Markov model based methods, were investigated in this work. To estimate the efficiency of the methods, speaker-dependent and speaker-independent isolated word recognition experiments were performed. During experimental research it was determined that the Dynamic Time Warping method is suitable only for speaker-dependent speech recognition. The Hidden Markov model based method is suitable for both - speaker-dependent and speaker-independent spe...

  8. Articulatory representation and speech technology.

    Science.gov (United States)

    Schmidbauer, O; Casacuberta, F; Castro, M J; Hegerl, G; Höge, H; Sanchez, J A; Zlokarnik, I

    1993-01-01

    In this paper we demonstrate the feasibility and usefulness of articulation-based approaches in two major areas of speech technology: speech recognition and speech synthesis. Our articulatory recognition model estimates probabilities of categories of manner and place of articulation, which establish the articulatory feature vector. The transformation from the articulatory level to the symbolic level is performed by hidden Markov models or multi-layer perceptrons. Evaluations show that the articulatory approach is a good basis for speaker-independent and speaker-adaptive speech recognition. We are now working on a more realistic articulatory model for speech recognition. An algorithm based on an analysis-by-synthesis model maps the acoustic signal to 10 articulatory parameters which describe the position of the articulators. EMA (electro-magnetic articulograph) measurements recorded at the University of Munich provide good initial estimates of tongue coordinates. In order to improve articulatory speech synthesis we investigated an accurate physical model for the generation of the glottal source with the aid of a numerical simulation. This model takes into account nonlinear vortical flow and its interaction with sound waves. The simulation results can be used to improve the articulatory synthesis model developed by Ishizaka and Flanagan (1972).

  9. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation: Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures.

  10. The accuracy and reliability of an app-based audiometer using consumer headphones: pure tone audiometry in a normal hearing group.

    Science.gov (United States)

    Corry, Megan; Sanders, Michael; Searchfield, Grant D

    2017-09-01

    To undertake a preliminary evaluation of the test-retest reliability and accuracy of an iPad audiometer app using commercial earphones as a low-cost alternative to a clinical audiometer, in a restricted sample of normal-hearing participants. Twenty participants self-reporting normal hearing undertook four pure-tone audiometry tests in a single session. Two tests were performed with a 2-channel Type 1 audiometer (GSI-61) using EAR insert earphones and two tests with an iPad-based app (Audiogram Mobile) using Apple earbud headphones. Twenty normal-hearing participants (13 female and seven male, aged 21-26 years) were recruited for the test-retest and accuracy evaluations. The app yielded thresholds that differed from those of the audiometer (F(1, 19) = 16.635, p < 0.001); these differences, and the calibration of consumer headphones, need to be addressed before such combinations can be used with confidence.

  11. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech describes approaches to improve automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first ranked Arabic dialect in terms of number of speakers, and a high quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...

  12. Neural bases of accented speech perception

    Directory of Open Access Journals (Sweden)

    Patti eAdank

    2015-10-01

    Full Text Available The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Adank, Evans, Stuart-Smith, & Scott, 2009; Floccia, Goslin, Girard, & Konopczynski, 2006). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regards to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioural aspects of accent processing.

  13. Neural bases of accented speech perception.

    Science.gov (United States)

    Adank, Patti; Nuttall, Helen E; Banks, Briony; Kennedy-Higgins, Daniel

    2015-01-01

    The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Floccia et al., 2006; Adank et al., 2009). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regards to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioral aspects of accent processing.

  14. Elderly perception of speech from a computer

    Science.gov (United States)

    Black, Alan; Eskenazi, Maxine; Simmons, Reid

    2002-05-01

    An aging population still needs to access information, such as bus schedules. It is evident that they will be doing so using computers, and especially interfaces using speech input and output. This is a preliminary study of the use of synthetic speech for the elderly. In it, twenty persons between the ages of 60 and 80 were asked to listen to speech emitted by a robot (CMU's VIKIA) and to write down what they heard. All of the speech was natural prerecorded speech (not synthetic) read by one female speaker. There were four listening conditions: (a) only speech emitted, (b) robot moves before emitting speech, (c) face has lip movement during speech, (d) both (b) and (c). There were very few errors for conditions (b), (c), and (d), but errors existed for condition (a). The presentation will discuss the experimental conditions, show actual figures and try to draw conclusions for speech communication between computers and the elderly.

  15. Using click-evoked auditory brainstem response thresholds in infants to estimate the corresponding pure-tone audiometry thresholds in children referred from UNHS.

    Science.gov (United States)

    Lu, Tsun-Min; Wu, Fang-Wei; Chang, Hsiuwen; Lin, Hung-Ching

    2017-04-01

    To examine whether behavioral pure-tone audiometry (PTA) thresholds in children can be accurately estimated from the corresponding infants' click-evoked auditory brainstem response (ABR) thresholds through a retrospective review of data from a universal newborn hearing screening (UNHS) program in Taiwan. According to medical records from Mackay Memorial Hospital, Taipei Hospital District, 45,450 newborns received hearing screening during January 1999-December 2011. Among these newborns, 104 (82, both ears; 22, one ear; total, 186 ears) received regular follow-up and were recruited as subjects. The relationship between infant click-evoked ABR thresholds and the corresponding child PTA thresholds was determined through Pearson correlation coefficient and linear regression analyses. The correlation coefficient between click-evoked ABR thresholds and behavioral PTA thresholds at the average of frequencies of 1-4 and 2-4 kHz was 0.76 and 0.76, respectively. Linear regression analysis showed that behavioral audiometry thresholds at the average of frequencies of 1-4 and 2-4 kHz were accurately estimated from click-evoked ABR thresholds in 57% and 58% children, respectively. Click-evoked ABR testing is a reliable tool to cautiously estimate behavioral PTA thresholds at the average of frequencies of 1-4 and 2-4 kHz. For accurately performing hearing aid fitting and auditory rehabilitation in congenitally deaf infants, a combination of frequency-specific tone-burst ABR and click-evoked ABR should be used. Copyright © 2017 Elsevier B.V. All rights reserved.
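
    The threshold relationship reported here is a straightforward Pearson correlation and linear fit; a minimal sketch with made-up paired thresholds (dB HL), using scipy:

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Hypothetical paired data: infant click-evoked ABR thresholds and the
    # same ears' later behavioural PTA thresholds averaged over 2-4 kHz.
    abr = np.array([30, 35, 40, 50, 60, 70, 80, 90])
    pta = np.array([25, 30, 45, 50, 55, 75, 85, 95])

    fit = linregress(abr, pta)
    print(f"Pearson r = {fit.rvalue:.2f}")
    print(f"estimated PTA = {fit.slope:.2f} * ABR + {fit.intercept:.2f}")
    ```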

  16. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
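
    All three indices share a common core: an apparent per-band speech-to-noise ratio is clipped, rescaled to [0, 1], and summed with band-importance weights. A minimal sketch of that core, using the conventional +/-15 dB clipping range; the weights below are illustrative, not the standardized values.

    ```python
    import numpy as np

    def snr_based_index(speech_db, noise_db, weights):
        """Band-importance-weighted audibility, the shared core of
        STI/RASTI/SII-style metrics."""
        snr = np.clip(np.asarray(speech_db, float)
                      - np.asarray(noise_db, float), -15.0, 15.0)
        audibility = (snr + 15.0) / 30.0      # 0 = masked, 1 = fully audible
        w = np.asarray(weights, float)
        return float(np.sum(w / w.sum() * audibility))

    # Three octave bands with equal importance weights
    print(snr_based_index([60, 55, 50], [45, 50, 55], [1, 1, 1]))  # ~0.67
    ```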

  17. Neural pathways for visual speech perception.

    Science.gov (United States)

    Bernstein, Lynne E; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  18. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  19. Neural pathways for visual speech perception

    Science.gov (United States)

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  20. Speech perception of noise with binary gains

    DEFF Research Database (Denmark)

    Wang, DeLiang; Kjems, Ulrik; Pedersen, Michael Syskind

    2008-01-01

    For a given mixture of speech and noise, an ideal binary time-frequency mask is constructed by comparing speech energy and noise energy within local time-frequency units. It is observed that listeners achieve nearly perfect speech recognition from gated noise with binary gains prescribed by the ideal binary mask...
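
    The mask construction is simple to state in code; a minimal sketch, assuming the separate speech and noise signals (of equal length) are available, with a 0 dB local criterion as an assumption:

    ```python
    import numpy as np
    from scipy.signal import stft

    def ideal_binary_mask(speech, noise, fs, lc_db=0.0):
        """1 where local speech energy exceeds noise energy by lc_db,
        0 elsewhere, per time-frequency unit of the STFT."""
        _, _, S = stft(speech, fs=fs, nperseg=512)
        _, _, N = stft(noise, fs=fs, nperseg=512)
        local_snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12)
                                     / (np.abs(N) ** 2 + 1e-12))
        return (local_snr_db > lc_db).astype(float)

    # "Gated noise with binary gains": apply the mask to the noise STFT and
    # resynthesize with scipy.signal.istft, as in the listening experiments.
    ```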

  1. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  2. Detection of target phonemes in spontaneous and read speech

    NARCIS (Netherlands)

    Mehta, G.; Cutler, A.

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ

  3. Performance Pressure Enhances Speech Learning

    Science.gov (United States)

    Maddox, W. Todd; Koslov, Seth; Yi, Han-Gyol; Chandrasekaran, Bharath

    2015-01-01

    Real-world speech learning often occurs in high pressure situations such as trying to communicate in a foreign country. However, the impact of pressure on speech learning success is largely unexplored. In this study, adult, native speakers of English learned non-native speech categories under pressure or no-pressure conditions. In the pressure conditions, participants were informed that they were paired with a (fictitious) partner, and that each had to independently exceed a performance criterion for both to receive a monetary bonus. They were then informed that their partner had exceeded the criterion and that the fate of both bonuses depended upon the participant's performance. Our results demonstrate that pressure significantly enhanced speech learning success. In addition, neurobiologically-inspired computational modeling revealed that the performance advantage was due to faster and more frequent use of procedural learning strategies. These results integrate two well-studied research domains and suggest a facilitatory role of motivational factors in speech learning performance that may not be captured in traditional training paradigms. PMID:28077883

  4. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  5. Speech input interfaces for anaesthesia records

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Andersen, Henning Boje

    2009-01-01

    Speech recognition as a medical transcript tool is now common in hospitals and is steadily increasing...

  6. Speech Recognition: Its Place in Business Education.

    Science.gov (United States)

    Szul, Linda F.; Bouder, Michele

    2003-01-01

    Suggests uses of speech recognition devices in the classroom for students with disabilities. Compares speech recognition software packages and provides guidelines for selection and teaching. (Contains 14 references.) (SK)

  7. Modeling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Dau, Torsten

    2012-01-01

    by the normal as well as impaired auditory system. Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech. Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting the intelligibility of reverberant speech as well as noisy speech processed by spectral subtraction. However, the sEPSM cannot account for speech subjected to phase jitter, a condition in which the spectral structure of speech is destroyed, while the broadband temporal envelope is kept largely intact. In contrast...
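
    The SNRenv metric can be reduced to a one-band sketch: compare the normalized AC envelope power of the noisy speech with that of the noise alone. The full sEPSM uses gammatone and modulation filterbanks, which are omitted here.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def snr_env_db(noisy_speech, noise):
        """Single-band simplification of the envelope-domain SNR."""
        def env_power(x):
            env = np.abs(hilbert(x))            # temporal envelope
            ac = env - np.mean(env)             # remove the DC component
            return np.mean(ac ** 2) / np.mean(env) ** 2
        p_mix, p_noise = env_power(noisy_speech), env_power(noise)
        snr_env = max(p_mix - p_noise, 1e-6) / p_noise
        return 10 * np.log10(snr_env)
    ```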

  8. The Prosodic Components of Speech Melody.

    Science.gov (United States)

    Martin, Howard R.

    1981-01-01

    Defines speech melody, with special attention to the distinction between its prosodic and paralinguistic domains. Discusses the role of the prosodic characteristics (stress, center, juncture, pitch direction, pitch height, utterance unit, and utterance group) in producing meaning in speech. (JMF)

  9. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant…

  10. Speech Act Theory and Business Communication Conventions.

    Science.gov (United States)

    Ewald, Helen Rothschild; Stine, Donna

    1983-01-01

    Applies speech act theory to business writing to determine why certain letters and memos succeed while others fail. Specifically, shows how speech act theorist H. P. Grice's rules or maxims illuminate the writing process in business communication. (PD)

  11. DISORDERS IN THE SPEECH DEVELOPMENT: EARLY DETECTION AND TREATMENT

    Directory of Open Access Journals (Sweden)

    Vasilka RAZMOVSKA

    1998-09-01

    Full Text Available Introduction; causes of disorders in speech development; disorders in speech development, mental retardation and treatment; disorders in speech development, residual hearing and treatment; autism and disorders in speech development; bilingual and disordered speech development; speech of neglected children.

  12. DISORDERS IN THE SPEECH DEVELOPMENT: EARLY DETECTION AND TREATMENT

    OpenAIRE

    Vasilka RAZMOVSKA; Vasilka DOLEVSKA

    1998-01-01

    Introduction; causes of disorders in speech development; disorders in speech development, mental retardation and treatment; disorders in speech development, residual hearing and treatment; autism and disorders in speech development; bilingual and disordered speech development; speech of neglected children.

  13. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality of activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere: to support the production of vocal tract gestures that are not limited to speech processing.

  14. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  15. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  16. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    Science.gov (United States)

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…

  17. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  18. Inner Speech's Relationship with Overt Speech in Poststroke Aphasia

    Science.gov (United States)

    Stark, Brielle C.; Geva, Sharon; Warburton, Elizabeth A.

    2017-01-01

    Purpose: Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech…

  19. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  20. THE ONTOGENESIS OF SPEECH DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. E. Braudo

    2017-01-01

    Full Text Available The purpose of this article is to acquaint specialists working with children who have developmental disorders with the age-related norms for speech development. Many well-known linguists and psychologists have studied speech ontogenesis (logogenesis). Speech is a higher mental function, which integrates many functional systems. Speech development in infants during the first months after birth is ensured by innate hearing and the emerging ability to fix the gaze on the face of an adult. Innate emotional reactions also develop during this period, turning into nonverbal forms of communication. At about 6 months a baby starts to pronounce some syllables; at 7–9 months the baby repeats various sound combinations pronounced by adults. At 10–11 months a baby begins to react to words addressed to him or her. The first words usually appear at an age of 1 year; this is the start of the stage of active speech development. At this time it is acceptable if a child confuses or rearranges sounds, distorts or misses them. By the age of 1.5 years a child begins to understand abstract explanations of adults. Significant vocabulary enlargement occurs between 2 and 3 years; grammatical structures of the language are formed during this period (a child starts to use phrases and sentences). Preschool age (3–7 y. o.) is characterized by incorrect, but steadily improving, pronunciation of sounds and phonemic perception. The vocabulary increases; abstract speech and retelling are being formed. Children over 7 y. o. continue to improve grammar, writing and reading skills. The described stages may not have strict age boundaries, since they depend not only on the environment but also on the child’s mental constitution, heredity and character.

  1. Nonlinear Statistical Modeling of Speech

    Science.gov (United States)

    Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.

    2009-12-01

    Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and
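
    As an illustration of one of the invariants mentioned above, the sketch below estimates the correlation fractal dimension of a delay-embedded signal with a basic Grassberger-Procaccia procedure. This is a generic textbook construction, not the authors' implementation; the embedding parameters and radius range are arbitrary choices for the example.

      import numpy as np

      def delay_embed(x, dim=3, tau=5):
          """Time-delay embedding of a 1-D signal into dim-dimensional vectors."""
          n = len(x) - (dim - 1) * tau
          return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

      def correlation_dimension(x, dim=3, tau=5):
          """Grassberger-Procaccia estimate: slope of log C(r) versus log r."""
          X = delay_embed(x, dim, tau)
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          d = d[np.triu_indices_from(d, k=1)]       # pairwise distances
          radii = np.logspace(np.log10(np.percentile(d, 5)),
                              np.log10(np.percentile(d, 50)), 10)
          C = np.array([np.mean(d < r) for r in radii])   # correlation sums
          slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
          return slope

      # Example: a noisy sine should have a correlation dimension near 1
      t = np.linspace(0, 20 * np.pi, 1000)
      x = np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(len(t))
      print(f"estimated correlation dimension: {correlation_dimension(x):.2f}")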

  2. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Full Text Available Dereverberation is required in various speech processing applications such as handsfree telephony and voice-controlled systems, especially for signals recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions achieve a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  3. Looking for Rhythm in Speech

    OpenAIRE

    Fred Cummins

    2012-01-01

    A brief review is provided of the study of rhythm in speech. Much of that activity has focused on looking for empirical measures that would support the categorization of languages into discrete rhythm ‘types’. That activity has had little success, and has used the term ‘rhythm’ in increasingly unmusical and unintuitive ways. Recent approaches to conversation that regard speech as a whole-body activity are found to provide considerations of rhythm that are closer to the central, musical, sense...

  4. Looking for Rhythm in Speech

    Directory of Open Access Journals (Sweden)

    Fred Cummins

    2012-09-01

    Full Text Available A brief review is provided of the study of rhythm in speech. Much of that activity has focused on looking for empirical measures that would support the categorization of languages into discrete rhythm ‘types’. That activity has had little success, and has used the term ‘rhythm’ in increasingly unmusical and unintuitive ways. Recent approaches to conversation that regard speech as a whole-body activity are found to provide considerations of rhythm that are closer to the central, musical, sense of the term.

  5. Discriminative learning for speech recognition

    CERN Document Server

    He, Xiadong

    2008-01-01

    In this book, we introduce the background and mainstream methods of probabilistic modeling and discriminative parameter optimization for speech recognition. The specific models treated in depth include the widely used exponential-family distributions and the hidden Markov model. A detailed study is presented on unifying the common objective functions for discriminative learning in speech recognition, namely maximum mutual information (MMI), minimum classification error, and minimum phone/word error. The unification is presented, with rigorous mathematical analysis, in a common rational-functio
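
    For reference, the maximum mutual information (MMI) criterion named above is conventionally written as follows (the standard textbook form, with training utterances X_r, reference transcriptions W_r and acoustic model parameters lambda; this is not a formula quoted from the book):

      F_{\mathrm{MMI}}(\lambda) \;=\; \sum_{r=1}^{R} \log
        \frac{p_\lambda(X_r \mid W_r)\, P(W_r)}
             {\sum_{W} p_\lambda(X_r \mid W)\, P(W)}

    Maximizing this objective raises the posterior probability of each reference transcription relative to all competing hypotheses, which is what distinguishes discriminative training from maximum-likelihood training.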

  6. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced, speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.
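
    The deconvolution step described above can be made concrete with a toy frequency-domain estimate of a per-frame transfer function, H(f) = Y(f)E*(f) / (|E(f)|^2 + eps), where Y is the spectrum of the acoustic output frame, E that of the excitation estimate, and eps a small regularizer. This Wiener-style division is a generic illustration under our own assumptions, not the actual algorithm of the cited work.

      import numpy as np

      def frame_transfer_function(excitation, output, eps=1e-8):
          """Per-frame transfer function via regularized frequency-domain
          deconvolution: H(f) = Y(f) conj(E(f)) / (|E(f)|^2 + eps)."""
          E = np.fft.rfft(excitation)
          Y = np.fft.rfft(output)
          return Y * np.conj(E) / (np.abs(E) ** 2 + eps)

      # Toy check: circularly convolve a pulse train with a known response,
      # then recover that response from the estimated transfer function.
      n = 512
      excitation = np.zeros(n)
      excitation[::80] = 1.0                   # idealized glottal pulse train
      h_true = np.zeros(n)
      h_true[:64] = np.exp(-np.arange(64) / 8.0)
      output = np.fft.irfft(np.fft.rfft(excitation) * np.fft.rfft(h_true), n)
      h_est = np.fft.irfft(frame_transfer_function(excitation, output), n)
      print("max recovery error:", np.max(np.abs(h_est - h_true)))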

  7. Musicians do not benefit from differences in fundamental frequency when listening to speech in competing speech backgrounds

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; Whiteford, Kelly L.; Oxenham, Andrew J.

    2017-01-01

    …Here we studied a relatively large (N=60) cohort of young adults, equally divided between nonmusicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech… [No evidence was found] that musical training leads to improved speech intelligibility in complex speech or noise backgrounds.

  8. Regulation of speech in multicultural societies: introduction

    NARCIS (Netherlands)

    Maussen, M.; Grillo, R.

    2014-01-01

    What to do about speech which vilifies or defames members of minorities on the grounds of their ethnic or religious identity or their sexuality? How to respond to such speech, which may directly or indirectly cause harm, while taking into account the principle of free speech, has been much debated

  9. Cognitive functions in Childhood Apraxia of Speech

    NARCIS (Netherlands)

    Nijland, L.; Terband, H.; Maassen, B.

    2015-01-01

    Purpose: Childhood Apraxia of Speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional

  10. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    The second driving force is the impetus being provided by both government and industry for technologies to help break down domestic and international language barriers, these also being barriers to the expansion of policy and commerce. Speech-to-speech and speech-to-text translation are thus emerging as key ...

  11. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    Epoch sequence is useful to manipulate prosody in speech synthesis applications. Accurate estimation of epochs helps in characterizing voice quality features. Epoch extraction also helps in speech enhancement and multispeaker separation. In this tutorial article, the importance of epochs for speech analysis is discussed, ...
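
    One widely used epoch-extraction technique of the kind such tutorials survey is zero-frequency filtering (Murty & Yegnanarayana, 2008): the signal is passed twice through a resonator centered at 0 Hz, the resulting runaway trend is removed with a local mean, and positive-going zero crossings of the residue mark epoch (glottal closure) candidates. The sketch below is a simplified rendition of that idea under our own parameter choices (e.g., a 10 ms trend-removal window, roughly one average pitch period), not code from the cited article.

      import numpy as np

      def zero_frequency_epochs(x, fs, win_ms=10.0):
          """Epoch (glottal closure) candidates via zero-frequency filtering."""
          x = np.asarray(x, dtype=float)
          x = np.diff(x, prepend=x[0])          # first difference removes DC
          y = x.copy()
          for _ in range(2):                    # two passes of a 0 Hz resonator:
              out = np.zeros_like(y)            # y[n] = x[n] + 2 y[n-1] - y[n-2]
              for n in range(len(y)):
                  out[n] = y[n] + 2 * out[n - 1] - out[n - 2]
              y = out
          w = max(3, int(fs * win_ms / 1000) | 1)   # odd moving-average length
          kernel = np.ones(w) / w
          for _ in range(3):                    # iterated local-mean removal
              y = y - np.convolve(y, kernel, mode="same")
          z = np.nonzero((y[:-1] < 0) & (y[1:] >= 0))[0]   # rising zero crossings
          return z[(z > w) & (z < len(y) - w)]  # drop edge artifacts

      # Example: epochs of a 100 Hz pulse train should be ~80 samples apart
      fs = 8000
      x = np.zeros(fs // 2)
      x[::80] = 1.0
      print(np.diff(zero_frequency_epochs(x, fs))[:5])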

  12. Acoustics of Clear Speech: Effect of Instruction

    Science.gov (United States)

    Lam, Jennifer; Tjaden, Kris; Wilding, Greg

    2012-01-01

    Purpose: This study investigated how different instructions for eliciting clear speech affected selected acoustic measures of speech. Method: Twelve speakers were audio-recorded reading 18 different sentences from the Assessment of Intelligibility of Dysarthric Speech (Yorkston & Beukelman, 1984). Sentences were produced in habitual, clear,…

  13. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  14. Cognitive Functions in Childhood Apraxia of Speech

    Science.gov (United States)

    Nijland, Lian; Terband, Hayo; Maassen, Ben

    2015-01-01

    Purpose: Childhood apraxia of speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional problems. Method: Cognitive functions were investigated…

  15. DEVELOPMENT AND DISORDERS OF SPEECH IN CHILDHOOD.

    Science.gov (United States)

    KARLIN, ISAAC W.; AND OTHERS

    The growth, development, and abnormalities of speech in childhood are described in this text designed for pediatricians, psychologists, educators, medical students, therapists, pathologists, and parents. The normal development of speech and language is discussed, including theories on the origin of speech in man and factors influencing the normal…

  16. Development of binaural speech transmission index

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Drullman, R.

    2006-01-01

    Although the speech transmission index (STI) is a well-accepted and standardized method for objective prediction of speech intelligibility in a wide range of-environments and applications, it is essentially a monaural model. Advantages of binaural hearing to the intelligibility of speech are
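
    For context, the monaural STI referred to here is derived from the modulation transfer function m(F) of the transmission channel. In a standard simplified form (a textbook summary, not a formula from this paper), each band's modulation reduction is converted into an apparent signal-to-noise ratio, clipped to +/-15 dB, mapped to a transfer index, and averaged:

      \mathrm{SNR}_{\mathrm{app}}(F) \;=\; 10\,\log_{10}\frac{m(F)}{1-m(F)}, \qquad
      \mathrm{TI}(F) \;=\; \frac{\min\!\bigl(\max(\mathrm{SNR}_{\mathrm{app}}(F),\,-15),\,15\bigr)+15}{30}, \qquad
      \mathrm{STI} \;=\; \sum_{k=1}^{7} w_k\,\overline{\mathrm{TI}}_k

    where the transfer indices are averaged over the modulation frequencies in each of seven octave bands and combined with band weights w_k. A binaural extension must additionally specify how modulation transfer information from the two ears is combined, which is the problem addressed by the paper.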

  17. Speech and Debate as Civic Education

    Science.gov (United States)

    Hogan, J. Michael; Kurr, Jeffrey A.; Johnson, Jeremy D.; Bergmaier, Michael J.

    2016-01-01

    In light of the U.S. Senate's designation of March 15, 2016 as "National Speech and Debate Education Day" (S. Res. 398, 2016), it only seems fitting that "Communication Education" devote a special section to the role of speech and debate in civic education. Speech and debate have been at the heart of the communication…

  18. Speech Segmentation Using Bayesian Autoregressive Changepoint Detector

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1998-12-01

    Full Text Available This submission is devoted to the study of the Bayesian autoregressive changepoint detector (BCD) and its use for speech segmentation. Results of the detector application to autoregressive signals as well as to real speech are given. BCD basic properties are described and discussed. A novel two-step algorithm consisting of cepstral analysis and BCD for automatic speech segmentation is suggested.
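
    As a generic illustration of AR-model-based changepoint detection (a deliberately simplified, non-Bayesian stand-in for the BCD studied in the paper; all names and the least-squares fitting are our own choices), the sketch below scores each candidate split of a signal by how much two separately fitted AR models reduce the residual variance relative to a single model:

      import numpy as np

      def ar_residual_var(x, p=4):
          """Fit AR(p) by least squares and return the residual variance."""
          X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
          y = x[p:]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return np.var(y - X @ coef)

      def changepoint_score(x, t, p=4):
          """Log-likelihood-ratio-style score for a changepoint at index t."""
          n, nl, nr = len(x), t, len(x) - t
          full = ar_residual_var(x, p)
          left, right = ar_residual_var(x[:t], p), ar_residual_var(x[t:], p)
          return 0.5 * (n * np.log(full) - nl * np.log(left) - nr * np.log(right))

      # Example: white noise switching to a strongly correlated AR(1) process
      rng = np.random.default_rng(0)
      a = rng.standard_normal(400)
      b = np.zeros(400)
      for i in range(1, 400):
          b[i] = 0.95 * b[i - 1] + 0.3 * rng.standard_normal()
      x = np.concatenate([a, b])
      scores = [changepoint_score(x, t) for t in range(50, 750, 10)]
      print("detected change near:", 50 + 10 * int(np.argmax(scores)))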

  19. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  20. Production and Perception of Fast Speech

    NARCIS (Netherlands)

    Janse, E.

    2003-01-01

    This thesis reports on a series of experiments investigating how speakers produce and listeners perceive fast speech. The main research question is how the perception of naturally produced fast speech compares to the perception of artificially time-compressed speech. Research has shown that

  1. The interpersonal level in English: reported speech

    NARCIS (Netherlands)

    Keizer, E.

    2009-01-01

    The aim of this article is to describe and classify a number of different forms of English reported speech (or thought), and subsequently to analyze and represent them within the theory of FDG. First, the most prototypical forms of reported speech are discussed (direct and indirect speech);

  2. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread use of wavelet analysis in different applications of speech processing. The author examines development and research in these applications and summarizes the state-of-the-art research on wavelets in speech processing.

  3. Neural bases of accented speech perception

    OpenAIRE

    Patti eAdank; Nuttall, Helen E.; Briony eBanks; Dan eKennedy-Higgins

    2015-01-01

    The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Adank, Evans, Stuart-Smith, & Scott, 2009; Floccia, Goslin, Girard, & Konopczynski, 2006). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented...

  4. Speech-in-Speech Recognition: A Training Study

    Science.gov (United States)

    Van Engen, Kristin J.

    2012-01-01

    This study aims to identify aspects of speech-in-noise recognition that are susceptible to training, focusing on whether listeners can learn to adapt to target talkers ("tune in") and learn to better cope with various maskers ("tune out") after short-term training. Listeners received training on English sentence recognition in…

  5. Speech intelligibility of native and non-native speech

    NARCIS (Netherlands)

    Wijngaarden, S.J. van

    1999-01-01

    The intelligibility of speech is known to be lower if the talker is non-native instead of native for the given language. This study is aimed at quantifying the overall degradation due to acoustic-phonetic limitations of non-native talkers of Dutch, specifically of Dutch-speaking Americans who have

  6. An optimal speech processor for efficient human speech ...

    Indian Academy of Sciences (India)

    Abstract. The transmitter and the receiver in a communication system have to be designed optimally with respect to one another to ensure reliable and efficient communication. Following this principle, we derive an optimal filterbank for processing the speech signal in the listener's auditory system (receiver), so that maximum ...

  7. Speech perception in children with speech output disorders.

    NARCIS (Netherlands)

    Nijland, L.

    2009-01-01

    Research in the field of speech production pathology is dominated by describing deficits in output. However, perceptual problems might underlie, precede, or interact with production disorders. The present study hypothesizes that the level of the production disorders is linked to level of perception

  8. Speech entrainment enables patients with Broca's aphasia to produce fluent speech.

    Science.gov (United States)

    Fridriksson, Julius; Hubbard, H Isabel; Hudspeth, Sarah Grace; Holland, Audrey L; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-12-01

    A distinguishing feature of Broca's aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect 'speech entrainment' and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca's aphasia. In Experiment 1, 13 patients with Broca's aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca's area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production. Behavioural and

  9. Listeners' preference for computer-synthesized speech over natural speech of people with disabilities.

    Science.gov (United States)

    Stern, Steven E; Chobany, Chelsea M; Patel, Disha V; Tressler, Justin J

    2014-08-01

    There are few controlled experimental studies that examine reactions to people with speech disabilities. We conducted 2 studies designed to examine participants' reactions to persuasive appeals delivered by people with physical disabilities and mild to moderate dysarthria. Research participants watched video clips delivered by actors with bona fide disabilities and subsequently rated the argument, message, and the speaker. The first study (n = 165) employed a between-groups design that examined reactions to natural dysarthric speech, synthetic speech as entered into a keyboard by hand, and synthetic speech as entered into a keyboard with a headwand. The second study (n = 27) employed a within-groups design that examined how participants reacted to natural dysarthric speech versus synthetic speech as entered into a keyboard by hand. Both of these studies provide evidence that people rated the argument, message, and speaker more favorably when people with disabilities used synthetic speech than when they spoke in their natural voice. The implications are that although people react negatively to computer-synthesized speech, they prefer it to and find it more persuasive than the speech of people with disabilities. This appears to be the case even if the speech is only moderately impaired and is as intelligible as the synthetic speech. Hence, the decision to use synthetic speech versus natural speech can be further complicated by an understanding that even the intelligible speech of people with disabilities leads to more negative reactions than synthetic speech.

  10. Effects of multi-channel compression time constants on subjectively perceived sound quality and speech intelligibility.

    Science.gov (United States)

    Hansen, Martin

    2002-08-01

    The purpose of this study was to determine the influence of the compression time constants in a multi-channel compression hearing aid on both subjectively assessed speech intelligibility and sound quality in realistic binaural acoustical situations for normal-hearing and hearing-impaired listeners. A nonlinear hearing aid with 15 independent compression channels of approximated critical bandwidth was simulated on a personal computer. Various everyday-life situations containing different sounds such as speech and speech in noise were recorded binaurally through original hearing aid microphones placed in BTE hearing aid cases. Two experiments were run with normal-hearing and hearing-impaired subjects. For each subject, hearing thresholds were established using in situ audiometry. The static I/O-curve parameters in all channels of the hearing aid were then adjusted so that normal speech received an insertion gain corresponding to the NAL-R formula (Byrne & Dillon, 1986). The compression ratio was kept constant at 2.1:1. In the first experiment, with six normal-hearing and six hearing-impaired subjects, the hearing aid was programmed to four different settings by changing only the compression time constants while all the parameters describing the static nonlinear Input/Output curve were kept constant. The compression threshold was set to a very low value. In the second experiment, with seven normal-hearing and eight hearing-impaired subjects, the hearing aid was programmed to four settings by changing the release time constants and the compression threshold while all other remaining parameters were kept constant. Using a complete A/B pair-comparison procedure, subjects were presented binaurally with the amplified sounds and asked to subjectively assess their preference for each hearing aid setting with regard to speech intelligibility and sound quality. In Experiment 1, all subjects showed a significant preference for the longest release time (4 sec) over the two
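
    To make the role of the time constants concrete, here is a minimal one-channel dynamic range compressor sketch (a generic illustration under our own assumptions, not the 15-channel simulation used in the study): a level estimator smoothed with separate attack and release time constants drives a static gain curve defined by a compression threshold and ratio.

      import numpy as np

      def compress(x, fs, thresh_db=-40.0, ratio=2.1,
                   attack_ms=5.0, release_ms=300.0):
          """One-channel compressor; attack/release smooth the level estimate."""
          a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
          a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
          level = 1e-6
          y = np.zeros_like(x)
          for n, s in enumerate(x):
              mag = abs(s)
              a = a_att if mag > level else a_rel   # fast attack, slow release
              level = a * level + (1 - a) * mag
              level_db = 20 * np.log10(max(level, 1e-6))
              over = max(level_db - thresh_db, 0.0)
              gain_db = -over * (1 - 1 / ratio)     # slope 1/ratio above threshold
              y[n] = s * 10 ** (gain_db / 20)
          return y

      # Example: a 20 dB input level step is compressed to roughly 20/2.1 dB
      fs = 16000
      t = np.arange(fs) / fs
      x = np.sin(2 * np.pi * 1000 * t) * np.where(t < 0.5, 0.01, 0.1)
      y = compress(x, fs)
      lo = np.max(np.abs(y[: int(0.4 * fs)]))
      hi = np.max(np.abs(y[int(0.9 * fs) :]))
      print(f"output step: {20 * np.log10(hi / lo):.1f} dB")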

  11. Paraconsistent semantics of speech acts

    NARCIS (Netherlands)

    Dunin-Kȩplicz, Barbara; Strachocka, Alina; Szałas, Andrzej; Verbrugge, Rineke

    2015-01-01

    This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace

  12. The Ontogenesis of Speech Acts

    Science.gov (United States)

    Bruner, Jerome S.

    1975-01-01

    A speech act approach to the transition from pre-linguistic to linguistic communication is adopted in order to consider language in relation to behavior and to allow for an emphasis on the use, rather than the form, of language. A pilot study of mothers and infants is discussed. (Author/RM)

  13. IAS IN SPEECH ACT STUDIES

    African Journals Online (AJOL)

    …they offer into the way in which cultural differences are encoded in speech act performance, has important implications for first and second language teaching in linguistically and culturally diverse societies. A brief look at some of … or not to use slang, etc. The question, then, is whether and to what extent the relationship…

  14. Aerosol Emission during Human Speech

    Science.gov (United States)

    Asadi, Sima; Ristenpart, William

    2016-11-01

    The traditional emphasis for airborne disease transmission has been on coughing and sneezing, which are dramatic expiratory events that yield easily visible droplets. Recent research suggests that normal speech can release even larger quantities of aerosols that are too small to see with the naked eye, but are nonetheless large enough to carry a variety of pathogens (e.g., influenza A). This observation raises an important question: what types of speech emit the most aerosols? Here we show that the concentration of aerosols emitted during healthy human speech is positively correlated with both the amplitude (loudness) and fundamental frequency (pitch) of the vocalization. Experimental measurements with an aerodynamic particle sizer (APS) indicate that speaking in a loud voice (95 decibels) yields up to fifty times more aerosols than in a quiet voice (75 decibels), and that sounds associated with certain phonemes (e.g., [a] or [o]) release more aerosols than others. We interpret these results in terms of the egressive airflow rate associated with each phoneme and the corresponding fundamental frequency, which is known to vary significantly with gender and age. The results suggest that individual speech patterns could affect the probability of airborne disease transmission.

  15. Going to a Speech Therapist

    Science.gov (United States)

    ... therapists help people of all ages with different speech and language disorders. Here are some of them: articulation (say: ar-tik-yuh-LAY-shun) disorders: This is when a kid has trouble saying certain sounds or saying words correctly. "Run" might come out ...

  16. The DNA of prophetic speech

    African Journals Online (AJOL)

    2014-03-04

    Having to speak words that can potentially abuse the divine connotation of prophetic speech to lend authority to one's own manipulative intent poses a daunting challenge to preachers. The metaphorical images triggered by 'DNA' and 'genetic engineering' are deployed to illustrate the ambivalent ...

  17. Gaucho Gazette: Speech and Sensationalism

    Directory of Open Access Journals (Sweden)

    Roberto José Ramos

    2013-07-01

    Full Text Available The Gaucho Gazette presents itself as a “popular newspaper”. It attempts to deny its tabloid aesthetic, claiming merely to disclose what happens, as if the media were a mere reflection of society. This paper seeks to understand and explain the newspaper’s sensationalism through its discourses, drawing on the semiology of Roland Barthes and its transdisciplinary possibilities.

  18. Gaucho Gazette: Speech and Sensationalism

    OpenAIRE

    Roberto José Ramos

    2013-01-01

    The Gaucho Gazette presents itself as a “popular newspaper”. It attempts to deny its tabloid aesthetic, claiming merely to disclose what happens, as if the media were a mere reflection of society. This paper seeks to understand and explain the newspaper’s sensationalism through its discourses, drawing on the semiology of Roland Barthes and its transdisciplinary possibilities.

  19. Impromptu Speech, Structure, and Process.

    Science.gov (United States)

    Enkvist, Nils Erik

    Impromptu speech can be defined in different ways: in terms of situational context, linguistic characteristics, and real-time processing. These approaches are not contradictory. There are certain situations that call for rapid processing of spoken discourse, and the needs of that processing are reflected in the structure of the text. The degree of…

  20. computer based speech signal processing

    African Journals Online (AJOL)

    An alternative tool for research in phonetics: computer based speech signal processing. EE Williams, RC Okoro, Z Lipcsey. No abstract available. Full text: http://dx.doi.org/10.4314/gjpas.v10i3.16424

  1. Acoustic Analysis of PD Speech

    Directory of Open Access Journals (Sweden)

    Karen Chenausky

    2011-01-01

    Full Text Available According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means of ensuring they do not unwittingly choose DBS settings which impair patients' communication.
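
    The composite measure described above is essentially an average of control-referenced z-scores. A minimal sketch of that normalization follows (variable names and the averaging rule are our assumptions; the study's exact composite may differ):

      import numpy as np

      def composite_distance(patient, control_matrix):
          """Mean absolute z-score of a patient's speech measures, normalized
          by the control group's per-measure mean and standard deviation."""
          mu = control_matrix.mean(axis=0)
          sd = control_matrix.std(axis=0, ddof=1)
          return np.mean(np.abs((patient - mu) / sd))

      # Example with 12 controls and 6 speech measures (placeholder data)
      rng = np.random.default_rng(0)
      controls = rng.normal(0.0, 1.0, size=(12, 6))
      patient = rng.normal(1.5, 1.0, size=6)      # shifted away from normal
      print(f"composite distance: {composite_distance(patient, controls):.2f} SD")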

  2. "Free Speech" and "Political Correctness"

    Science.gov (United States)

    Scott, Peter

    2016-01-01

    "Free speech" and "political correctness" are best seen not as opposing principles, but as part of a spectrum. Rather than attempting to establish some absolute principles, this essay identifies four trends that impact on this debate: (1) there are, and always have been, legitimate debates about the--absolute--beneficence of…

  3. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Consequently, this asks for complex, data-intensive machine learning. … environments (Houtgast & Steeneken 1973). Extensions involving … Avendano C, Hermansky H 1997 On the properties of temporal processing for speech in adverse environments. Workshop on …

  4. Fast Monaural Separation of Speech

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Dyrholm, Mads

    2003-01-01

    …a Factorial Hidden Markov Model, with non-stationary assumptions on the source autocorrelations modelled through the Factorial Hidden Markov Model, leads to separation in the monaural case. By extending Hansen's work we find that Roweis' assumptions are necessary for monaural speech separation. Furthermore we…

  5. An alternative strategy for universal infant hearing screening in tertiary hospitals with a high delivery rate, within a developing country, using transient evoked oto-acoustic emissions and brainstem evoked response audiometry.

    Science.gov (United States)

    Mathur, N N; Dhawan, R

    2007-07-01

    To formulate an alternative strategy for universal infant hearing screening in an Indian tertiary referral hospital with a high delivery rate, which could be extended to similar situations in other developing countries. The system should be able to diagnose, in a timely fashion, all infants with severe and profound hearing losses. One thousand newborns were randomly selected. All underwent testing with transient evoked oto-acoustic emissions (TEOAE) in the first 48 hours of life. All TEOAE failures were followed up and repeat tests were performed at three weeks, three months and six months of age. Infants with acceptable TEOAE results at any of the four ages were discharged from the study. Infants with unacceptable TEOAE results at all four ages underwent brainstem evoked response audiometry and oto-endoscopy. The 'pass rate' for TEOAE testing was calculated for all four ages. The time taken to perform TEOAE and brainstem evoked response audiometry was recorded for all subjects. These recordings were statistically analysed to find the most suitable strategy for universal hearing screening in our hospital. The pass rate for TEOAE was 79.0 per cent. Obstructed and collapsed external auditory canals were the two factors that significantly affected the specificity of TEOAE in infants; when screening is performed in the first days of life, false positive results are generated, such that a larger number of infants must undergo brainstem evoked response audiometry, wasting time and resources. This can easily be avoided by delaying TEOAE screening until three months of age, when it has a substantially lower false positive outcome. We expect that implementation of this alternative strategy in our hospital will maximise the benefits of such a programme.

  6. The motor theory of speech perception revisited.

    Science.gov (United States)

    Massaro, Dominic W; Chen, Trevor H

    2008-04-01

    Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counter-argument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.

  7. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm for comparing the test and received speech. Several tests have been made on a test speech sample of a single speaker with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the used transmission channel and their correlation has been observed.
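
    A minimal DTW alignment cost of the kind used for such reference/degraded comparisons can be sketched as follows (generic textbook DTW over feature-vector sequences; the paper's actual features and normalization are not specified here):

      import numpy as np

      def dtw_distance(ref, deg):
          """Dynamic Time Warping cost between two feature sequences
          (rows are frames), using the standard three-way recursion."""
          n, m = len(ref), len(deg)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(ref[i - 1] - deg[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m] / (n + m)                # length-normalized cost

      # Example: the degraded sequence is a time-warped, noisy copy of the reference
      rng = np.random.default_rng(0)
      ref = rng.standard_normal((100, 12))        # e.g., 12-dim spectral features
      deg = np.repeat(ref, 2, axis=0)[::3] + 0.05 * rng.standard_normal((67, 12))
      print(f"DTW cost: {dtw_distance(ref, deg):.3f}")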

  8. Predicting masking release of lateralized speech

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; MacDonald, Ewen; Dau, Torsten

    2016-01-01

    Locsei et al. (2015) [Speech in Noise Workshop, Copenhagen, 46] measured speech reception thresholds (SRTs) in anechoic conditions where the target speech and the maskers were lateralized using interaural time delays. The maskers were speech-shaped noise (SSN) and reversed babble with 2, 4, or 8… The largest masking release (MR) was observed when all maskers were on the opposite side of the target. The data in the conditions containing only energetic masking and modulation masking could be accounted for using a binaural extension of the speech-based envelope power spectrum model [sEPSM; Jørgensen et…

  9. Text To Speech System for Telugu Language

    OpenAIRE

    M. Siva Kumar; E. Prakash Babu

    2014-01-01

    Telugu is one of the oldest languages in India. This paper describes the development of a Telugu Text-to-Speech (TTS) system. In Telugu TTS the input is Telugu text in Unicode. The voices are sampled from real recorded speech. The objective of a text-to-speech system is to convert an arbitrary text into its corresponding spoken waveform. Speech synthesis is the process of building machinery that can generate human-like speech from any text input to imitate human speakers. Text proc...

  10. Pattern recognition in speech and language processing

    CERN Document Server

    Chou, Wu

    2003-01-01

    Minimum Classification Error (MCE) Approach in Pattern Recognition, Wu Chou; Minimum Bayes-Risk Methods in Automatic Speech Recognition, Vaibhava Goel and William Byrne; A Decision Theoretic Formulation for Adaptive and Robust Automatic Speech Recognition, Qiang Huo; Speech Pattern Recognition Using Neural Networks, Shigeru Katagiri; Large Vocabulary Speech Recognition Based on Statistical Methods, Jean-Luc Gauvain; Toward Spontaneous Speech Recognition and Understanding, Sadaoki Furui; Speaker Authentication, Qi Li and Biing-Hwang Juang; HMMs for Language Processing Problems, Ri…

  11. The Prediction of Speech Recognition in Noise With a Semi-Implantable Bone Conduction Hearing System by External Bone Conduction Stimulation With Headband

    Directory of Open Access Journals (Sweden)

    Friedrich Ihler

    2016-09-01

    Full Text Available Semi-implantable transcutaneous bone conduction devices are treatment options for conductive and mixed hearing loss (CHL/MHL). For counseling of patients, realistic simulation of the functional result is desirable. This study compared speech recognition in noise with a semi-implantable transcutaneous bone conduction device to external stimulation with a bone conduction device fixed by a headband. Eight German-language adult patients were enrolled after a semi-implantable transcutaneous bone conduction device (Bonebridge, Med-El) was implanted and fitted. Patients received a bone conduction device for external stimulation (Baha BP110, Cochlear) fixed by a headband for comparison. The main outcome measure was speech recognition in noise (Oldenburg Sentence Test). Pure-tone audiometry was performed and subjective benefit was assessed using the Glasgow Benefit Inventory and Abbreviated Profile of Hearing Aid Benefit questionnaires. Unaided, patients showed a mean signal-to-noise ratio threshold of 4.6 ± 4.2 dB S/N for speech recognition. The aided results were −3.3 ± 7.2 dB S/N by external bone conduction stimulation and −1.2 ± 4.0 dB S/N by the semi-implantable bone conduction device. The difference between the two devices was not statistically significant, while the difference was significant between unaided and aided situation for both devices. Both questionnaires for subjective benefit favored the semi-implantable device over external stimulation. We conclude that it is possible to simulate the result of speech recognition in noise with a semi-implantable transcutaneous bone conduction device by external stimulation. This should be part of preoperative counseling of patients with CHL/MHL before implantation of a bone conduction device.

  12. The Prediction of Speech Recognition in Noise With a Semi-Implantable Bone Conduction Hearing System by External Bone Conduction Stimulation With Headband

    Science.gov (United States)

    Ihler, Friedrich; Blum, Jenny; Berger, Max-Ulrich; Weiss, Bernhard G.; Welz, Christian

    2016-01-01

    Semi-implantable transcutaneous bone conduction devices are treatment options for conductive and mixed hearing loss (CHL/MHL). For counseling of patients, realistic simulation of the functional result is desirable. This study compared speech recognition in noise with a semi-implantable transcutaneous bone conduction device to external stimulation with a bone conduction device fixed by a headband. Eight German-language adult patients were enrolled after a semi-implantable transcutaneous bone conduction device (Bonebridge, Med-El) was implanted and fitted. Patients received a bone conduction device for external stimulation (Baha BP110, Cochlear) fixed by a headband for comparison. The main outcome measure was speech recognition in noise (Oldenburg Sentence Test). Pure-tone audiometry was performed and subjective benefit was assessed using the Glasgow Benefit Inventory and Abbreviated Profile of Hearing Aid Benefit questionnaires. Unaided, patients showed a mean signal-to-noise ratio threshold of 4.6 ± 4.2 dB S/N for speech recognition. The aided results were −3.3 ± 7.2 dB S/N by external bone conduction stimulation and −1.2 ± 4.0 dB S/N by the semi-implantable bone conduction device. The difference between the two devices was not statistically significant, while the difference was significant between unaided and aided situation for both devices. Both questionnaires for subjective benefit favored the semi-implantable device over external stimulation. We conclude that it is possible to simulate the result of speech recognition in noise with a semi-implantable transcutaneous bone conduction device by external stimulation. This should be part of preoperative counseling of patients with CHL/MHL before implantation of a bone conduction device. PMID:27698259

  13. Screening protocols for the prevention of occupational noise-induced hearing loss: the role of conventional and extended high frequency audiometry may vary according to the years of employment.

    Science.gov (United States)

    Riga, Maria; Korres, George; Balatsouras, Dimitrios; Korres, Stavros

    2010-07-01

    Although occupational noise-induced hearing loss (NIHL) has become a major problem in industrialized societies, there is a notable lack of effective screening protocols to ensure its early diagnosis. The aim of this study was to detect a potential role of extended high frequency (EHF) audiometry in industrial hearing screening protocols. The population consisted of 151 persons, working for 8 hours daily in a noisy environment (90-110 dBA). The changes of hearing thresholds in industrial workers were analyzed, not only with respect to their age, as has been presented by previous studies, but also with respect to the duration of their previous employment. During the first 10 years of employment, the frequencies 12500, 14000 and 16000Hz were the only ones significantly affected. For the second decade of employment, thresholds were significantly elevated only at 2000 and 4000Hz. After exceeding 20 years of employment, the affected frequencies were 250, 500 and 1000Hz. The effects of age on hearing acuity were significant at all frequencies for the first 2 groups. EHF audiometry seems able to identify the first signs of NIHL, much earlier than conventional audiometry, and therefore may need to be implemented in the screening examinations especially of workers with less than 1 decade of employment. Hearing screening protocols could become more efficient by adjusting their frequency ranges according to the frequencies "at risk", which correspond to the duration of the workers' previous employment.

  14. Optimal Wavelets for Speech Signal Representations

    Directory of Open Access Journals (Sweden)

    Shonda L. Walker

    2003-08-01

    Full Text Available It is well known that in many speech processing applications, speech signals are characterized by their voiced and unvoiced components. Voiced speech components contain a dense frequency spectrum with many harmonics. The periodic or semi-periodic nature of voiced signals lends itself to Fourier processing. Unvoiced speech contains many high-frequency components and thus resembles random noise. Several methods for voiced and unvoiced speech representations that utilize wavelet processing have been developed. These methods seek to improve the accuracy of wavelet-based speech signal representations using adaptive wavelet techniques, superwavelets (which use a linear combination of adaptive wavelets), Gaussian methods, and a multi-resolution sinusoidal transform approach, to mention a few. This paper addresses the relative performance of these wavelet methods and evaluates the usefulness of wavelet processing in speech signal representations. In addition, this paper will also address some of the hardware considerations for the wavelet methods presented.
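
    As a small illustration of the voiced/unvoiced distinction that motivates these wavelet representations, the sketch below (assuming the PyWavelets package; all signal parameters are arbitrary) compares how a multilevel discrete wavelet transform distributes energy for a harmonic, voiced-like signal versus a noise-like, unvoiced-like one:

      import numpy as np
      import pywt  # PyWavelets, assumed installed: pip install PyWavelets

      def band_energies(x, wavelet="db4", level=4):
          """Relative energy per DWT band, coarsest (approximation) first."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          e = np.array([np.sum(c ** 2) for c in coeffs])
          return e / e.sum()

      fs = 8000
      t = np.arange(fs // 4) / fs
      voiced = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))
      unvoiced = np.random.default_rng(0).standard_normal(len(t))

      # Voiced energy concentrates in coarse bands; unvoiced spreads to fine bands
      print("voiced  :", np.round(band_energies(voiced), 2))
      print("unvoiced:", np.round(band_energies(unvoiced), 2))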

  15. Speech Enhancement with Natural Sounding Residual Noise Based on Connected Time-Frequency Speech Presence Regions

    Directory of Open Access Journals (Sweden)

    Sørensen Karsten Vandborg

    2005-01-01

    Full Text Available We propose time-frequency domain methods for noise estimation and speech enhancement. A speech presence detection method is used to find connected time-frequency regions of speech presence. These regions are used by a noise estimation method, and both the speech presence decisions and the noise estimate are used in the speech enhancement method. Different attenuation rules are applied to regions with and without speech presence to achieve enhanced speech with natural-sounding attenuated background noise. The proposed speech enhancement method has a computational complexity which makes it feasible for application in hearing aids. An informal listening test shows that the proposed speech enhancement method has significantly higher mean opinion scores than minimum mean-square error log-spectral amplitude (MMSE-LSA) and decision-directed MMSE-LSA.
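
    A stripped-down version of the region-dependent attenuation idea can be sketched as follows (our own simplification, assuming SciPy's STFT; the paper's presence detector, noise tracker and attenuation rules are more elaborate): noise is estimated from the quietest frames, a Wiener-style gain is applied where speech is judged present, and a fixed attenuation elsewhere so the residual noise keeps its natural spectral shape.

      import numpy as np
      from scipy.signal import stft, istft

      def enhance(x, fs, floor_db=-12.0, presence_factor=2.0):
          f, frames, X = stft(x, fs, nperseg=512)
          power = np.abs(X) ** 2
          # crude noise estimate: average of the 10% lowest-energy frames
          frame_e = power.sum(axis=0)
          quiet = power[:, frame_e <= np.quantile(frame_e, 0.1)]
          noise = quiet.mean(axis=1, keepdims=True)
          # speech presence: bin energy well above the noise estimate
          present = power > presence_factor * noise
          wiener = np.maximum(1.0 - noise / np.maximum(power, 1e-12), 0.0)
          floor = 10 ** (floor_db / 20.0)         # fixed residual-noise attenuation
          gain = np.where(present, np.maximum(wiener, floor), floor)
          _, y = istft(gain * X, fs, nperseg=512)
          return y[: len(x)]

      # Example: 440 Hz tone bursts in white noise
      fs = 16000
      t = np.arange(2 * fs) / fs
      clean = np.sin(2 * np.pi * 440 * t) * (np.sin(2 * np.pi * 1.0 * t) > 0)
      noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(len(t))
      enhanced = enhance(noisy, fs)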

  16. Perception of Speech Sounds in School-Aged Children with Speech Sound Disorders.

    Science.gov (United States)

    Preston, Jonathan L; Irwin, Julia R; Turcios, Jacqueline

    2015-11-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSE is highlighted, and clinical suggestions are provided.

  17. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate this issue, Tuomainen et al. (2005) used sine-wave speech stimuli created from three time-varying sine waves tracking the formants of a natural speech signal. Naïve observers tend not to recognize sine wave speech as speech but become able to decode its phonetic content when informed of the speech… of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase…

  18. Apraxia of speech: how reliable are speech and language therapists' diagnoses?

    Science.gov (United States)

    Mumby, Katharyn; Bowen, Audrey; Hesketh, Anne

    2007-08-01

    To discover how reliably speech and language therapists could diagnose apraxia of speech using their clinical judgement, by measuring whether they were consistent (intra-rater reliability), and whether their diagnoses agreed (inter-rater reliability). Video clips of people with communication difficulties following stroke were rated by four speech and language therapists who were given no definition of apraxia of speech, no training, and no opportunity for conferring. Videos were made of people following stroke in their homes. Ratings of the videos were carried out in the university lab under controlled conditions. Forty-two people with communication difficulties such as aphasia, apraxia of speech and dysarthria took part, and four specialist speech and language therapists acted as raters. Speech and language therapists' ratings of the presence and severity of apraxia of speech using videos. Intra-rater reliability was high for diagnosing (1) the presence of apraxia of speech (Cohen's kappas ranging from 0.90 to 1.00; 0.93 overall), and (2) the severity of apraxia of speech (kappa 0.84 to 0.92; 0.90 overall). The inter-rater reliability was also high for both the presence of apraxia of speech (kappa 0.86) and severity of apraxia of speech (0.74). Despite controversy over its nature and existence, specialist speech and language therapists show high levels of agreement on the diagnosis of apraxia of speech using their clinical judgement.

  19. Commencement Speech as a Hybrid Polydiscursive Practice

    Directory of Open Access Journals (Sweden)

    Светлана Викторовна Иванова

    2017-12-01

    Full Text Available Discourse and media communication researchers pay attention to the fact that popular discursive and communicative practices have a tendency towards hybridization and convergence. Discourse, understood as language in use, is flexible. Consequently, one and the same text can represent several types of discourse. A vivid example of this tendency is revealed in the American commencement speech / commencement address / graduation speech. A commencement speech is a speech addressed to university graduates which, in line with the modern trend, is delivered by outstanding media personalities (politicians, athletes, actors, etc.). The objective of this study is to define the specificity of the realization of polydiscursive practices within commencement speech. The research involves discursive, contextual, stylistic and definitive analyses. Methodologically the study is based on discourse analysis theory; in particular, the notion of a discursive practice as a verbalized social practice makes up the conceptual basis of the research. This research draws upon a hundred commencement speeches delivered by prominent representatives of American society from the 1980s to the present. In brief, commencement speech belongs to the institutional discourse that public speech embodies. Its institutional parameters are well represented in speeches delivered by people in power, such as American and university presidents. Nevertheless, as the results of the research indicate, the institutional character of commencement speech is not its only feature. Conceptual information analysis makes it possible to relate commencement speech to didactic discourse, as it is aimed at teaching university graduates how to deal with the challenges life is rich in. Discursive practices of personal discourse are also actively integrated into commencement speech discourse. More than that, existential discursive practices also find their way into the discourse under study. Commencement

  20. The most frequent speech issue in second age period

    OpenAIRE

    Kocmur, Katja

    2012-01-01

    I researched speech defects in preschool children, because more and more people experience speech defects today. I wanted to find out if there are many children with speech disorders in the kindergarten. The theoretical part of the thesis presents the connection between thinking and speaking. It particularly presents the child's speech development and the necessary conditions for speech development. Described are the factors which impact the development of speech, and the phases of speech development an...

  1. A hardware preprocessor for use in speech recognition: Speech Input Device SID3

    Science.gov (United States)

    Renger, R. E.; Manning, D. R.

    1983-05-01

    A device which reduces the amount of data sent to the computer for speech recognition, by extracting from the speech signal the information that conveys the meaning of the speech, all other data being discarded is presented. The design includes signal to noise ratios as low as 10 dB, public telephone frequency bandwidth and unconstrained speech. It produces continuously at its output 64 bits of digital information, which represents the way 16 speech parameters vary. The parameters cover speech quality, voice pitch, resonant frequency, level of resonance and unvoiced spectrum color. The receiving computer must have supporting software containing recognition algorithms adapted to SID3 parameters.
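
    As a rough illustration of the data-rate reduction, the 64-bit output frame can be thought of as sixteen packed parameter fields. The sketch below unpacks such a frame on the receiving side; the 4-bits-per-parameter layout is an assumption made for illustration (64 bits / 16 parameters), not the documented SID3 frame format.

```python
# Minimal sketch: unpack a 64-bit SID3-style frame into 16 speech parameters.
# The 4-bit-per-parameter layout is assumed, not taken from the SID3 spec.
def unpack_frame(frame: int) -> list[int]:
    """Split a 64-bit integer into sixteen 4-bit parameter values."""
    return [(frame >> (4 * i)) & 0xF for i in range(16)]

frame = 0x123456789ABCDEF0  # hypothetical frame from the device
print(unpack_frame(frame))  # 16 values in the range 0..15
```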

  2. Preschool teachers, speech therapists and parents: cooperation in recognition and treatment of preschool children speech disorders

    OpenAIRE

    Praček, Tamara

    2016-01-01

    The words by Ivo Škarić (2005) that we give ourselves to the world by using speech are a good reason to dwell upon what needs to be done to make speech the best possible (fluent, intelligible…). There are various factors that influence the development of a child's speech (physiological, sociological). A newly born child has certain speech predispositions (brain centres, speech organs) which cannot be developed unless the child hears speech. When born, a child »enters« his family's environment...

  3. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based...... on this model. The basic model used in this thesis is the harmonic model which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...

  4. Hearing impairment in children with congenital cytomegalovirus (CMV) infection based on distortion product otoacoustic emissions (DPOAE) and brain evoked response audiometry stimulus click (BERA Click) examinations

    Science.gov (United States)

    Airlangga, T. J.; Mangunatmadja, I.; Prihartono, J.; Zizlavsky, S.

    2017-08-01

    Congenital cytomegalovirus (congenital CMV) infection is a leading factor of nongenetic sensorineural hearing loss in children. Hearing loss caused by CMV infection does not have a pathognomonic configuration, hence further research is needed. The development of knowledge on hearing loss caused by congenital CMV infection is progressing in many countries. Due to a lack of research in the context of Indonesia, this study assesses hearing impairment in children with congenital CMV infection in Indonesia, more specifically in the Cipto Mangunkusumo Hospital. Our objective was to profile hearing impairment in children 0-5 years of age with congenital CMV infection using Distortion Product Otoacoustic Emissions (DPOAE) and Brain Evoked Response Audiometry Stimulus Click (BERA Click) examinations. This cross-sectional study was conducted in the Cipto Mangunkusumo Hospital from November 2015 to May 2016 with 27 children 0-5 years of age with congenital CMV infection. Of individual ears studied, 58.0% exhibited sensorineural hearing loss. There was a significant relationship between developmental delay and incidence of sensorineural hearing loss. Subjects with a developmental delay were 6.57 times more likely (CI 95%; 1.88-22.87) to experience sensorineural hearing loss. Congenital CMV infection has an important role in causing sensorineural hearing loss in children.
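
    The reported 6.57-fold increase in risk is an odds ratio with a confidence interval, which can be sketched as follows. The 2x2 counts below are hypothetical stand-ins; only the ratio and its CI are reported above.

```python
# Minimal sketch: odds ratio and 95% Wald CI from a 2x2 table
# (developmental delay vs. sensorineural hearing loss). The counts are
# hypothetical; the study reports only OR = 6.57 (95% CI 1.88-22.87).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = delayed ears with/without SNHL; c, d = non-delayed with/without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * z * se_log) for s in (-1, 1))
    return or_, lo, hi

print(odds_ratio_ci(18, 6, 8, 22))  # hypothetical counts for 54 ears
```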

  5. Three-year experience with the Sophono in children with congenital conductive unilateral hearing loss: tolerability, audiometry, and sound localization compared to a bone-anchored hearing aid.

    Science.gov (United States)

    Nelissen, Rik C; Agterberg, Martijn J H; Hol, Myrthe K S; Snik, Ad F M

    2016-10-01

    Bone conduction devices (BCDs) are advocated as an amplification option for patients with congenital conductive unilateral hearing loss (UHL), while other treatment options could also be considered. The current study compared a transcutaneous BCD (Sophono) with a percutaneous BCD (bone-anchored hearing aid, BAHA) in 12 children with congenital conductive UHL. Tolerability, audiometry, and sound localization abilities with both types of BCD were studied retrospectively. The mean follow-up was 3.6 years for the Sophono users (n = 6) and 4.7 years for the BAHA users (n = 6). In each group, two patients had stopped using their BCD. Tolerability was favorable for the Sophono. Aided thresholds with the Sophono were unsatisfactory, as the mean pure-tone average did not fall below 30 dB HL. Sound localization generally improved with both the Sophono and the BAHA, although localization abilities did not reach the level of normal hearing children. These findings, together with previously reported outcomes, are important to take into account when counseling patients and their caretakers. The selection of a suitable amplification option should always be made deliberately and on an individual basis for each patient in this diverse group of children with congenital conductive UHL.

  6. Speech-in-speech perception and executive function involvement.

    Science.gov (United States)

    Perrone-Bertolotti, Marcela; Tassin, Maxime; Meunier, Fanny

    2017-01-01

    The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word, implemented in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words and simultaneously listen to only one of two pronounced sentences. The attention of the participant was manipulated: the prime was in the pronounced sentence listened to by the participant or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction effect between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the involvement of executive functions in speech-in-noise understanding capacities.

  7. [Incomprehensible speech in children. Childhood apraxia of speech].

    Science.gov (United States)

    Meyer, S; Kühn, D; Ptok, M

    2012-05-01

    Some children referred to ENT physicians suffer from severe and seemingly therapy-resistant impairment of the articulation of speech. Apart from classical symptoms of specific language impairment (SLI), such as a delay in the acquisition of syntax or poor lexical competence, these children's speech is sometimes practically incomprehensible. Describing the disorder as SLI, although not incorrect, would nevertheless be inappropriate. The term childhood apraxia of speech (CAS) has been coined for such impairment. In this article the background, symptoms, diagnostics and therapy of CAS are reviewed. For this systematic review a selective literature search in PubMed was conducted. The etiology of CAS is not well known and genetic factors, neurological diseases and metabolic imbalances are assumed. Symptoms differ significantly among individuals as well as intraindividually. CAS is defined as impairment in planning and controlling articulatory movements, which has a severe impact on sound production. For ENT specialists it is important to be aware that CAS symptoms may lead to a severe impediment of verbal communication and subsequently also interfere with the normal socio-emotional development of an affected child. Thus, an intensive therapy regimen is mandatory. Studies with a high level of evidence concerning the sensitivity and specificity of diagnostic tools, as well as studies regarding the effectiveness and efficiency of therapeutic approaches are needed.

  8. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is the unexplained individual differences in speech perception performance even for individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due to either examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: the first is to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions; the second is to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics both across tasks and across sessions; and the third is to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing impaired listeners.

  9. Some articulatory details of emotional speech

    Science.gov (United States)

    Lee, Sungbok; Yildirim, Serdar; Bulut, Murtaza; Kazemzadeh, Abe; Narayanan, Shrikanth

    2005-09-01

    Differences in speech articulation among four emotion types, neutral, anger, sadness, and happiness are investigated by analyzing tongue tip, jaw, and lip movement data collected from one male and one female speaker of American English. The data were collected using an electromagnetic articulography (EMA) system while subjects produced simulated emotional speech. Pitch, root-mean-square (rms) energy and the first three formants were estimated for vowel segments. For both speakers, angry speech exhibited the largest rms energy and largest articulatory activity in terms of displacement range and movement speed. Happy speech is characterized by the largest pitch variability. It has higher rms energy than neutral speech but articulatory activity is rather comparable to, or less than, neutral speech. That is, happy speech is more prominent in voicing activity than in articulation. Sad speech exhibits the longest sentence duration and lower rms energy. However, its articulatory activity is no less than neutral speech. Interestingly, for the male speaker, articulation for vowels in sad speech is consistently more peripheral (i.e., more forwarded displacements) when compared to other emotions. However, this does not hold for the female subject. These and other results will be discussed in detail with associated acoustics and perceived emotional qualities. [Work supported by NIH.]

  10. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  11. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  12. Relative Salience of Speech Rhythm and Speech Rate on Perceived Foreign Accent in a Second Language.

    Science.gov (United States)

    Polyanskaya, Leona; Ordin, Mikhail; Busa, Maria Grazia

    2017-09-01

    We investigated the independent contribution of speech rate and speech rhythm to perceived foreign accent. To address this issue we used a resynthesis technique that allows neutralizing segmental and tonal idiosyncrasies between identical sentences produced by French learners of English at different proficiency levels and maintaining the idiosyncrasies pertaining to prosodic timing patterns. We created stimuli that (1) preserved the idiosyncrasies in speech rhythm while controlling for the differences in speech rate between the utterances; (2) preserved the idiosyncrasies in speech rate while controlling for the differences in speech rhythm between the utterances; and (3) preserved the idiosyncrasies both in speech rate and speech rhythm. All the stimuli were created in intoned (with imposed intonational contour) and flat (with monotonized, constant F0) conditions. The original and the resynthesized sentences were rated by native speakers of English for degree of foreign accent. We found that both speech rate and speech rhythm influence the degree of perceived foreign accent, but the effect of speech rhythm is larger than that of speech rate. We also found that intonation enhances the perception of fine differences in rhythmic patterns but reduces the perceptual salience of fine differences in speech rate.

  13. Apraxia of speech: an overview.

    Science.gov (United States)

    Ogar, Jennifer; Slama, Hilary; Dronkers, Nina; Amici, Serena; Gorno-Tempini, Maria Luisa

    2005-12-01

    Apraxia of speech (AOS) is a motor speech disorder that can occur in the absence of aphasia or dysarthria. AOS has been the subject of some controversy since the disorder was first named and described by Darley and his Mayo Clinic colleagues in the 1960s. A recent revival of interest in AOS is due in part to the fact that it is often the first symptom of neurodegenerative diseases, such as primary progressive aphasia and corticobasal degeneration. This article will provide a brief review of terminology associated with AOS, its clinical hallmarks and neuroanatomical correlates. Current models of motor programming will also be addressed as they relate to AOS and finally, typical treatment strategies used in rehabilitating the articulation and prosody deficits associated with AOS will be summarized.

  14. Dynamical quantification of schizophrenic speech.

    Science.gov (United States)

    Leroy, Fabrice; Pezard, Laurent; Nandrino, Jean-Louis; Beaune, Daniel

    2005-02-28

    Schizophrenic speech has been studied both at the clinical and linguistic level. Nevertheless, the statistical methods used in these studies do not specifically take into account the dynamical aspects of language. In the present study, we quantify the dynamical properties of linguistic production in schizophrenic and control subjects. Subjects' recall of a short story was encoded according to the succession of macro- and micro-propositions, and symbolic dynamical methods were used to analyze these data. Our results show the presence of a significant temporal organization in subjects' speech. Taking this structure into account, we show that schizophrenics connect micro-propositions significantly more often than controls. This impairment in accessing language at the highest level supports the hypothesis of a deficit in maintaining a discourse plan in schizophrenia.

  15. THE BASIS FOR SPEECH PREVENTION

    Directory of Open Access Journals (Sweden)

    Jordan JORDANOVSKI

    1997-06-01

    Full Text Available Speech is a tool for the accurate communication of ideas. When we talk about speech prevention as a practical realization of language, we are referring to the fact that it should comprise the elements of a criterion viewed from the perspective of standards. This criterion, in the broad sense of the word, presupposes an exact realization of the thought exchanged between the speaker and the recipient. The absence of this criterion becomes evident in the practical realization of language and brings forth consequences, often hidden very deeply in the human psyche. Their outer manifestation already represents a delayed reaction of the social environment. The foundation for overcoming and standardizing this phenomenon must be the anatomical-physiological patterns of the body, addressed through methods in concordance with the nature of the body.

  16. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    A limitation in many source separation tasks is that the number of source signals has to be known in advance. Further, in order to achieve good performance, the number of sources cannot exceed the number of sensors. In many real-world applications these limitations are too restrictive. We propose...... a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...... techniques with binary time-frequency masking. In the proposed method, the number of source signals is not assumed to be known in advance and the number of sources is not limited to the number of microphones. Our approach needs only two microphones and the separated sounds are maintained as stereo signals....
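
    A minimal sketch of the binary time-frequency masking step is given below. It assumes a blind-source-separation stage has already produced two estimated channels and simply keeps each time-frequency cell for whichever estimate dominates; the paper's full iterative extraction procedure is not reproduced.

```python
# Minimal sketch: binary time-frequency masking applied to two channels
# assumed to come from a prior blind-source-separation stage. Each T-F
# cell is assigned to whichever channel dominates in magnitude.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
x1 = rng.standard_normal(fs)  # stand-ins for two BSS output channels
x2 = rng.standard_normal(fs)

f, t, X1 = stft(x1, fs=fs, nperseg=512)
f, t, X2 = stft(x2, fs=fs, nperseg=512)

mask = np.abs(X1) > np.abs(X2)             # cells where channel 1 dominates
_, s1 = istft(X1 * mask, fs=fs, nperseg=512)
_, s2 = istft(X2 * ~mask, fs=fs, nperseg=512)
```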

  17. From Speech Acts to Semantics

    Directory of Open Access Journals (Sweden)

    Mackenzie Jim

    2014-03-01

    Full Text Available Frege introduced the notion of pragmatic force as what distinguishes statements from questions. This distinction was elaborated by Wittgenstein in his later works, and systematised as an account of different kinds of speech acts in formal dialogue theory by Hamblin. It lies at the heart of the inferential semantics more recently developed by Brandom. The present paper attempts to sketch some of the relations between these developments.

  18. Network Speech Systems Technology Program.

    Science.gov (United States)

    1979-09-30

    The simple scenario described above will work as long as intermediate nodes can find the required number of free outgoing slots to accommodate newly... buffer the number of packets that... participant's speech only when the channel is observed to be free. Because of the delay in the satellite transmission, there is a period of time between the

  19. Genetics Home Reference: FOXP2-related speech and language disorder

    Science.gov (United States)

    Genetics Home Reference entry on FOXP2-related speech and language disorder, with links to related resources: Children's Hospital Medical Center (Childhood Apraxia of Speech), Disease InfoSearch (Speech-language disorder 1), MalaCards (childhood apraxia of speech), and Orphanet.

  20. Design and realisation of an audiovisual speech activity detector

    NARCIS (Netherlands)

    Van Bree, K.C.

    2006-01-01

    For many speech telecommunication technologies a robust speech activity detector is important. An audio-only speech detector will give false positives when the interfering signal is speech or has speech characteristics. The video modality is suitable for solving this problem. In this report the approach

  1. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion in which visual speech alters auditory speech perception and by the advantage observed in auditory speech detection when a visual signal is present. Here we invest...

  2. Speech Prosody in Persian Language

    Directory of Open Access Journals (Sweden)

    Maryam Nikravesh

    2014-05-01

    Full Text Available Background: Verbal communication involves not only semantic and grammatical aspects such as vocabulary, syntax and phonemes, but also special voice characteristics collectively called speech prosody. Speech prosody is one of the important factors of communication and includes intonation, duration, pitch, loudness, stress, rhythm, etc. The aim of this survey is to study some prosodic factors, namely duration, fundamental frequency range and intonation contour. Materials and Methods: This study has a cross-sectional, descriptive-analytic design. The participants were 134 males and females between 18 and 30 years old with typical Persian speech. Two sentences, one interrogative and one declarative, were studied. Voice samples were analyzed with the Dr. Speech software (real analysis software), and the data were analyzed with one-way analysis of variance and independent t-tests; intonation contours were drawn for the sentences. Results: Mean duration differed significantly between sentence types, and between females and males. Fundamental frequency range did not differ significantly between sentence types. The fundamental frequency range in females is higher than in males. Conclusion: Duration is an effective factor in Persian prosody. The higher fundamental frequency range in females is due to the different anatomical and physiological mechanisms of the female phonation system and may also reflect patterns of language use among female Farsi speakers. The final part of the intonation contour rises in yes/no questions and falls in declarative sentences.

  3. Clear Speech - Mere Speech? How segmental and prosodic speech reduction shape the impression that speakers create on listeners

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2017-01-01

    and prosodic reduction levels (unreduced, moderately reduced, strongly reduced) are appropriately described by 13 physical, social, and cognitive attributes. The experiment shows that clear speech is not mere speech, and less clear speech is not just reduced either. Rather, results revealed a complex interplay...... of reduction levels and perceived speaker attributes in which moderate reduction can make a better impression on listeners than no reduction. In addition to its relevance in reduction models and theories, this interplay is instructive for various fields of speech application from social robotics to charisma...

  4. Effect of speech rate variation on acoustic phone stability in Afrikaans speech recognition

    CSIR Research Space (South Africa)

    Badenhorst, JAC

    2007-11-01

    Full Text Available The authors analyse the effect of speech rate variation on Afrikaans phone stability from an acoustic perspective. Specifically they introduce two techniques for the acoustic analysis of speech rate variation, apply these techniques to an Afrikaans...

  5. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: Introduction

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

  6. Speech Intelligibility Evaluation for Mobile Phones

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Cubick, Jens; Dau, Torsten

    2015-01-01

    In the development process of modern telecommunication systems, such as mobile phones, it is common practice to use computer models to objectively evaluate the transmission quality of the system, instead of time-consuming perceptual listening tests. Such models have typically focused on the quality...... of the transmitted speech, while little or no attention has been provided to speech intelligibility. The present study investigated to what extent three state-of-the art speech intelligibility models could predict the intelligibility of noisy speech transmitted through mobile phones. Sentences from the Danish...... Dantale II speech material were mixed with three different kinds of background noise, transmitted through three different mobile phones, and recorded at the receiver via a local network simulator. The speech intelligibility of the transmitted sentences was assessed by six normal-hearing listeners...

  7. Linear Predictive Coding for Speech Compression

    Directory of Open Access Journals (Sweden)

    Yasir Saleem

    2014-04-01

    Full Text Available The telecommunication industry is growing, and different services are rapidly introduced by competitors to attract users. Speech communication and the preservation of its quality is the most prevalent and common service provided by almost all companies. The objective of this project is the development of an LPC (Linear Predictive Coding) based voice coder. Speech attributes such as pitch, voiced/unvoiced decisions and silence were extracted, and speech was modeled using LDR (Levinson-Durbin Recursion) and SDA (Steepest Descent Algorithm). The LPC filter is analyzed and its model implemented. The complexity, delay and bitrate of LPC are considered and the tradeoffs are highlighted. The results were analyzed and the quality of speech was determined with spectrograms and by listening to the synthesized speech. Finally, the quality of the original and synthesized speech is discussed and shown graphically, and a brief comparison between the two above-mentioned techniques is added.
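
    The Levinson-Durbin recursion mentioned above solves the normal equations of linear prediction from the frame's autocorrelation. Below is a minimal sketch of LPC analysis along those lines; the frame, order and windowing are illustrative choices, not the project's settings.

```python
# Minimal sketch: LPC coefficients via the Levinson-Durbin recursion.
# The model predicts each sample from the previous `order` samples.
import numpy as np

def lpc(frame, order):
    """Return prediction polynomial a (a[0] = 1) and residual power."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]         # order update
        err *= 1.0 - k * k                                 # prediction error power
    return a, err

rng = np.random.default_rng(0)
t = np.arange(400) / 8000
frame = np.sin(2 * np.pi * 150 * t) + 0.01 * rng.standard_normal(400)  # toy frame
coeffs, residual_power = lpc(frame, order=10)
```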

  8. Acquirement and enhancement of remote speech signals

    Science.gov (United States)

    Lü, Tao; Guo, Jin; Zhang, He-yong; Yan, Chun-hui; Wang, Can-jin

    2017-07-01

    To address the challenges of non-cooperative and remote acoustic detection, an all-fiber laser Doppler vibrometer (LDV) is established. The all-fiber LDV system can offer the advantages of smaller size, lightweight design and robust structure, hence it is a better fit for remote speech detection. In order to improve the performance and the efficiency of LDV for long-range hearing, speech enhancement based on the optimally modified log-spectral amplitude (OM-LSA) algorithm is used. The experimental results show that comprehensible speech signals within a range of 150 m can be obtained by the proposed LDV. The signal-to-noise ratio (SNR) and mean opinion score (MOS) of the LDV speech signal can be increased by 100% and 27%, respectively, by using the speech enhancement technology. This all-fiber LDV, which incorporates the speech enhancement technology, can meet practical demands in engineering.
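
    The OM-LSA algorithm estimates a spectral gain from the a-priori SNR, weighted by a speech-presence probability. The sketch below shows only the general structure, with a simpler Wiener-style gain and a fixed initial noise estimate; it is a simplified stand-in, not the OM-LSA implementation used in the paper.

```python
# Simplified stand-in for the OM-LSA enhancement stage: a decision-directed
# a-priori SNR estimate drives a Wiener-style spectral gain. Real OM-LSA
# additionally weights the gain by a speech-presence probability.
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, fs, noise_frames=10, alpha=0.98, gain_floor=0.1):
    f, t, Y = stft(noisy, fs=fs, nperseg=512)
    noise_psd = np.mean(np.abs(Y[:, :noise_frames]) ** 2, axis=1)  # noise-only start assumed
    s_prev = np.zeros_like(noise_psd)
    out = np.zeros_like(Y)
    for n in range(Y.shape[1]):
        snr_post = np.abs(Y[:, n]) ** 2 / (noise_psd + 1e-12)
        snr_prio = (alpha * s_prev / (noise_psd + 1e-12)
                    + (1 - alpha) * np.maximum(snr_post - 1.0, 0.0))
        gain = np.maximum(snr_prio / (1.0 + snr_prio), gain_floor)  # Wiener-style gain
        out[:, n] = gain * Y[:, n]
        s_prev = np.abs(out[:, n]) ** 2
    return istft(out, fs=fs, nperseg=512)[1]

fs = 16000
rng = np.random.default_rng(0)
noisy = rng.standard_normal(fs)      # stand-in for a noisy recording
clean_estimate = enhance(noisy, fs)
```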

  9. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  10. Primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Jung, Youngsin; Duffy, Joseph R; Josephs, Keith A

    2013-09-01

    Primary progressive aphasia is a neurodegenerative syndrome characterized by progressive language dysfunction. The majority of primary progressive aphasia cases can be classified into three subtypes: nonfluent/agrammatic, semantic, and logopenic variants. Each variant presents with unique clinical features, and is associated with distinctive underlying pathology and neuroimaging findings. Unlike primary progressive aphasia, apraxia of speech is a disorder that involves inaccurate production of sounds secondary to impaired planning or programming of speech movements. Primary progressive apraxia of speech is a neurodegenerative form of apraxia of speech, and it should be distinguished from primary progressive aphasia given its discrete clinicopathological presentation. Recently, there have been substantial advances in our understanding of these speech and language disorders. The clinical, neuroimaging, and histopathological features of primary progressive aphasia and apraxia of speech are reviewed in this article. The distinctions among these disorders for accurate diagnosis are increasingly important from a prognostic and therapeutic standpoint.

  11. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing that go beyond nonlinear techniques alone. It shows how such processing exploits heuristic and psychological models of human interaction in order to succeed in implementations of socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech, to efforts to identify and detect speech features that help in the diagnosis of psychological and neuronal disease, to attempts to improve the effectiveness and performa...

  12. Giving speech a hand: gesture modulates activity in auditory cortex during speech perception.

    Science.gov (United States)

    Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella

    2009-03-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture-a fundamental type of hand gesture that marks speech prosody-might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.

  13. Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception

    OpenAIRE

    Hubbard, Amy L; Wilson, Stephen M.; Callan, Daniel E; Dapretto, Mirella

    2009-01-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neu...

  14. Speech masking speech in everyday communication : The role of inhibitory control and working memory capacity

    OpenAIRE

    Stenbäck, Victoria

    2016-01-01

    Age affects hearing and cognitive abilities. Older people, with and without hearing impairment (HI), exhibit difficulties in hearing speech in noise. Elderly individuals show greater difficulty in segregating target speech from distracting background noise, especially if the noise is competing speech with meaningful content, so-called informational maskers. Working memory capacity (WMC) has proven to be a crucial factor in comprehending speech in noise, especially for people with hearing los...

  15. Multisensory and sensorimotor interactions in speech perception

    OpenAIRE

    Kaisa eTiippana; Riikka eMöttönen; Jean-Luc eSchwartz

    2015-01-01

    This research topic presents speech as a natural, well-learned, multisensory communication signal, processed by multiple mechanisms. Reflecting the general status of the field, most articles focus on audiovisual speech perception and many utilize the McGurk effect, which arises when discrepant visual and auditory speech stimuli are presented (McGurk and MacDonald, 1976). Tiippana (2014) argues that the McGurk effect can be used as a proxy for multisensory integration p...

  16. Investigating Holistic Measures of Speech Prosody

    Science.gov (United States)

    Cunningham, Dana Aliel

    2012-01-01

    Speech prosody is a multi-faceted dimension of speech which can be measured and analyzed in a variety of ways. In this study, the speech prosody of Mandarin L1 speakers, English L2 speakers, and English L1 speakers was assessed by trained raters who listened to sound clips of the speakers responding to a graph prompt and reading a short passage.…

  17. Multilingualism and acquired neurogenic speech disorders

    OpenAIRE

    Ball, Martin J.

    2015-01-01

    Acquired neurogenic communication disorders can affect language, speech, or both. Although neurogenic speech disorders have been researched for a considerable time, much of this work has been restricted to a few languages (mainly English, with German, French, Japanese and Chinese also represented). Further, the work has concentrated on monolingual speakers. In this account, I aim to outline the main acquired speech disorders, and give examples of research into multilingual aspects of this top...

  18. PERSON DEIXIS IN USA PRESIDENTIAL CAMPAIGN SPEECHES

    OpenAIRE

    Nanda Anggarani Putri; Eri Kurniawan

    2015-01-01

    This study investigates the use of person deixis in presidential campaign speeches. This study is important because the use of person deixis in political speeches has been proved by many studies to give significant effects to the audience. The study largely employs a descriptive qualitative method. However, it also employs a simple quantitative method in calculating the number of personal pronouns used in the speeches and their percentages. The data for the study were collected from the trans...

  19. Music and speech prosody: A common rhythm

    OpenAIRE

    Maija eHausen; Ritva eTorppa; Salmela, Viljami R.; Martti eVainio; Teppo eSärkämö

    2013-01-01

    Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosod...

  20. Music and speech prosody: a common rhythm

    OpenAIRE

    Hausen, Maija; Torppa, Ritva; Salmela, Viljami R.; Vainio, Martti; Särkämö, Teppo

    2013-01-01

    Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosod...

  1. Speech Recognition by Human and Machine

    OpenAIRE

    Gulaker, Vegard

    2010-01-01

    Several feature extraction techniques, algorithms and toolkits are researched to investigate how speech recognition is performed. Spectrograms were found to be the simplest feature extraction techniques for visual representation of speech, and are explored and experimented with to see how phonemes are recognized. Hidden Markov models were found to be the best algorithms used for speech recognition. Hidden Markov model toolkit and Center for Spoken Language Understanding Toolkit, which are ...

  2. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech From Speech Delay: III. Theoretical Coherence of the Pause Marker with Speech Processing Deficits in Childhood Apraxia of Speech

    Science.gov (United States)

    Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method PM and other scores were obtained for 264 participants in 6 groups: CAS in idiopathic, neurogenetic, and complex neurodevelopmental disorders; adult-onset apraxia of speech (AAS) consequent to stroke and primary progressive apraxia of speech; and idiopathic speech delay. Results Participants with CAS and AAS had significantly lower scores than typically speaking reference participants and speech delay controls on measures posited to assess representational and transcoding processes. Representational deficits differed between CAS and AAS groups, with support for both underspecified linguistic representations and memory/access deficits in CAS, but for only the latter in AAS. CAS–AAS similarities in the age–sex standardized percentages of occurrence of the most frequent type of inappropriate pauses (abrupt) and significant differences in the standardized occurrence of appropriate pauses were consistent with speech processing findings. Conclusions Results support the hypotheses of core representational and transcoding speech processing deficits in CAS and theoretical coherence of the PM's pause-speech elements with these deficits. PMID:28384751

  3. Spotlight on Speech Codes 2007: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2007

    2007-01-01

    Last year, the Foundation for Individual Rights in Education (FIRE) conducted its first-ever comprehensive study of restrictions on speech at America's colleges and universities, "Spotlight on Speech Codes 2006: The State of Free Speech on our Nation's Campuses." In light of the essentiality of free expression to a truly liberal…

  4. Developmental apraxia of speech in children : quantitative assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  5. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  6. Speech analysis and synthesis based on pitch-synchronous segmentation of the speech waveform

    Science.gov (United States)

    Kang, George S.; Fransen, Lawrence J.

    1994-11-01

    This report describes a new speech analysis/synthesis method. This new technique does not attempt to model the human speech production mechanism. Instead, we represent the speech waveform directly in terms of the waveform contained within a single pitch period. A significant merit of this approach is the complete elimination of pitch interference because each pitch-synchronously segmented waveform does not include a waveform discontinuity. One application of this new speech analysis/synthesis method is the alteration of speech characteristics directly on raw speech. With the increased use of man-made speech in tactical voice message systems and virtual reality environments, such a speech generation tool is highly desirable. Another application is speech encoding operation at low data rates (2400 b/s or less). According to speech intelligibility tests, our new 2400 b/s encoder outperforms the current 2400-b/s LPC. This is also true in noisy environments. Because most tactical platforms are noisy (e.g., helicopter, high-performance aircraft, tank, destroyer), our 2400-b/s speech encoding technique will make tactical voice communication more effective; it will become an indispensable capability for future C4I.
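
    A minimal sketch of the core idea follows: estimate the pitch period and cut the waveform into one-period segments so that no segment contains a discontinuity. The autocorrelation-peak period estimate is an illustrative assumption; the report's actual segmentation procedure is not reproduced here.

```python
# Minimal sketch: pitch-synchronous segmentation of a voiced waveform.
# The autocorrelation-based period estimate is a simple illustrative choice.
import numpy as np

def pitch_period(x, fs, fmin=60, fmax=400):
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation
    lo, hi = int(fs / fmax), int(fs / fmin)
    return lo + int(np.argmax(r[lo:hi]))              # lag of strongest peak

def segment(x, fs):
    T = pitch_period(x, fs)
    return [x[i:i + T] for i in range(0, len(x) - T, T)]

fs = 8000
x = np.sin(2 * np.pi * 120 * np.arange(2000) / fs)    # toy 120 Hz "voiced" signal
periods = segment(x, fs)
print(len(periods), len(periods[0]))                  # ~29 chunks of ~67 samples
```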

  7. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.

  8. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  9. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for the speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR based solutions are increasingly being researched for speech and language therapy. ASR is a technology that converts human speech into text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as our depth of knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.
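
    The real-time feedback loop described above reduces to comparing the recognizer's output with the target prompt. The sketch below does this at the word level; the ASR call itself is omitted as an assumption, and real systems would also score at the phoneme level.

```python
# Minimal sketch: word-level feedback from an ASR result. The ASR call
# itself is omitted; `recognized` stands in for the engine's transcript.
def feedback(target: str, recognized: str) -> list[str]:
    """Return simple mismatch messages for a therapy prompt."""
    msgs = []
    t_words, r_words = target.lower().split(), recognized.lower().split()
    for i, expected in enumerate(t_words):
        heard = r_words[i] if i < len(r_words) else "(nothing)"
        if heard != expected:
            msgs.append(f"word {i + 1}: expected '{expected}', heard '{heard}'")
    return msgs

print(feedback("the cat sat down", "the cap sat"))
# ["word 2: expected 'cat', heard 'cap'", "word 4: expected 'down', heard '(nothing)'"]
```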

  10. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    Full Text Available The theory of speech acts, which clarifies what people do when they speak, is not about individual words or sentences that form the basic elements of human communication, but rather about particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking. Many things can be done by speaking.  Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches from El-Sadat and El-Sisi, belonging to different periods were analyzed to find out whether there were differences within this genre in the same culture or not. The study showed that there was a very small difference between these two speeches which were analyzed according to Searle’s theory of speech acts. In El Sadat’s speech, commissives came to occupy the first place. Meanwhile, in El–Sisi’s speech, assertives occupied the first place. Within the speeches of one culture, we can find that the differences depended on the circumstances that surrounded the elections of the Presidents at the time. Speech acts were tools they used to convey what they wanted and to obtain support from their audiences.

  11. E-learning-based speech therapy: a web application for speech training.

    NARCIS (Netherlands)

    Beijer, L.J.; Rietveld, T.C.; Beers, M.M. van; Slangen, R.M.; Heuvel, H. van den; Swart, B.J.M. de; Geurts, A.C.H.

    2010-01-01

    In The Netherlands, a web application for speech training, E-learning-based speech therapy (EST), has been developed for patients with dysarthria, a speech disorder resulting from acquired neurological impairments such as stroke or Parkinson's disease. In this report, the EST infrastructure

  12. Spotlight on Speech Codes 2012: The State of Free Speech on Our Nation's Campuses

    Science.gov (United States)

    Foundation for Individual Rights in Education (NJ1), 2012

    2012-01-01

    The U.S. Supreme Court has called America's colleges and universities "vital centers for the Nation's intellectual life," but the reality today is that many of these institutions severely restrict free speech and open debate. Speech codes--policies prohibiting student and faculty speech that would, outside the bounds of campus, be…

  13. Grommets and speech at three and six years in children born with total cleft or cleft palate.

    Science.gov (United States)

    Ezzi, Oumama El; Herzog, Georges; Broome, Martin; Trichet-Zbinden, Chantal; Hohlfeld, Judith; Cherpillod, Jacques; de Buys Roessingh, Anthony S

    2015-12-01

    Grommets may be considered the treatment of choice for otitis media with effusion (OME) in children born with a cleft, but the timing and precise indications for their use are not well established. The aim of the study is to compare the results of hearing and speech controls at three and six years of age in children born with a total cleft or cleft palate, with or without grommets. This retrospective study concerns non-syndromic children born between 1994 and 2006 and operated on for a unilateral cleft lip palate (UCLP) or a cleft palate (CP) alone, by one surgeon with the same schedule of operations (Malek procedure). We compared the results of clinical observation, tympanometry, audiometry and nasometry at three and six years of age. The Borel-Maisonny classification was used to evaluate velar insufficiency. None of the children had preventive grommets. The Fisher exact test was used for statistical analysis, with p<0.05 considered significant. Seventy-seven patients were analyzed in both groups. Abnormal hearing status was statistically more frequent in children with UCLP than in children with CP, at three and six years (80% vs. 64%, p<0.03, and 78% vs. 60%, p<0.02, respectively), with grommets in use at six years in 43% of cases in both groups. Improvement of hearing status between three and six years of age was present in 5% of children with UCLP and 9% with CP, without the use of grommets. The use of grommets between three and six years of age was not associated with any improvement in hearing status or speech results in children with UCLP or CP, with a low risk of tympanosclerosis. These results favor the use of grommets before the age of three, taking into account the risk of long-term tympanosclerosis. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. A neural mechanism for recognizing speech spoken by different speakers

    NARCIS (Netherlands)

    Kreitewolf, Jens; Gaudrain, Etienne; von Kriegstein, Katharina

    2014-01-01

    Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One

  15. Indonesian Automatic Speech Recognition For Command Speech Controller Multimedia Player

    Directory of Open Access Journals (Sweden)

    Vivien Arief Wardhany

    2014-12-01

    Full Text Available The purpose of multimedia device development is control through voice. Currently, such voices can be recognized only in English. To overcome this issue, recognition was implemented using an Indonesian language model, acoustic model, and dictionary. The automatic speech recognizer was built using the CMU Sphinx engine, with the English language database modified to an Indonesian one, and XBMC was used as the multimedia player. The experiment used 10 volunteers testing items based on 7 commands. The volunteers were classified by gender: 5 male and 5 female. Ten samples were taken for each command, with each volunteer performing 10 test commands and trying all 7 commands provided. Based on the classification table, the word “Kanan” was recognized correctly most often, at 83%, while “Pilih” was the lowest. The word with the most wrong classifications was “Kembali”, at 67%, while “Kanan” had the fewest. Among the recognition rates (RR) for male speakers, several commands such as “Kembali”, “Utama”, “Atas” and “Bawah” had low recognition rates. In particular, “Kembali” could not be recognized at all in the female voices, and in the male voices that command had only a 4% RR; this is because the command has no similar English word close to “kembali”, so the system did not recognize it. Also, the command “Pilih” had an 80% RR with female voices but only 4% with male voices. This is mostly because of the different voice characteristics of adult males and females: males have lower voice frequencies (85 to 180 Hz) than women (165 to 255 Hz). The results of the experiment showed that each speaker had a different recognition rate caused by differences in tone, pronunciation, and speed of speech. Further work is needed to improve the accuracy of the Indonesian automatic speech recognition system.
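
    A minimal sketch of how such a command recognizer might be wired up with the pocketsphinx Python bindings for the CMU Sphinx engine the paper uses. The model paths are hypothetical placeholders for the Indonesian acoustic model, language model, and dictionary described in the abstract, and this `LiveSpeech` usage is a plausible pattern rather than the authors' code.

```python
from pocketsphinx import LiveSpeech

# Six of the seven commands are named in the abstract; the seventh is not specified.
COMMANDS = {"kanan", "pilih", "kembali", "utama", "atas", "bawah"}

# Hypothetical paths: an Indonesian acoustic model, language model, and
# pronunciation dictionary must be prepared separately, as the paper describes.
speech = LiveSpeech(
    hmm="model/id-id/acoustic",
    lm="model/id-id/command.lm",
    dic="model/id-id/command.dic",
)

for phrase in speech:                # blocks, yielding one hypothesis per utterance
    word = str(phrase).strip().lower()
    if word in COMMANDS:
        print("Recognized command:", word)  # here one would drive the XBMC player
```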

  16. Brain stem evoked response audiometry of former drug users

    Directory of Open Access Journals (Sweden)

    Tainara Milbradt Weich

    2012-10-01

    Full Text Available Illicit drugs are known for their deleterious effects on the central nervous system; however, they can also affect the auditory system, causing alterations. OBJECTIVES: To analyze and compare the results of brainstem evoked response audiometry (BERA) of attendees of support groups for former drug users. METHODS: This is a cross-sectional, non-experimental, descriptive, quantitative study. The sample consisted of 17 subjects divided according to the type of drug most consumed: 10 individuals in the marijuana group (G1) and seven in the crack/cocaine group (G2). They were subdivided by duration of drug use: one to five years, six to 10 years, and more than 15 years. The assessment comprised anamnesis, pure-tone audiometry, acoustic immittance measures, and BERA. RESULTS: When comparing the results of G1 and G2, regardless of the duration of drug use, no statistically significant difference was observed in the absolute latencies or interpeak intervals. However, only five of the 17 subjects had BERA results appropriate for their age range. CONCLUSION: Regardless of the duration of drug use, the use of marijuana and crack/cocaine may cause diffuse brainstem alterations, compromising the transmission of the auditory stimulus.

  17. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    OpenAIRE

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2014-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback, however it...

  18. Comparison of the Effectiveness of Monitoring Cisplatin-Induced Ototoxicity with Extended High-Frequency Pure-Tone Audiometry or Distortion-Product Otoacoustic Emission

    Science.gov (United States)

    Yu, Kwang Kyu; Choi, Chi Ho; An, Yong-Hwi; Kwak, Min Young; Gong, Soo Jung; Yoon, Sang Won

    2014-01-01

    Background and Objectives To compare the effectiveness of monitoring cisplatin-induced ototoxicity in adult patients using extended high-frequency pure-tone audiometry (EHF-PTA) or distortion-product otoacoustic emission (DP-OAE) and to evaluate the concurrence of ototoxicity and nephrotoxicity in cisplatin-treated patients. Subjects and Methods EHF-PTA was measured at frequencies of 0.25, 0.5, 1, 2, 3, 4, 6, 8, 9, 11.2, 12.5, 14, 16, 18, and 20 kHz and DP-OAE at frequencies of 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, and 8 kHz in cisplatin-treated patients (n=10). Baseline evaluations were made immediately before chemotherapy and additional tests were performed before each of six cycles of cisplatin treatment. Laboratory tests to monitor nephrotoxicity were included before every cycle of chemotherapy. Results Four of 10 patients showed threshold changes on EHF-PTA. Five of 10 patients showed reductions in DP-OAE, but one was a false-positive result. The results of EHF-PTA and DP-OAE were consistent in two patients. Only one patient displayed nephrotoxicity on laboratory tests after the third cycle. Conclusions In our study, the incidence rate of cisplatin-induced ototoxicity was 40% with EHF-PTA or DP-OAE. Although both EHF-PTA and DP-OAE showed the same sensitivity in detecting ototoxicity, they did not produce the same results in all patients. These two hearing tests could be used to complement one another. Clinicians should use both tests simultaneously in every cycle of chemotherapy to ensure the detection of ototoxicity. PMID:25279227
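
    Monitoring protocols of this kind flag ototoxic change by comparing each follow-up audiogram against the baseline. The sketch below applies the widely cited ASHA (1994) criteria — a convention of the field, not something stated in this abstract — to a pair of audiograms; the frequencies and thresholds in the example are illustrative.

```python
from typing import Dict, List, Optional

def asha_ototoxic_change(baseline: Dict[float, Optional[float]],
                         followup: Dict[float, Optional[float]],
                         freqs: List[float]) -> bool:
    """Flag an ototoxic threshold shift per the ASHA (1994) criteria.

    Thresholds are in dB HL, keyed by frequency in kHz; None marks
    "no response". (Illustrative helper, not code from the study.)
    """
    shifts = []
    for f in freqs:
        b, p = baseline.get(f), followup.get(f)
        shifts.append(None if b is None or p is None else p - b)  # positive = worse
    # (a) >= 20 dB shift at any single test frequency
    if any(s is not None and s >= 20 for s in shifts):
        return True
    # (b) >= 10 dB shift at two adjacent test frequencies
    for s1, s2 in zip(shifts, shifts[1:]):
        if s1 is not None and s2 is not None and s1 >= 10 and s2 >= 10:
            return True
    # (c) loss of response at three consecutive frequencies with prior responses
    for f1, f2, f3 in zip(freqs, freqs[1:], freqs[2:]):
        if all(baseline.get(f) is not None and followup.get(f) is None
               for f in (f1, f2, f3)):
            return True
    return False

# Example: 15 dB shifts at two adjacent frequencies trigger criterion (b).
base = {1: 10, 2: 15, 3: 20, 4: 25}
post = {1: 15, 2: 30, 3: 35, 4: 30}
print(asha_ototoxic_change(base, post, freqs=[1, 2, 3, 4]))  # True
```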

  19. Changes in tonal audiometry in children with progressive sensorineural hearing loss and history of Neonatal Intensive Care Unit discharge. A 20 year long-term follow-up.

    Science.gov (United States)

    Martínez-Cruz, Carlos F; Poblano, Adrián; García-Alonso Themann, Patricia

    2017-10-01

    Newborns from neonatal intensive care units (NICU) are at high risk for sensorineural hearing loss (SNHL), so follow-up is needed for early diagnosis and intervention. Our objective here was to describe the features and changes of SNHL at different periods during a follow-up of almost 20 years. Risk factors for SNHL during development were analyzed. The audiological examination included brainstem auditory evoked potentials (BAEP) and transient evoked otoacoustic emissions (TEOAE) at birth; tonal audiometry (between 125 and 8000 Hz) and tympanometry were performed at 5, 10, 15, and 20 years of age. Sixty-five percent of cases presented bilateral absence of BAEP. At 5 years of age, the most frequent SNHL level was severe (42.5%), followed by moderate (22.5%) and profound (20%); in all cases, the SNHL was symmetrical, with a predominance of lesion at the high frequencies. Exchange transfusion was associated with a higher degree of SNHL (OR = 6.00, CI = 1.11-32.28, p < 0.02). In 55% of cases SNHL remained stable, but in 40% it was progressive. At the end of the study, six cases with moderate loss had progressed to the severe level and seven cases with severe loss to profound. Forty percent of infants with SNHL discharged from the NICU may present a progression in the hearing loss. Exchange transfusion was associated with a higher degree of SNHL. NICU graduates with SNHL merit long-term audiological follow-up throughout their lifespan. Copyright © 2017 Elsevier B.V. All rights reserved.
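
    The severity grades reported above (moderate, severe, profound) are conventionally derived from a pure-tone average (PTA) across the speech frequencies. A small sketch under one common grading scheme — the frequency set and cut-offs below are widespread clinical conventions assumed for illustration, not parameters reported by this study.

```python
def pure_tone_average(thresholds, freqs=(500, 1000, 2000, 4000)):
    """Average of pure-tone thresholds (dB HL) over the given frequencies (Hz)."""
    return sum(thresholds[f] for f in freqs) / len(freqs)

def grade_loss(pta):
    """Map a PTA to a severity label; cut-offs follow one common convention."""
    if pta <= 25:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 70:
        return "moderate"
    if pta <= 90:
        return "severe"
    return "profound"

# Example: a roughly flat 75 dB HL audiogram grades as "severe".
audiogram = {500: 70, 1000: 75, 2000: 75, 4000: 80}
print(grade_loss(pure_tone_average(audiogram)))
```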

  20. The role of screening audiometry in the management of otitis media with effusion in children with cleft palate in northern ireland.

    Science.gov (United States)

    Eastwood, Mary P; Hoo, Koh H; Adams, David; Hill, Christopher

    2014-07-01

    To determine the uptake and outcome of hearing screening in the cleft palate population in Northern Ireland (NI) and the rate of ventilation tube (VT) insertion over a 3-year period. In NI, hearing screening is offered in the neonatal period, at 9 months in the community, and at 2.5 years in the joint cleft clinic. Patients: Eighty-five children with cleft palate born between 2006 and 2008 in NI were eligible for all three screenings. A retrospective case note review was performed of tympanograms, audiometry, and VT insertion rates at each of the three time points. Results: In the neonatal period, all eligible patients were screened; 66 (77.6%) passed the screening and 19 (22.4%) failed, resulting in direct referral to ENT for consideration of VT. Results of the 9-month community screening were not made routinely available to the regional cleft service. At the 2.5-year clinic screening, all attending patients (n = 80) had documented screening; 52 (65%) passed and 28 (35%) failed. Forty-six patients (57.5%) had documented VT, and 9 (11.25%) were awaiting ENT review for consideration of VT. Ventilation tubes are not routinely inserted at the time of cleft repair in the NI population, and 57.5% of our cleft population had ventilation tubes inserted by 2.5 years. Cleft patients in NI have regular routine hearing assessments, and our current practice avoids universal ventilation tube insertion while identifying those who need further hearing management. Further research is needed to reach an international consensus on the insertion of VT in cleft patients.

  1. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  2. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  3. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or

  4. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  5. Pet-directed speech draws adult dogs’ attention more efficiently than Adult-directed speech

    OpenAIRE

    Jeannin, Sarah; Gilbert, Caroline; Amy, Mathieu; Leboucher, Gérard

    2017-01-01

    Humans speak to dogs using a special speech register called Pet-Directed Speech (PDS), which is very similar to the Infant-Directed Speech (IDS) used by parents when talking to young infants. These two types of speech share prosodic features that are distinct from typical Adult-Directed Speech (ADS): a high-pitched voice and increased pitch variation. So far, only one study has investigated the effect of PDS on dogs' attention. We video recorded 44 adult pet dogs and ...

  6. Speech-Based Information Retrieval for Digital Libraries

    National Research Council Canada - National Science Library

    Oard, Douglas W

    1997-01-01

    Libraries and archives collect recorded speech and multimedia objects that contain recorded speech, and such material may comprise a substantial portion of the collection in future digital libraries...

  7. [Electrographic Correlations of Inner Speech].

    Science.gov (United States)

    Kiroy, V N; Bakhtin, O M; Minyaeva, N R; Lazurenko, D M; Aslanyan, E V; Kiroy, R I

    2015-01-01

    To detect EEG patterns specific to verbal performance, gamma activity was investigated. A technique was created that allows the subject to initiate the mental pronunciation of words and phrases (inner speech). Wavelet analysis of the EEG demonstrated experimentally that the preparation and implementation stages are related to specific spatio-temporal patterns in the 64-68 Hz frequency range. Sustainable reproduction and efficient identification of such patterns could solve the fundamental problem of forming an alphabet of control commands for Brain-Computer Interface and Brain-to-Brain Interface systems.
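
    The paper links inner speech to patterns in a narrow 64-68 Hz gamma band. As a simpler stand-in for the wavelet analysis the authors used, the sketch below tracks that band's amplitude with a zero-phase band-pass filter and a Hilbert envelope; the 500 Hz sampling rate, filter settings, and synthetic test signal are assumptions of this illustration, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 500.0  # assumed EEG sampling rate in Hz (not given in the abstract)

def gamma_envelope(eeg: np.ndarray, lo: float = 64.0, hi: float = 68.0) -> np.ndarray:
    """Band-pass one EEG channel to 64-68 Hz and return its amplitude envelope."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    narrowband = sosfiltfilt(sos, eeg)   # zero-phase filtering
    return np.abs(hilbert(narrowband))   # instantaneous amplitude

# Synthetic check: 2 s of noise with a 66 Hz burst between 0.8 s and 1.2 s.
t = np.arange(0, 2, 1 / FS)
sig = np.random.randn(t.size) * 0.5
burst = (t > 0.8) & (t < 1.2)
sig[burst] += np.sin(2 * np.pi * 66 * t[burst])
env = gamma_envelope(sig)
print(f"mean envelope inside burst:  {env[burst].mean():.2f}")
print(f"mean envelope outside burst: {env[~burst].mean():.2f}")
```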

  8. The effects of behavioral speech therapy on speech sound production with adults who have cochlear implants.

    Science.gov (United States)

    Pomaville, Frances M; Kladopoulos, Chris N

    2013-04-01

    In this study, the authors examined the treatment efficacy of a behavioral speech therapy protocol for adult cochlear implant recipients. The authors used a multiple-baseline, across-behaviors and -participants design to examine the effectiveness of a therapy program based on behavioral principles and methods to improve the production of target speech sounds in 3 adults with cochlear implants. The authors included probe items in a baseline protocol to assess generalization of target speech sounds to untrained exemplars. Pretest and posttest scores from the Arizona Articulation Proficiency Scale, Third Revision (Arizona-3; Fudala, 2000) and measurement of speech errors during spontaneous speech were compared, providing additional measures of target behavior generalization. The results of this study provided preliminary evidence supporting the overall effectiveness and efficiency of a behavioral speech therapy program in increasing percent correct speech sound production in adult cochlear implant recipients. The generalization of newly trained speech skills to untrained words and to spontaneous speech was demonstrated. These preliminary findings support the application of behavioral speech therapy techniques for training speech sound production in adults with cochlear implants. Implications for future research and the development of aural rehabilitation programs for adult cochlear implant recipients are discussed.

  9. Method and apparatus for obtaining complete speech signals for speech recognition applications

    Science.gov (United States)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
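
    The central idea — record continuously into a circular buffer so that frames preceding the user's start command can be prepended to the utterance — is easy to sketch. A minimal illustration: buffer capacity, frame contents, and the pre-roll length are arbitrary choices, not values from the patent, and real code would also append post-command frames and run endpoint detection (e.g., with a Hidden Markov Model, as the patent describes).

```python
from collections import deque

class RingRecorder:
    """Continuously store the most recent audio frames in a circular buffer."""

    def __init__(self, max_frames: int = 500):
        self.buffer = deque(maxlen=max_frames)  # oldest frames drop off automatically

    def push(self, frame: bytes) -> None:
        self.buffer.append(frame)

    def snapshot(self, preroll: int = 30) -> list:
        """Return the last `preroll` frames at the moment of a user command;
        the caller appends frames captured afterwards, recovering speech that
        began before the command was issued."""
        return list(self.buffer)[-preroll:]

# Example: simulate a stream of 100 frames, then a push-to-talk event.
rec = RingRecorder()
for i in range(100):
    rec.push(f"frame-{i}".encode())
augmented_start = rec.snapshot(preroll=30)
print(augmented_start[0].decode(), "...", augmented_start[-1].decode())  # frame-70 ... frame-99
```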

  10. The Effects of TV on Speech Education

    Science.gov (United States)

    Gocen, Gokcen; Okur, Alpaslan

    2013-01-01

    Generally, the speaking aspect is not properly debated when discussing the positive and negative effects of television (TV), especially on children. So, to highlight this point, this study was first initiated by asking the question: "What are the effects of TV on speech?" and secondly, to transform the effects that TV has on speech in…

  11. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  12. Modelling context in automatic speech recognition

    NARCIS (Netherlands)

    Wiggers, P.

    2008-01-01

    Speech is at the core of human communication. Speaking and listening come so naturally to us that we do not have to think about them at all. The underlying cognitive processes are very rapid and almost completely subconscious. It is hard, if not impossible, not to understand speech. For computers on the

  13. Speech, intent, and the chilling effect

    National Research Council Canada - National Science Library

    Kendrick, Leslie

    2013-01-01

    ..., the comparison with defamation is instructive. Under the logic of the chilling effect, the expression most likely to be chilled is expression at the margins of protection. (132) For the defamatory speech governed by Sullivan, this marginal speech consists of possibly false but ultimately true information about public figures regarding a matter of pu...

  14. Anatomy and Physiology of the Speech Mechanism.

    Science.gov (United States)

    Sheets, Boyd V.

    This monograph on the anatomical and physiological aspects of the speech mechanism stresses the importance of a general understanding of the process of verbal communication. Contents include "Positions of the Body," "Basic Concepts Linked with the Speech Mechanism," "The Nervous System," "The Respiratory System--Sound-Power Source," "The…

  15. INVERSE FILTERING TECHNIQUES IN SPEECH ANALYSIS

    African Journals Online (AJOL)

    Dr Obe

    This paper reviews certain speech analytical techniques to which the label 'inverse filtering' has been applied. The unifying features of these techniques are presented, namely: 1. a basis in the source-filter theory of speech production; 2. the use of a network whose transfer function is the inverse of the transfer ...
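
    Feature 1 places inverse filtering in the source-filter theory: speech is modeled as a source signal passed through an all-pole vocal-tract filter 1/A(z), so filtering the speech back through an estimated A(z) recovers an approximation of the source (the prediction residual). A brief sketch using linear prediction to estimate A(z); the file path, sampling rate, and LPC order are assumptions of this illustration, and librosa/scipy are one convenient toolchain rather than the paper's method.

```python
import numpy as np
import librosa
import scipy.signal

# Load any mono speech file; the path is a placeholder.
y, sr = librosa.load("speech.wav", sr=16000)

order = 16                       # a typical LPC order for 16 kHz speech
a = librosa.lpc(y, order=order)  # coefficients of the all-pole model 1/A(z)

# Inverse filtering: run the speech through A(z) to estimate the source signal.
residual = scipy.signal.lfilter(a, [1.0], y)

# The residual should be spectrally flatter than the original speech.
def spectral_flatness(x):
    p = np.abs(np.fft.rfft(x)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(p))) / np.mean(p)

print(f"flatness: speech={spectral_flatness(y):.3f}, residual={spectral_flatness(residual):.3f}")
```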

  16. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks: (1) task-independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  17. Performing speech recognition research with hypercard

    Science.gov (United States)

    Shepherd, Chip

    1993-01-01

    The purpose of this paper is to describe a HyperCard-based system for performing speech recognition research and to instruct Human Factors professionals on how to use the system to obtain detailed data about the user interface of a prototype speech recognition application.

  18. Toddlers' recognition of noise-vocoded speech

    Science.gov (United States)

    Newman, Rochelle; Chatterjee, Monita

    2013-01-01

    Despite their remarkable clinical success, cochlear-implant listeners today still receive spectrally degraded information. Much research has examined normally hearing adult listeners' ability to interpret spectrally degraded signals, primarily using noise-vocoded speech to simulate cochlear implant processing. Far less research has explored infants' and toddlers' ability to interpret spectrally degraded signals, despite the fact that children in this age range are frequently implanted. This study examines 27-month-old typically developing toddlers' recognition of noise-vocoded speech in a language-guided looking study. Children saw two images on each trial and heard a voice instructing them to look at one item (“Find the cat!”). Full-spectrum sentences or their noise-vocoded versions were presented with varying numbers of spectral channels. Toddlers showed equivalent proportions of looking to the target object with full-speech and 24- or 8-channel noise-vocoded speech; they failed to look appropriately with 2-channel noise-vocoded speech and showed variable performance with 4-channel noise-vocoded speech. Despite accurate looking performance for speech with at least eight channels, children were slower to respond appropriately as the number of channels decreased. These results indicate that 2-yr-olds have developed the ability to interpret vocoded speech, even without practice, but that doing so requires additional processing. These findings have important implications for pediatric cochlear implantation. PMID:23297920
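
    Noise vocoding, the manipulation used here to simulate cochlear-implant processing, splits speech into frequency bands, extracts each band's amplitude envelope, and re-imposes those envelopes on band-limited noise, so more channels preserve more spectral detail. A minimal sketch of the standard technique; the band edges, filter order, and normalization are conventional choices of this illustration, not the study's exact stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(y: np.ndarray, fs: float, n_channels: int = 8,
                 f_lo: float = 100.0, f_hi: float = 7000.0) -> np.ndarray:
    """Replace spectral detail with band-limited noise, keeping band envelopes."""
    # Logarithmically spaced band edges are a common convention.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros_like(y)
    noise = np.random.randn(y.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, y)
        env = np.abs(hilbert(band))         # amplitude envelope of the band
        carrier = sosfiltfilt(sos, noise)   # noise limited to the same band
        out += env * carrier                # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
```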

  19. Pronunciation Modeling for Large Vocabulary Speech Recognition

    Science.gov (United States)

    Kantor, Arthur

    2010-01-01

    The large pronunciation variability of words in conversational speech is one of the major causes of low accuracy in automatic speech recognition (ASR). Many pronunciation modeling approaches have been developed to address this problem. Some explicitly manipulate the pronunciation dictionary as well as the set of the units used to define the…

  20. Stimulated Deep Neural Network for Speech Recognition

    Science.gov (United States)

    2016-09-08

    approaches yield state-of-the-art performance in a range of tasks, including speech recognition. However, the parameters of the network are hard to analyze...advantage of the smoothness constraints that stimulated training offers. The approaches are evaluated on two large vocabulary speech recognition