WorldWideScience

Sample records for spontaneous speech samples

  1. Qualifying phrases as a measure of spontaneity in speech.

    Science.gov (United States)

    Weintraub, W; Plaut, S M

    1985-11-01

    Although investigators have attempted to define the paralinguistic characteristics of spontaneous speech, there have been no systematic attempts to study its verbal reflections. An experiment comparing extemporaneous and impromptu speech samples of 10 freshman medical students showed that, of 10 verbal categories, only qualifying phrases significantly differentiated the two levels of spontaneity. A second study compared post-World War II presidential communications of different degrees of spontaneity. Speech samples were taken from inaugural addresses of seven presidents, and from both introductory remarks and responses to questions at their press conferences. The proportion of qualifying phrases significantly decreased as the amount of preparation increased, confirming the results of the student experiment. The use of qualifying phrases appears to represent, in part, an attempt by the speaker to avoid silence while retrieving and encoding memories from long-term storage.

  2. Motivational Projections of Russian Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Galina M. Shipitsina

    2017-06-01

    The article deals with the semantic, pragmatic, and structural features of the motivation of words, phrases, and dialogues in contemporary Russian popular speech. These features are marked by originality and unconventional usage. The language material is the result of the authors' direct observation of spontaneous verbal communication between people of different social and age groups. The words and remarks were analyzed against the communication system of the national Russian language and the cultural background of popular speech. The study found that spoken discourse offers additional ways of increasing the expressiveness of an utterance. Notably, spontaneous speech reveals lacunae in the nominative resources of the language and its vocabulary system. It is also shown that prefixation is an effective and regular means of presenting the same action. The most typical forms, ways, and means by which native speakers' linguistic creativity updates language resources were identified.

  3. Language in individuals with left hemisphere tumors: Is spontaneous speech analysis comparable to formal testing?

    Science.gov (United States)

    Rofes, Adrià; Talacchi, Andrea; Santini, Barbara; Pinna, Giampietro; Nickels, Lyndsey; Bastiaanse, Roelien; Miceli, Gabriele

    2018-01-31

    The relationship between spontaneous speech and formal language testing in people with brain tumors (gliomas) has rarely been studied. In clinical practice, formal testing is typically used, while spontaneous speech is less often evaluated quantitatively. However, spontaneous speech is quicker to sample and may be less prone to test/retest effects, making it a potential candidate for assessing language impairments when time is restricted or the patient is unable to undertake prolonged testing. We aimed to assess whether quantitative spontaneous speech analysis and formal testing detect comparable language impairments in people with gliomas. Specifically, we addressed (a) whether both measures detected comparable language impairments in our patient sample; and (b) which language levels, assessment times, and spontaneous speech variables were most often impaired in this group. Five people with left perisylvian gliomas performed a spontaneous speech task and a formal language assessment. Tests were administered before surgery, within a week after surgery, and seven months after surgery. Performance on spontaneous speech was compared with that of 15 healthy speakers. Language impairments were detected more often with both measures combined than with either measure independently. Lexical-semantic impairments were more common than phonological and grammatical impairments, and performance was equally impaired across assessment time points. Incomplete sentences and phonological paraphasias were the most common error types. In our sample, spontaneous speech analysis and formal testing detected comparable language impairments. We currently suggest that formal testing remains the better option overall, except when testing time is restricted or the patient is too tired to undergo formal testing. In these cases, spontaneous speech may provide a viable alternative, particularly if automated analysis of spontaneous speech becomes more readily available.

  4. Experiments on Detection of Voiced Hesitations in Russian Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Vasilisa Verkhodanova

    2016-01-01

    The development and popularity of voice user interfaces has made spontaneous speech processing an important research field. One of the main focus areas in this field is automatic speech recognition (ASR), which enables computers to recognize spoken language and transcribe it into text. However, ASR systems often work less efficiently for spontaneous than for read speech, since the former differs from any other type of speech in many ways, a prominent characteristic being the presence of speech disfluencies. These phenomena are an important feature of human-human communication and, at the same time, a challenging obstacle for speech processing tasks. In this paper we address the detection of voiced hesitations (filled pauses and sound lengthenings) in Russian spontaneous speech using different machine learning techniques, from grid search and gradient descent in rule-based approaches to data-driven methods such as ELM and SVM based on automatically extracted acoustic features. Experimental results on a mixed, quality-diverse corpus of spontaneous Russian speech indicate the efficiency of these techniques for the task in question, with SVM outperforming the other methods.
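    The paper's rule-based baseline can be made concrete with a minimal sketch: grid-searching a single duration threshold that separates hesitations from ordinary voiced segments. This is not the authors' implementation; the segment durations, labels, and threshold grid below are hypothetical.

```python
# Minimal sketch (not the paper's system): grid search over a duration
# threshold for flagging voiced hesitations. All data are hypothetical.

def grid_search_threshold(segments, thresholds):
    """segments: (duration_sec, is_hesitation) pairs.
    Returns the (threshold, accuracy) pair with the highest accuracy."""
    best_t, best_acc = None, -1.0
    for t in thresholds:
        correct = sum((dur >= t) == label for dur, label in segments)
        acc = correct / len(segments)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

segments = [
    (0.12, False), (0.18, False), (0.25, False),  # ordinary voiced segments
    (0.38, True), (0.45, True), (0.60, True),     # lengthenings / filled pauses
]
thr, acc = grid_search_threshold(segments, [0.1, 0.2, 0.3, 0.4, 0.5])
print(thr, acc)  # → 0.3 1.0 on this toy data
```

    A data-driven classifier such as the SVM the paper favors would replace the single-threshold rule with a decision boundary over richer acoustic features (duration, energy, spectral stability), while keeping the same evaluation loop.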

  5. Deixis in Spontaneous Speech of Jordanian Urban Arabic Native Speakers

    Science.gov (United States)

    Sa'aida, Zainab

    2017-01-01

    This study aims at describing the types and usages of deixis in the speech of native speakers of Jordanian Urban Arabic. The study was conducted in different settings in which the researcher's family members, friends, colleagues, and acquaintances took part. Data were collected through observing the spontaneous speech of native speakers of…

  6. Corticomuscular Coherence Is Tuned to the Spontaneous Rhythmicity of Speech at 2-3 Hz

    OpenAIRE

    Ruspantini, I.; Saarinen, T.; Belardinelli, P.; Jalava, A.; Parviainen, T.; Kujala, J.; Salmelin, Riitta

    2012-01-01

    Human speech features rhythmicity that frames distinctive, fine-grained speech patterns. Speech can thus be counted among rhythmic motor behaviors that generally manifest characteristic spontaneous rates. However, the critical neural evidence for tuning of articulatory control to a spontaneous rate of speech has not been uncovered. The present study examined the spontaneous rhythmicity in speech production and its relationship to cortex–muscle neurocommunication, which is essential for speech...

  7. Phonetic and Prosodic Characteristics of Disfluencies in French Spontaneous Speech

    OpenAIRE

    Christodoulides, George; Avanzi, Mathieu; 14th Conference on Laboratory Phonology

    2014-01-01

    A key difference between spontaneous speech and controlled laboratory speech is the prevalence of disfluencies in the former (e.g. Shriberg 1994). Disfluencies typically signal production problems, as the speaker incrementally constructs his message (Levelt 1989). However, in specific contexts, these events may be used as communicative devices, e.g. in order to manage dialogue interaction (Moniz et al. 2009) or indicate information status (Arnold et al. 2003). Disfluencies have recently attra...

  8. Tactile Modulation of Emotional Speech Samples

    Directory of Open Access Journals (Sweden)

    Katri Salminen

    2012-01-01

    Traditionally, only speech communicates emotions via mobile phone. In daily communication, however, the sense of touch mediates emotional information during conversation. The present aim was to study whether tactile stimulation affects emotional ratings of speech as measured on scales of pleasantness, arousal, approachability, and dominance. In Experiment 1, participants rated speech-only and speech-tactile stimuli. The tactile signal mimicked the amplitude changes of the speech. Experiment 2 studied whether the way the tactile signal was produced affected the ratings. The tactile signal either mimicked the amplitude changes of the speech sample in question or the amplitude changes of another speech sample; a concurrent static vibration condition was also included. The results showed that the speech-tactile stimuli were rated as more arousing and dominant than the speech-only stimuli. The speech-only stimuli were rated as more approachable than the speech-tactile stimuli, but only in Experiment 1. Variations in tactile stimulation also affected the ratings: when the tactile stimulation was static vibration, the speech-tactile stimuli were rated as more arousing than when the concurrent tactile stimulation mimicked speech samples. The results suggest that tactile stimulation offers new ways of modulating and enriching the interpretation of speech.

  9. Methodological Choices in Rating Speech Samples

    Science.gov (United States)

    O'Brien, Mary Grantham

    2016-01-01

    Much pronunciation research critically relies upon listeners' judgments of speech samples, but researchers have rarely examined the impact of methodological choices. In the current study, 30 German native listeners and 42 German L2 learners (L1 English) rated speech samples produced by English-German L2 learners along three continua: accentedness,…

  10. Deixis in Spontaneous Speech of Jordanian Urban Arabic Native Speakers

    Directory of Open Access Journals (Sweden)

    Zainab Sa'aida

    2017-02-01

    This study aims at describing the types and usages of deixis in the speech of native speakers of Jordanian Urban Arabic. The study was conducted in different settings in which the researcher's family members, friends, colleagues, and acquaintances took part. Data were collected through observing the spontaneous speech of native speakers of Jordanian Urban Arabic, and consist of transcriptions of deictic expressions, which were categorised into groups according to the types or usages of the deictic words. The data were translated and transliterated by the researcher, with the International Phonetic Alphabet symbols used for transcription. Findings of the study show that there are five types of deictic expressions in Jordanian Urban Arabic: personal, spatial, temporal, discourse, and social deixis. The study has also described the various usages of deictic words in Jordanian Urban Arabic: gestural, symbolic, and non-deictic.

  11. Production planning and coronal stop deletion in spontaneous speech

    Directory of Open Access Journals (Sweden)

    James Tanner

    2017-06-01

    Many phonological processes can be affected by segmental context spanning word boundaries, which often leads to variable outcomes. This paper tests the idea that some of this variability can be explained by reference to production planning. We examine coronal stop deletion (CSD), a variable process conditioned by preceding and upcoming phonological context, in a corpus of spontaneous British English speech, as a means of investigating a number of variables associated with planning: prosodic boundary strength, word frequency, conditional probability of the following word, and speech rate. From the perspective of production planning, (1) prosodic boundaries should affect deletion rate independently of following context; (2) given the locality of production planning, the effect of the following context should decrease at stronger prosodic boundaries; and (3) other factors affecting planning scope should modulate the effect of upcoming phonological material above and beyond the modulating effect of prosodic boundaries. We build a statistical model of CSD realization, using pause length as a quantitative proxy for boundary strength, and find support for these predictions. These findings are compatible with the hypothesis that the locality of production planning constrains variability in speech production, and have practical implications for work on CSD and other variable processes.
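    The paper's full statistical model is beyond the scope of an abstract, but its core quantitative move, using pause length as a proxy for prosodic boundary strength, can be sketched by binning tokens on pause duration and comparing deletion rates per bin. The bin edges, tokens, and rates below are hypothetical illustrations, not the study's data.

```python
# Sketch: coronal stop deletion (CSD) rate by boundary strength, with
# pause length standing in for boundary strength. Data are hypothetical.
from collections import defaultdict

def deletion_rate_by_boundary(tokens, edges=(0.06, 0.25)):
    """tokens: (pause_sec, deleted) pairs, deleted in {0, 1}.
    Pause-length bin edges stand in for boundary strength:
    bin 0 = weak, 1 = medium, 2 = strong boundary."""
    bins = defaultdict(lambda: [0, 0])  # bin -> [n_deleted, n_total]
    for pause, deleted in tokens:
        b = sum(pause >= e for e in edges)
        bins[b][0] += deleted
        bins[b][1] += 1
    return {b: d / n for b, (d, n) in sorted(bins.items())}

tokens = [(0.00, 1), (0.02, 1), (0.03, 0),   # weak boundary: frequent deletion
          (0.10, 1), (0.15, 0), (0.20, 0),   # medium boundary
          (0.40, 0), (0.50, 0), (0.60, 0)]   # strong boundary: rare deletion
rates = deletion_rate_by_boundary(tokens)
print(rates)  # deletion rate falls as boundary strength rises
```

    A real analysis would instead fit a (mixed-effects) logistic regression with pause length, frequency, and conditional probability as continuous predictors; the binned rates only illustrate the predicted direction of the boundary effect.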

  12. Automatic recognition of spontaneous emotions in speech using acoustic and lexical features

    NARCIS (Netherlands)

    Raaijmakers, S.; Truong, K.P.

    2008-01-01

    We developed acoustic and lexical classifiers, based on a boosting algorithm, to assess the separability of the arousal and valence dimensions in spontaneous emotional speech. The spontaneous emotional speech data were acquired by inviting subjects to play a first-person shooter video game.
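    The abstract names a boosting algorithm but not its exact form. The sketch below is a generic AdaBoost with one-dimensional decision stumps over a hypothetical acoustic feature, not the authors' classifier; labels stand in for a binary arousal dimension.

```python
# Generic AdaBoost with decision stumps (illustrative, stdlib only).
import math

def train_adaboost(X, y, rounds=5):
    """X: feature vectors, y: labels in {-1, +1}.
    Each round fits the best stump on the reweighted training set."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None  # (weighted_error, feature, threshold, polarity)
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    preds = [pol if x[f] >= t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, f, t, pol))
        preds = [pol if x[f] >= t else -pol for x in X]
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, preds, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x[f] >= t else -pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D "arousal feature": low values -> class -1, high values -> +1.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(X, y)
print([predict(model, x) for x in X])  # → [-1, -1, -1, 1, 1, 1]
```

    In the paper's setting, the acoustic classifier would boost over prosodic/spectral features and the lexical classifier over word-based features, with the two streams compared or combined.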

  13. Specific acoustic models for spontaneous and dictated style in indonesian speech recognition

    Science.gov (United States)

    Vista, C. B.; Satriawan, C. H.; Lestari, D. P.; Widyantoro, D. H.

    2018-03-01

    The performance of an automatic speech recognition system is affected by differences in speech style between the data the model is originally trained upon and incoming speech to be recognized. In this paper, the usage of GMM-HMM acoustic models for specific speech styles is investigated. We develop two systems for the experiments; the first employs a speech style classifier to predict the speech style of incoming speech, either spontaneous or dictated, then decodes this speech using an acoustic model specifically trained for that speech style. The second system uses both acoustic models to recognise incoming speech and decides upon a final result by calculating a confidence score of decoding. Results show that training specific acoustic models for spontaneous and dictated speech styles confers a slight recognition advantage as compared to a baseline model trained on a mixture of spontaneous and dictated training data. In addition, the speech style classifier approach of the first system produced slightly more accurate results than the confidence scoring employed in the second system.
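    The second system's decision rule, choosing between the two acoustic models' outputs by a decoding confidence score, can be sketched as follows. The hypotheses and scores are hypothetical; real ASR systems derive confidence from lattices or posterior probabilities rather than a raw per-frame average.

```python
# Sketch: pick the hypothesis of whichever acoustic model decoded the
# utterance with higher confidence. Scores below are made up.

def pick_by_confidence(hypotheses):
    """hypotheses: (text, total_acoustic_logprob, n_frames) per model.
    Uses per-frame average log-likelihood as a crude confidence score."""
    return max(hypotheses, key=lambda h: h[1] / h[2])

best_text, logp, frames = pick_by_confidence([
    ("spontaneous-model hypothesis", -450.0, 300),  # avg -1.50 per frame
    ("dictated-model hypothesis", -390.0, 300),     # avg -1.30 per frame
])
print(best_text)  # → the dictated-style model wins on this toy input
```

    The first system replaces this post-hoc comparison with an upfront speech-style classifier, so only one decoder has to run per utterance.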

  14. Characteristics of Swahili-English bilingual agrammatic spontaneous speech and the consequences for understanding agrammatic aphasia

    NARCIS (Netherlands)

    Abuom, Tom O.; Bastiaanse, Roelien

    Most studies on the spontaneous speech of individuals with agrammatism have focused almost exclusively on monolingual individuals. There is hardly any previous research on bilinguals, especially of structurally different languages, and none on the characterization of agrammatism in Swahili.

  15. Arousal and Valence prediction in spontaneous emotional speech: felt versus perceived emotion

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.; Jong, F.M.G. de

    2009-01-01

    In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech with the aim of comparing the added value of annotation of felt emotion versus annotation of perceived emotion, using speech material available in the TNO-GAMING corpus.

  16. A Comparison of the Use of Glottal Fry in the Spontaneous Speech of Young and Middle-Aged American Women.

    Science.gov (United States)

    Oliveira, Gisele; Davidson, Ashira; Holczer, Rachelle; Kaplan, Sara; Paretzky, Adina

    2016-11-01

    To compare vocal fry use in the spontaneous speech of young and middle-aged American women. This is a cross-sectional study. Subjects were 40 American women: 20 aged 18-25 years (mean = 22.9 years) and 20 aged 35-50 years (mean = 43.4 years). Participants were asked to describe all the steps involved in making a peanut butter and jelly sandwich and in doing laundry. Acoustic analysis of selected parameters and of the sentence position of vocal fry occurrences was performed. The acoustic parameters analyzed were mean, minimum, and maximum fundamental frequency (F0), the glottal fry/minute ratio, and the sentence position of glottal fry. Values of minimum fundamental frequency clearly show that there was vocal fry in the participants' spontaneous speech samples. The average minimum F0 was 74.0 Hz (standard deviation [SD] = 5.6) for the younger women and 73.1 Hz (SD = 6.7) for the middle-aged women (P = 0.527). The mean glottal fry for the medial and final sentence positions was similar for both groups. The mean glottal fry/minute ratio for young women was 13.8 (SD = 7.0), whereas for middle-aged women it was 11.3 (SD = 7.5; P = 0.402). All participants had at least one episode of glottal fry in their spontaneous speech sample, showing that vocal fry is present in the speech of both young and middle-aged women. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
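    The two acoustic measures reported here, minimum F0 and glottal fry episodes per minute, can be sketched from a frame-level pitch track. The 80 Hz fry ceiling, the frame rate, and the toy track below are assumptions for illustration, not values from the study.

```python
# Sketch: minimum voiced F0 and fry-episode rate from a pitch track.

def fry_stats(f0_track, frame_sec=0.01, fry_ceiling_hz=80.0):
    """f0_track: per-frame F0 estimates in Hz (0.0 = unvoiced frame).
    Returns (minimum voiced F0, fry episodes per minute), counting a new
    episode each time voiced F0 drops below the fry ceiling."""
    voiced = [f for f in f0_track if f > 0.0]
    episodes, in_fry = 0, False
    for f in voiced:
        if f < fry_ceiling_hz:
            if not in_fry:
                episodes += 1
                in_fry = True
        else:
            in_fry = False
    minutes = len(f0_track) * frame_sec / 60.0
    return min(voiced), episodes / minutes

track = [210.0, 195.0, 0.0, 74.0, 72.0, 190.0, 0.0, 76.0, 205.0]
min_f0, fry_per_min = fry_stats(track)
print(min_f0)  # → 72.0 (two fry episodes in this toy track)
```

    A real analysis would run on several minutes of speech (a nine-frame track gives a meaningless per-minute rate) and would typically take the F0 contour from a pitch tracker such as Praat's.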

  17. A Comparison between Three Methods of Language Sampling: Freeplay, Narrative Speech and Conversation

    Directory of Open Access Journals (Sweden)

    Yasser Rezapour

    2011-10-01

    Objectives: Spontaneous language sample analysis is an important part of the language assessment protocol. Language samples give us useful information about how children use language in the natural situations of daily life. The purpose of this study was to compare conversation, freeplay, and narrative speech with respect to mean length of utterance (MLU), type-token ratio (TTR), and the number of utterances. Methods: Using cluster sampling, a total of 30 five-year-old boys from Semnan with normal speech and language development were selected from the active kindergartens in Semnan city. Conversation, freeplay, and narrative speech were the three elicitation methods applied to obtain 15 minutes of each child's spontaneous language. Means for MLU, TTR, and the number of utterances were analyzed by repeated-measures (dependent) ANOVA. Results: The results showed no significant difference in the number of elicited utterances among the three language sampling methods. Narrative speech elicited a longer MLU than freeplay and conversation, and conversation elicited a higher TTR than freeplay and narrative speech. Discussion: The results suggest that in the clinical assessment of Persian-speaking children, it is better to use narrative speech to elicit a longer MLU and conversation to elicit a higher TTR.
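    The two measures compared across elicitation methods are easy to make concrete. Below is a minimal sketch over a hypothetical transcript; MLU is counted in words here, whereas clinical MLU is usually counted in morphemes, which requires morphological analysis.

```python
# Sketch: MLU (in words) and TTR over a list of transcribed utterances.

def mlu(utterances):
    """Mean length of utterance, counted in words."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def ttr(utterances):
    """Type-token ratio over the whole sample."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)

sample = ["the boy runs", "the boy sees the dog", "he runs"]
print(mlu(sample))  # → 3.333... (10 words / 3 utterances)
print(ttr(sample))  # → 0.6 (6 types / 10 tokens)
```

    Note that TTR falls as sample size grows (more tokens, proportionally fewer new types), which is one reason studies like this fix the sample at a constant duration per child.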

  18. A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech.

    Science.gov (United States)

    Toth, Laszlo; Hoffmann, Ildiko; Gosztolya, Gabor; Vincze, Veronika; Szatloczki, Greta; Banreti, Zoltan; Pakaski, Magdolna; Kalman, Janos

    2018-01-01

    Even today, the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study was to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during a memory task. In the future, this could form the basis of an Internet-based interactive screening software for the recognition of MCI. Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was provoked by asking the patients to recall the content of two short black-and-white films (one immediate, one delayed recall) and to answer one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software) and then automatically, with an automatic speech recognition (ASR) based tool. The extracted parameters were first statistically analyzed; we then applied machine learning algorithms to see whether the MCI and control groups could be discriminated automatically based on the acoustic features. The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task and in the number of pauses in the question-answering task. The fully automated version of the analysis process, using the ASR-based features in combination with machine learning, was able to separate the two classes with an F1-score of 78.8%.
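    Several of the temporal parameters used here (hesitation ratio, pause counts, total speech time) can be sketched from a speech/pause segmentation such as an ASR-based tool might output. The label scheme and durations below are hypothetical, not the study's actual feature definitions.

```python
# Sketch: temporal features from a (label, duration) segmentation.

def temporal_features(segments):
    """segments: (label, duration_sec) pairs with label in
    {'speech', 'silent_pause', 'filled_pause'}."""
    total = sum(d for _, d in segments)
    pause = sum(d for lab, d in segments if lab.endswith('pause'))
    return {
        'hesitation_ratio': pause / total,
        'n_silent_pauses': sum(1 for lab, _ in segments if lab == 'silent_pause'),
        'n_filled_pauses': sum(1 for lab, _ in segments if lab == 'filled_pause'),
        'speech_time_sec': total - pause,
    }

feats = temporal_features([
    ('speech', 2.0), ('silent_pause', 0.5),
    ('speech', 1.0), ('filled_pause', 0.5),
])
print(feats['hesitation_ratio'])  # → 0.25 (1.0 s of pause in 4.0 s)
```

    Feature vectors of this kind, one per recall task and participant, are what the statistical tests and classifiers in studies like this one operate on.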

  19. THE EMPIRICAL RESEARCH OF PROSODIC ORGANIZATION OF TROPES IN SPONTANEOUS PUBLIC SPEECH

    Directory of Open Access Journals (Sweden)

    Zubkova, O.S.

    2016-09-01

    The article describes the features of the prosodic organization of tropes in spontaneous public speech, identified on the basis of empirical research. It analyzes the role of melody and intonemes in the understanding of tropes, determines the value of pauses in the formation and perception of tropes, and studies the effect of prosodic factors on the interpretation of tropes.

  20. Verb retrieval in action naming and spontaneous speech in agrammatic and anomic aphasia

    NARCIS (Netherlands)

    Bastiaanse, R; Jonkers, R

    1998-01-01

    The production of verbs in an action naming test and in spontaneous speech was evaluated in 16 aphasic patients: eight agrammatic and eight anomic speakers. Action naming was also compared to object naming. The action naming test was controlled for factors known to be relevant to verb retrieval.

  1. A Danish phonetically annotated spontaneous speech corpus (DanPASS)

    DEFF Research Database (Denmark)

    Grønnum, Nina

    2009-01-01

    A corpus is described consisting of non-scripted monologues and dialogues, recorded by 27 speakers, comprising a total of 73,227 running words, corresponding to 9 h and 46 min of speech. The monologues were recorded as one-way communication with an unseen partner in which the speaker performed three […]-narrow phonetic notation, a symbolic representation of the pitch relation between each stressed and post-tonic syllable, and a symbolic representation of the phrasal intonation.

  2. Finite Verb Morphology in the Spontaneous Speech of Dutch-Speaking Children With Hearing Loss.

    Science.gov (United States)

    Hammer, Annemiek; Coene, Martine

    2016-01-01

    In this study, the acquisition of Dutch finite verb morphology is investigated in children with cochlear implants (CIs) and profound hearing loss, and in children with hearing aids (HAs) and moderate to severe hearing loss. Comparing these two groups of children increases our insight into how hearing experience and audibility affect the acquisition of morphosyntax. Spontaneous speech samples of 48 children with CIs and 29 children with HAs, ages 4 to 7 years, were analyzed by means of standardized language analysis involving mean length of utterance, the number of finite verbs produced, and target-like subject-verb agreement. The outcomes were interpreted relative to expectations based on the performance of typically developing peers with normal hearing. Outcomes of all measures were correlated with hearing level in the group of HA users and with age at implantation in the group of CI users. For both groups, the number of finite verbs produced in a 50-utterance sample was on a par with mean length of utterance and at the lower bound of the normal distribution. No significant differences were found between children with CIs and HAs on any of the measures under investigation. Yet both groups produced more subject-verb agreement errors than expected for typically developing hearing peers. No significant correlation was found between the children's hearing level and the relevant measures of verb morphology, with respect to either the overall number of verbs used or the number of errors made. Within the group of CI users, the outcomes were significantly correlated with age at implantation. When producing finite verb morphology, profoundly deaf children wearing CIs perform similarly to their peers with moderate-to-severe hearing loss wearing HAs. Hearing loss negatively affects the acquisition of subject-verb agreement regardless of the hearing device (CI or HA) that the child is wearing.

  3. Measuring word complexity in speech screening: single-word sampling to identify phonological delay/disorder in preschool children.

    Science.gov (United States)

    Anderson, Carolyn; Cohen, Wendy

    2012-01-01

    Children's speech sound development is assessed by comparing speech production with the typical development of speech sounds based on a child's age and developmental profile. One widely used method of sampling is to elicit a single-word sample along with connected speech. Words produced spontaneously rather than imitated may give a more accurate indication of a child's speech development. A published word complexity measure can be used to score later-developing speech sounds and more complex word patterns. There is a need for a screening word list that is quick to administer and reliably differentiates children with typically developing speech from children with patterns of delayed or disordered speech. The aim was to identify a short word list, based on word complexity, that could be spontaneously named by most typically developing children aged 3;00-5;05 years. One hundred and five children aged between 3;00 and 5;05 years from three local authority nursery schools took part in the study. Items from a published speech assessment were modified and extended to include a range of phonemic targets in different word positions in 78 monosyllabic and polysyllabic words. The 78 words were ranked both by phonemic/phonetic complexity, as measured by word complexity, and by ease of spontaneous production. The ten most complex words (hereafter Triage 10) were named spontaneously by more than 90% of the children. There was no significant difference in the complexity measures across the five age groups identified when the data were examined in 6-month bands. A qualitative analysis revealed eight children with profiles of phonological delay or disorder. When these children were considered separately, there was a statistically significant difference in complexity scores, differentiating children with typically developing speech from those with delayed or disordered speech patterns. The Triage 10 words can be used as a screening tool for triage and general assessment and have the potential to monitor progress during intervention.
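    A toy version of a word-complexity score, loosely in the spirit of published measures such as Stoel-Gammon's Word Complexity Measure, can illustrate how words are ranked. The point values, the "late sound" set, and the orthographic approximation below are assumptions for illustration, not the study's actual measure, which scores phonemic transcriptions.

```python
# Toy word-complexity score: points for late-acquired sounds, a
# consonant cluster, and polysyllabicity. Illustrative values only.
VOWELS = set('aeiou')
LATE_SOUNDS = set('rlszv')  # hypothetical later-developing consonants

def word_complexity(word, n_syllables):
    """Crude orthography-based stand-in for a phonemic complexity score."""
    score = sum(1 for ch in word if ch in LATE_SOUNDS)
    if n_syllables > 2:
        score += 1                       # polysyllabic word
    for a, b in zip(word, word[1:]):
        if a not in VOWELS and b not in VOWELS:  # crude cluster check
            score += 1
            break
    return score

print(word_complexity("splash", 1))  # → 4 (s, l, s + the 'spl' cluster)
print(word_complexity("bee", 1))     # → 0
```

    Ranking a candidate word list by such a score, then keeping the highest-scoring words that most children can still name spontaneously, mirrors the selection logic behind the Triage 10 list.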

  4. Early Recovery of Aphasia through Thrombolysis: The Significance of Spontaneous Speech.

    Science.gov (United States)

    Furlanis, Giovanni; Ridolfi, Mariana; Polverino, Paola; Menichelli, Alina; Caruso, Paola; Naccarato, Marcello; Sartori, Arianna; Torelli, Lucio; Pesavento, Valentina; Manganotti, Paolo

    2018-03-22

    Aphasia is one of the most devastating stroke-related consequences for social interaction and daily activities. Aphasia recovery in acute stroke depends on the degree of reperfusion after thrombolysis or thrombectomy. As aphasia assessment tests are often time-consuming for patients with acute stroke, physicians have been developing rapid and simple tests. The aim of our study is to evaluate the improvement of language functions in the earliest stage in patients treated with thrombolysis and in nontreated patients using our rapid screening test. Our study is a single-center prospective observational study conducted at the Stroke Unit of the University Medical Hospital of Trieste (January-December 2016). Patients treated with thrombolysis and nontreated patients underwent 3 aphasia assessments through our rapid screening test (at baseline, 24 hours, and 72 hours). The screening test assesses spontaneous speech, oral comprehension of words, reading aloud and comprehension of written words, oral comprehension of sentences, naming, repetition of words and a sentence, and writing words. The study included 40 patients: 18 patients treated with thrombolysis and 22 nontreated patients. Both groups improved over time. Among all language parameters, spontaneous speech was statistically significant between 24 and 72 hours (P value = .012), and between baseline and 72 hours (P value = .017). Our study demonstrates that patients treated with thrombolysis experience greater improvement in language than the nontreated patients. The difference between the 2 groups is increasingly evident over time. Moreover, spontaneous speech is the parameter marked by the greatest improvement. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  5. Good and bad in the hands of politicians: spontaneous gestures during positive and negative speech.

    Directory of Open Access Journals (Sweden)

    Daniel Casasanto

    2010-07-01

    According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether 'body-specific' associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences. We analyzed speech and gesture (3012 spoken clauses, 1747 gestures) from the final debates of the 2004 and 2008 US presidential elections, which involved two right-handers (Kerry, Bush) and two left-handers (Obama, McCain). Blind, independent coding of speech and gesture allowed objective hypothesis testing. Right- and left-handed candidates showed contrasting associations between gesture and speech. In both of the left-handed candidates, left-hand gestures were associated more strongly with positive-valence clauses and right-hand gestures with negative-valence clauses; the opposite pattern was found in both right-handed candidates. Speakers associate positive messages more strongly with dominant-hand gestures and negative messages with non-dominant-hand gestures, revealing a hidden link between action and emotion. This pattern cannot be explained by conventions in language or culture, which associate 'good' with 'right' but not with 'left'; rather, the results support and extend the body-specificity hypothesis. Furthermore, the results suggest that the hand speakers use to gesture may have unexpected (and probably unintended) communicative value, providing the listener with a subtle index of how the speaker feels about the content of the co-occurring speech.

  6. SOFTWARE EFFORT ESTIMATION FRAMEWORK TO IMPROVE ORGANIZATION PRODUCTIVITY USING EMOTION RECOGNITION OF SOFTWARE ENGINEERS IN SPONTANEOUS SPEECH

    Directory of Open Access Journals (Sweden)

    B.V.A.N.S.S. Prabhakar Rao

    2015-10-01

    Productivity is very important to any organisation in general and to the software industry in particular, and software effort estimation is nowadays a challenging task; effort and productivity are interrelated, and both depend on the employees of the organization. Every organisation requires emotionally stable employees for seamless and progressive working. In other industries this may be achievable with less manpower, but software project development is a labour-intensive activity: each line of code is delivered by a software engineer, with tools and techniques acting only as aids. The software industry has long struggled with success rates, facing problems in delivering projects on time and within the estimated budget. To estimate the required effort for a project, it is therefore significant to know the emotional state of the team members. The responsibility of ensuring emotional contentment falls on the human resources department, which can deploy a series of systems to carry out such a survey. This analysis can be done using a variety of tools, one of which is the study of emotion recognition. The data needed for this are readily available and collectable, and can be an excellent source for feedback systems. The challenge of recognizing emotion in speech is convoluted, primarily due to noisy recording conditions, variation in sentiment across the sample space, and the exhibition of multiple emotions in a single sentence. Ambiguity in the labels of the training set further increases the complexity of the problem. Existing probabilistic models have dominated the field but present a flaw in scalability due to statistical inefficiency. The problem of sentiment prediction in spontaneous speech can thus be addressed using a hybrid system built around a Convolutional Neural Network.

  7. Spontaneous speech: Quantifying daily communication in Spanish-speaking individuals with aphasia.

    Directory of Open Access Journals (Sweden)

    Silvia Martínez-Ferreiro

    2015-04-01

    Full Text Available Observable disruptions in spontaneous speech are among the most prominent characteristics of aphasia. The potential of language production analyses in discourse contexts to reveal subtle language deficits has been progressively exploited, becoming essential for diagnosing language disorders (Vermeulen et al., 1989; Goodglass et al., 2000; Prins and Bastiaanse, 2004; Jaecks et al., 2012). Based on previous studies, short and/or fragmentary utterances, and consequently a shorter MLU, are expected in the speech of individuals with aphasia, together with a large proportion of incomplete sentences and a limited use of embeddings. Fewer verbs with lower diversity (lower type/token ratio) and fewer internal arguments are also predicted, as well as a low proportion of inflected verbs (Bastiaanse and Jonkers, 1998). However, this profile comes mainly from the study of individuals with prototypical aphasia types, chiefly Broca's aphasia, raising the question of how accurately spontaneous speech analysis can pinpoint deficits in individuals with less clear diagnoses. To address this question, we present the results of a spontaneous speech analysis of 25 Spanish-speaking subjects: 10 individuals with aphasia (IWAs), 7 male and 3 female (mean age: 64.2), in neurologically stable condition (> 1 year post-onset), who had suffered a single CVA in the left hemisphere (Rosell, 2005), and 15 non-brain-damaged matched speakers (NBDs). In the aphasia group, 7 of the participants were diagnosed as non-fluent (1 motor aphasia, 4 transcortical motor aphasia or motor aphasia with signs of transcorticality, 2 mixed aphasia with motor predominance), and 3 of them as fluent (mixed aphasia with anomic predominance). The protocol for data collection included semi-standardized interviews, in which participants were asked 3 questions evoking past, present, and future events (last job, holidays, and hobbies). 300 words per participant were analyzed. The MLU over the total 300 words revealed a decreased
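For illustration, an MLU figure like the one reported above can be computed with a short sketch; the utterance segmentation below (splitting on sentence-final punctuation) is a simplifying assumption, not the transcription protocol used in the study.

```python
# Hypothetical sketch of mean length of utterance (MLU) in words over a
# transcribed speech sample. Assumption: utterance boundaries are marked
# with sentence-final punctuation, which simplifies real transcription.
import re

def mlu_in_words(transcript: str) -> float:
    """Mean number of words per utterance."""
    utterances = [u for u in re.split(r"[.!?]+", transcript) if u.strip()]
    if not utterances:
        return 0.0
    word_counts = [len(u.split()) for u in utterances]
    return sum(word_counts) / len(word_counts)

sample = "I went home. Then we ate. Good!"
print(round(mlu_in_words(sample), 2))  # 3 utterances of 3, 3, 1 words -> 2.33
```

A shorter MLU in the aphasia group would show up directly as a lower value of this ratio over the 300-word samples.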

  8. Functionally Equivalent Variants in a Non-standard Variety and Their Implications for Universal Grammar: A Spontaneous Speech Corpus

    OpenAIRE

    Evelina Leivada; Elena Papadopoulou; Natalia Pavlou

    2017-01-01

    Findings from the field of experimental linguistics have shown that a native speaker may judge a variant that is part of her grammar as unacceptable, but still use it productively in spontaneous speech. The process of eliciting acceptability judgments from speakers of non-standard languages is sometimes clouded by factors akin to prescriptive notions of grammatical correctness. It has been argued that standardization enhances the ability to make clear-cut judgments, while non-standardization ...

  9. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    Science.gov (United States)

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features, and frequency domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker, and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time.
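As a toy illustration of the learning-based classification this record describes, here is a minimal single-layer perceptron in pure Python. It is a hedged stand-in for the MLP/DNN architectures named above, not the authors' implementation, and the "feature" points and labels are invented.

```python
# Minimal single-layer perceptron: a hypothetical stand-in for the MLP
# classifiers described in the record. The 2-D "speech feature" points
# below are invented toy data.
def train_perceptron(data, labels, epochs=50, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update rule
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Linearly separable toy classes, so the perceptron converges
data = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(data, labels)
print([predict(w, b, x) for x in data])  # [0, 0, 1, 1]
```

In the record's pipeline the inputs would instead be spectral/prosodic feature vectors and the class information would come from clustering and manual labeling.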

  10. Spontaneous Speech Events in Two Speech Databases of Human-Computer and Human-Human Dialogs in Spanish

    Science.gov (United States)

    Rodriguez, Luis J.; Torres, M. Ines

    2006-01-01

    Previous works in English have revealed that disfluencies follow regular patterns and that incorporating them into the language model of a speech recognizer leads to lower perplexities and sometimes to a better performance. Although work on disfluency modeling has been applied outside the English community (e.g., in Japanese), as far as we know…

  11. Omission of definite and indefinite articles in the spontaneous speech of agrammatic speakers with Broca's aphasia

    NARCIS (Netherlands)

    Havik, E.; Bastiaanse, Y.R.M.

    2004-01-01

    Background: Cross-linguistic investigation of agrammatic speech in speakers of different languages allows us to test theoretical accounts of the nature of agrammatism. A significant feature of the speech of many agrammatic speakers is a problem with article production. Mansson and Ahlsen (2001)

  12. Monitoring Progress in Vocal Development in Young Cochlear Implant Recipients: Relationships between Speech Samples and Scores from the Conditioned Assessment of Speech Production (CASP)

    Science.gov (United States)

    Ertmer, David J.; Jung, Jongmin

    2012-01-01

    Purpose: To determine the concurrent validity of the Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) and data obtained from speech samples recorded at the same intervals. Method: Nineteen children who are deaf who received cochlear implants before their 3rd birthdays participated in the study. Speech samples and…

  13. Relations among questionnaire and experience sampling measures of inner speech: A smartphone app study

    Directory of Open Access Journals (Sweden)

    Ben eAlderson-Day

    2015-04-01

    Full Text Available Inner speech is often reported to be a common and central part of inner experience, but its true prevalence is unclear. Many questionnaire-based measures appear to lack convergent validity, and it has been claimed that they overestimate inner speech in comparison to experience sampling methods (which involve collecting data at random timepoints). The present study compared self-reports of inner speech collected via a general questionnaire and via experience sampling, using data from a custom-made smartphone app (Inner Life). Fifty-one university students completed a generalized self-report measure of inner speech (the Varieties of Inner Speech Questionnaire, or VISQ) and responded to at least 7 random alerts to report on incidences of inner speech over a 2-week period. Correlations and pairwise comparisons were used to compare generalized endorsements and randomly sampled scores for each VISQ subscale. Significant correlations were observed between general and randomly sampled measures for only 2 of the 4 VISQ subscales, and endorsements of inner speech with evaluative or motivational characteristics did not correlate at all across the different measures. Endorsement of inner speech items was significantly lower for random sampling than for generalized self-report, for all VISQ subscales. Exploratory analysis indicated that specific inner speech characteristics were also related to anxiety and future-oriented thinking.

  14. [Spontaneous speech prosody and discourse analysis in schizophrenia and Fronto Temporal Dementia (FTD) patients].

    Science.gov (United States)

    Martínez, Angela; Felizzola Donado, Carlos Alberto; Matallana Eslava, Diana Lucía

    2015-01-01

    Patients with schizophrenia and frontotemporal dementia (FTD) in its linguistic variants share some language characteristics, such as lexical access difficulties and disordered speech with disruptions, many pauses, interruptions, and reformulations. For schizophrenia patients this reflects a difficulty of affect expression, while for FTD patients it reflects a linguistic issue. This study, through an analysis of a series of cases assessed in both the Memory Clinic and the Mental Health Unit of HUSI-PUJ (Hospital Universitario San Ignacio), with additional language assessment (discourse analysis and acoustic analysis), presents distinctive features of FTD in its linguistic variants and of schizophrenia that will guide the specialist in finding early markers for a differential diagnosis. In patients with the language variants of FTD, 100% of cases showed difficulty understanding complex linguistic structures, as well as important speech fluency problems. In patients with schizophrenia, there were significant alterations in the expression of the suprasegmental elements of speech, as well as disruptions in discourse. We show how in-depth language assessment allows us to reassess some of the rules for the speech and prosody analysis of patients with dementia and schizophrenia, and we suggest how elements of speech are useful in guiding the diagnosis and in correlating functional compromise in everyday psychiatric practice. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.

  15. Functionally Equivalent Variants in a Non-standard Variety and Their Implications for Universal Grammar: A Spontaneous Speech Corpus

    Directory of Open Access Journals (Sweden)

    Evelina Leivada

    2017-07-01

    Full Text Available Findings from the field of experimental linguistics have shown that a native speaker may judge a variant that is part of her grammar as unacceptable, but still use it productively in spontaneous speech. The process of eliciting acceptability judgments from speakers of non-standard languages is sometimes clouded by factors akin to prescriptive notions of grammatical correctness. It has been argued that standardization enhances the ability to make clear-cut judgments, while non-standardization may result in grammatical hybridity, often manifested in the form of functionally equivalent variants in the repertoire of a single speaker. Recognizing the importance of working with corpora of spontaneous speech, this work investigates patterns of variation in the spontaneous production of five neurotypical, adult speakers of a non-standard variety in terms of three variants, each targeting one level of linguistic analysis: syntax, morphology, and phonology. The results reveal the existence of functionally equivalent variants across speakers and levels of analysis. We first discuss these findings in relation to the notions of competing, mixed, and fused grammars, and then we flesh out the implications that different values of the same variant carry for parametric approaches to Universal Grammar. We observe that intraspeaker realizations of different values of the same variant within the same syntactic environment are incompatible with the ‘triggering-a-single-value’ approach of parametric models, but we argue that they are compatible with the concept of Universal Grammar itself. Since the analysis of these variants is ultimately a way of investigating the status of Universal Grammar primitives, we conclude that claims about the alleged unfalsifiability of (the contents of) Universal Grammar are unfounded.

  17. The retrieval and inflection of verbs in the spontaneous speech of fluent aphasic speakers

    NARCIS (Netherlands)

    Bastiaanse, Y.R.M.

    Fluent aphasia of the anomic and Wernicke's type is characterized by word retrieval difficulties. However, in fluent aphasic speech, grammatical deviations have been observed as well. There is debate as to whether these grammatical problems are caused by the word retrieval deficit, by an additional

  18. Speech Disorders in Neurofibromatosis Type 1: A Sample Survey

    Science.gov (United States)

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Background: Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10 000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. Aims: This study serves as a pilot to identify key…

  19. Polynomial Modeling of Child and Adult Intonation in German Spontaneous Speech

    Science.gov (United States)

    de Ruiter, Laura E.

    2011-01-01

    In a data set of 291 spontaneous utterances from German 5-year-olds, 7-year-olds, and adults, nuclear pitch contours were labeled manually using the GToBI annotation system. Ten different contour types were identified. The fundamental frequency (F0) of these contours was modeled using third-order orthogonal polynomials, following an approach similar…
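The polynomial modeling step can be sketched as an ordinary least-squares fit of a third-order polynomial to F0 samples. Note this sketch uses plain monomials via the normal equations, whereas the study used orthogonal polynomials, so it is an illustrative approximation only.

```python
# Hedged sketch: least-squares fit of a cubic to F0 samples via the
# normal equations (A^T A) c = A^T y, basis [1, x, x^2, x^3].
def polyfit3(xs, ys):
    A = [[x ** j for j in range(4)] for x in xs]
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(4)]
           for i in range(4)]
    aty = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(4)]
    # Gaussian elimination with partial pivoting
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 4):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 4):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * 4
    for r in range(3, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, 4))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs

# Recover a known cubic: y = 2 + 3x - x^2 + 0.5x^3
xs = [0, 1, 2, 3, 4, 5]
ys = [2 + 3 * x - x ** 2 + 0.5 * x ** 3 for x in xs]
print([round(c, 6) for c in polyfit3(xs, ys)])  # ≈ [2.0, 3.0, -1.0, 0.5]
```

Orthogonal polynomials (as in the study) yield the same fitted contour but with uncorrelated coefficients, which is why they are preferred for comparing contour shapes across groups.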

  20. Lexical Access in Persian Normal Speakers: Picture Naming, Verbal Fluency and Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Zahra Sadat Ghoreishi

    2014-06-01

    Full Text Available Objectives: Lexical access is the process by which the basic conceptual, syntactic, and morpho-phonological information of words is activated. Most studies of lexical access have focused on picture naming; there is hardly any previous research on other parameters of lexical access, such as verbal fluency and analysis of connected speech, in Persian normal participants. This study investigates lexical access performance in normal speakers in relation to age, sex, and education. Methods: The performance of 120 adult Persian speakers on three tasks, including picture naming, verbal fluency, and connected speech, was examined using the "Persian Lexical Access Assessment Package". The performance of participants was compared across two gender groups (male/female), three education groups (below 5 years, between 5 and 12 years, above 12 years), and three age groups (18-35 years, 36-55 years, 56-75 years). Results: According to the findings, picture naming improved with increasing education and declined with increasing age. The performance of participants in phonological and semantic verbal fluency showed improvement with age and education. No significant difference was seen between males and females in the verbal fluency task. In the analysis of connected speech there were no significant differences between the age and education groups, and only mean length of utterance was significantly higher in males than in females. Discussion: The findings could serve as a preliminary baseline for comparison between normal subjects and patients on lexical access tasks; furthermore, they could guide the planning of treatment goals for patients with word-finding problems according to age, gender, and education.

  1. Speech acts: sampling the social construction of mental retardation in everyday life.

    Science.gov (United States)

    Danforth, S; Navarro, V

    1998-02-01

    A sample of speech acts in everyday discourse referring to persons or events having to do with the term mental retardation was analyzed in order to investigate the belief that language use both constructs and reflects cultural norms that define the social roles of persons reduced to object status through categorical membership. Speech acts gathered suggest four emergent themes: the discourse of category membership, the dichotomy of normal and abnormal, issues of place and space, and fear. These themes were explicated from a social constructionist perspective, displaying the way speech acts construct mental retardation and subvert individuals with the label into demeaned and ridiculed objects of cultural fear.

  2. Feasibility of automated speech sample collection with stuttering children using interactive voice response (IVR) technology.

    Science.gov (United States)

    Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena

    2015-04-01

    To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were 10 6-year-old stuttering children. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability in terms of intra-class correlation between the video and telephone acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.
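The intra-class correlation used above to compare telephone- and video-acquired scores can be illustrated with a one-way ICC(1,1) computed from an ANOVA decomposition. The record does not specify which ICC form was used, so this variant and the numbers below are assumptions.

```python
# Hedged sketch: one-way intra-class correlation ICC(1,1), one plausible
# way to compare two acquisition methods. Scores below are invented.
from statistics import mean

def icc1(ratings):
    # ratings: one row per subject, each row holds k method/rater scores
    n = len(ratings)
    k = len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between-subjects
    msw = sum((v - row_means[i]) ** 2
              for i, row in enumerate(ratings) for v in row) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Two methods agreeing closely on five subjects -> ICC near 1
scores = [[10, 11], [20, 19], [30, 31], [40, 39], [50, 51]]
print(round(icc1(scores), 3))  # 0.998
```

High values like this would correspond to the "high level of relative reliability" reported for the conversation task.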

  3. Discourse Intonation and Information Structure: An Empirical Study of Existential There Constructions in Non-native Spontaneous Speech

    Directory of Open Access Journals (Sweden)

    Nagy Judit

    2016-12-01

    Full Text Available The management of given and new information is one of the key components of accomplishing coherence in oral discourse, which is claimed to be a problematic area for language learners (Celce-Murcia, Dörnyei, and Thurrell 1995: 14). Research on discourse intonation proposes that instead of the given/new dichotomy, givenness should be viewed as a continuum, with different types of accessibility (Baumann & Grice 2006). Moreover, Prince (1992) previously categorized information structure into Hearer-old/Hearer-new and Discourse-old/Discourse-new information. There is consensus on the fact that focus or prominence associated with new information is marked with nuclear pitch accent and its main acoustic cue, fundamental frequency (f0) (Ward & Birner 2001: 120). Non-native intonation has been reported to display numerous differences in f0 range and patterns compared to native speech (Wennerstrom 1994; Baker 2010). This study is an attempt to address the issue of marking information structure in existential there sentences by means of f0 in non-native spontaneous speech. Data originate from task-based interactions in the Wildcat Corpus of Native- and Foreign-Accented English (Van Engen et al. 2010). This paper examines two issues: (1) information structure in relation to the notions of givenness and different types of accessibility (Baumann & Grice 2006) and to Prince's (1992) multidimensional taxonomy, and (2) the use of f0 peaks to mark the prominence of new information. Several differences were measured among native speakers regarding the use of f0, sentence type, and complexity.
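A minimal sketch of locating f0 peaks in a pitch track, as used above to mark prominence, follows. Real prosodic analysis would additionally align peaks with accented syllables; the threshold and track values here are invented.

```python
# Hedged sketch: f0 peaks as strict local maxima above a floor value.
# The track below is an invented pitch contour in Hz.
def f0_peaks(track, floor=0.0):
    peaks = []
    for i in range(1, len(track) - 1):
        if track[i] > floor and track[i - 1] < track[i] > track[i + 1]:
            peaks.append(i)
    return peaks

track = [100, 120, 180, 150, 140, 200, 190, 100]
print(f0_peaks(track))  # [2, 5]
```

In a study like this one, the f0 values at such peaks would then be compared across information-structure categories (given vs. new).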

  4. A multimodal dataset of spontaneous speech and movement production on object affordances.

    Science.gov (United States)

    Vatakis, Argiro; Pastra, Katerina

    2016-01-19

    In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization, mainly due to the restricted, stimulus-bound data collection methodologies adopted. To date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of 'thinking aloud', spontaneously generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.

  5. Multichannel infinite clipping as a form of sampling of speech signals

    International Nuclear Information System (INIS)

    Guidarelli, G.

    1985-01-01

    A remarkable improvement in both the intelligibility and the naturalness of infinitely clipped speech can be achieved by means of a multichannel system in which the speech signal is split into several band-pass channels before clipping and subsequently reconstructed by summing the clipped outputs of each channel. A possible explanation of this improvement is given, based on the so-called zero-based representation of band-limited signals, where the zero-crossing sequence is considered as a set of samples of the signal.
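The multichannel clipping scheme can be sketched as follows. The crude two-band moving-average filter bank below stands in for the proper band-pass channels of the paper, so this is an illustrative approximation only.

```python
# Hedged sketch of multichannel infinite clipping: split the signal into a
# low and a high band (a crude moving-average filter bank, an assumption --
# the paper uses true band-pass channels), clip each channel to its sign,
# then sum the clipped channels to reconstruct.
import math

def moving_average(x, w=5):
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def clipped_multichannel(x):
    low = moving_average(x)                       # smoothed "low band"
    high = [xi - li for xi, li in zip(x, low)]    # residual "high band"
    return [sign(l) + sign(h) for l, h in zip(low, high)]

# A low-frequency tone plus a high-frequency ripple
x = [math.sin(2 * math.pi * i / 64) + 0.3 * math.sin(2 * math.pi * i / 4)
     for i in range(128)]
y = clipped_multichannel(x)
print(min(y), max(y))  # each sample sums two clipped (+/-1 or 0) channels
```

Single-channel infinite clipping would keep only `sign(x)`; summing per-band clipped outputs preserves more of the zero-crossing structure of each band, which is the intuition the abstract appeals to.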

  6. Do long-term tongue piercings affect speech quality?

    Science.gov (United States)

    Heinen, Esther; Birkholz, Peter; Willmes, Klaus; Neuschaefer-Rube, Christiane

    2017-10-01

    To explore possible effects of tongue piercing on perceived speech quality. Using a quasi-experimental design, we analyzed the effect of tongue piercing on speech in a perception experiment. Samples of spontaneous speech and read speech were recorded from 20 long-term pierced and 20 non-pierced individuals (10 males, 10 females each). The individuals having a tongue piercing were recorded with attached and removed piercing. The audio samples were blindly rated by 26 female and 20 male laypersons and by 5 female speech-language pathologists with regard to perceived speech quality along 5 dimensions: speech clarity, speech rate, prosody, rhythm and fluency. We found no statistically significant differences for any of the speech quality dimensions between the pierced and non-pierced individuals, neither for the read nor for the spontaneous speech. In addition, neither length nor position of piercing had a significant effect on speech quality. The removal of tongue piercings had no effects on speech performance either. Rating differences between laypersons and speech-language pathologists were not dependent on the presence of a tongue piercing. People are able to perfectly adapt their articulation to long-term tongue piercings such that their speech quality is not perceptually affected.

  7. Parasitological stool sample exam by spontaneous sedimentation method using conical tubes: effectiveness, practice, and biosafety

    Directory of Open Access Journals (Sweden)

    Steveen Rios Ribeiro

    2012-06-01

    Full Text Available INTRODUCTION: Spontaneous sedimentation is an important procedure for stool examination. A modification of this technique using conical tubes was performed and evaluated. METHODS: Fifty fecal samples were processed in sedimentation glasses and in polypropylene conical tubes. Another 50 samples were used for quantitative evaluation of protozoan cysts. RESULTS: Although no significant differences occurred in the frequency of protozoa and helminths detected, significant differences in protozoan cyst counts did occur. CONCLUSIONS: The use of the tube provides a shorter sedimentation path for the sample, increases the concentration of parasites for microscopy analysis, minimizes the risk of contamination, reduces odor, and optimizes the workspace.

  8. Minimal Pair Distinctions and Intelligibility in Preschool Children with and without Speech Sound Disorders

    Science.gov (United States)

    Hodge, Megan M.; Gotzke, Carrie L.

    2011-01-01

    Listeners' identification of young children's productions of minimally contrastive words and predictive relationships between accurately identified words and intelligibility scores obtained from a 100-word spontaneous speech sample were determined for 36 children with typically developing speech (TDS) and 36 children with speech sound disorders…

  9. Characterizing Intonation Deficit in Motor Speech Disorders: An Autosegmental-Metrical Analysis of Spontaneous Speech in Hypokinetic Dysarthria, Ataxic Dysarthria, and Foreign Accent Syndrome

    Science.gov (United States)

    Lowit, Anja; Kuschmann, Anja

    2012-01-01

    Purpose: The autosegmental-metrical (AM) framework represents an established methodology for intonational analysis in unimpaired speaker populations but has found little application in describing intonation in motor speech disorders (MSDs). This study compared the intonation patterns of unimpaired participants (CON) and those with Parkinson's…

  10. Semiparametric model and inference for spontaneous abortion data with a cured proportion and biased sampling.

    Science.gov (United States)

    Piao, Jin; Ning, Jing; Chambers, Christina D; Xu, Ronghui

    2018-01-01

    Evaluating and understanding the risk and safety of using medications for autoimmune disease in a woman during her pregnancy will help both clinicians and pregnant women to make better treatment decisions. However, utilizing spontaneous abortion (SAB) data collected in observational studies of pregnancy to derive valid inference poses two major challenges. First, the data from the observational cohort are not random samples of the target population due to the sampling mechanism. Pregnant women with early SAB are more likely to be excluded from the cohort, and there may be substantial differences between the observed SAB time and those in the target population. Second, the observed data are heterogeneous and contain a "cured" proportion. In this article, we consider semiparametric models to simultaneously estimate the probability of being cured and the distribution of time to SAB for the uncured subgroup. To derive the maximum likelihood estimators, we appropriately adjust the sampling bias in the likelihood function and develop an expectation-maximization algorithm to overcome the computational challenge. We apply the empirical process theory to prove the consistency and asymptotic normality of the estimators. We examine the finite sample performance of the proposed estimators in simulation studies and illustrate the proposed method through an application to SAB data from pregnant women. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Speech Analysis of Bengali Speaking Children with Repaired Cleft Lip & Palate

    Science.gov (United States)

    Chakrabarty, Madhushree; Kumar, Suman; Chatterjee, Indranil; Maheshwari, Neha

    2012-01-01

    The present study aims at analyzing speech samples of four Bengali speaking children with repaired cleft palates with a view to differentiate between the misarticulations arising out of a deficit in linguistic skills and structural or motoric limitations. Spontaneous speech samples were collected and subjected to a number of linguistic analyses…

  12. Assessing the reliability of the five minute speech sample against the Camberwell family interview in a chronic fatigue syndrome sample.

    Science.gov (United States)

    Band, Rebecca; Chadwick, Ella; Hickman, Hannah; Barrowclough, Christine; Wearden, Alison

    2016-05-01

    The current study aimed to examine the reliability of the Five Minute Speech Sample (FMSS) for assessing relatives' Expressed Emotion (EE), compared with the Camberwell Family Interview (CFI), in a sample of relatives of adult patients with Chronic Fatigue Syndrome (CFS). 21 relatives were recruited and completed both assessments. The CFI was conducted first for all participants, with the FMSS conducted approximately one month later. Trained raters independently coded both EE measures; high levels of rating reliability were established for both. Comparisons were conducted for overall EE status, emotional over-involvement (EOI), and criticism. The distribution of high and low EE was equivalent across the two measures, with the FMSS correctly classifying EE in 71% of cases (n=15). The correspondence between the FMSS and CFI ratings was found to be non-significant for all categorical variables. However, the number of critical comments made by relatives during the FMSS significantly correlated with the number made during the CFI. The poorest correspondence between the measures was observed for the EOI dimension. The findings suggest that the FMSS may be a useful screening tool for identifying high EE, particularly criticism, within a sample of relatives of patients with CFS. However, the two measures should not be assumed to be equivalent, and the CFI should be used where possible, particularly with respect to understanding EOI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
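The correlation reported between FMSS and CFI critical-comment counts can be illustrated with a plain Pearson correlation; the counts below are invented, not the study's data.

```python
# Hedged sketch: Pearson correlation between per-relative critical-comment
# counts from two measures. The counts are invented illustration data.
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys))
    return num / den

fmss = [0, 1, 1, 2, 3, 5]  # critical comments in the FMSS (hypothetical)
cfi = [1, 1, 2, 3, 4, 6]   # critical comments in the CFI (hypothetical)
print(round(pearson(fmss, cfi), 3))  # 0.979
```

A high correlation on this dimension, alongside weak categorical agreement, is exactly the pattern the abstract describes for criticism versus EOI.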

  13. Consonant and Syllable Structure Patterns in Childhood Apraxia of Speech: Developmental Change in Three Children

    Science.gov (United States)

    Jacks, Adam; Marquardt, Thomas P.; Davis, Barbara L.

    2006-01-01

    Changes in consonant and syllable-level error patterns of three children diagnosed with childhood apraxia of speech (CAS) were investigated in a 3-year longitudinal study. Spontaneous speech samples were analyzed to assess the accuracy of consonants and syllables. Consonant accuracy was low overall, with most frequent errors on middle- and…

  14. Do Native Speakers of North American and Singapore English Differentially Perceive Comprehensibility in Second Language Speech?

    Science.gov (United States)

    Saito, Kazuya; Shintani, Natsuko

    2016-01-01

    The current study examined the extent to which native speakers of North American and Singapore English differentially perceive the comprehensibility (ease of understanding) of second language (L2) speech. Spontaneous speech samples elicited from 50 Japanese learners of English with various proficiency levels were first rated by 10 Canadian and 10…

  15. Fostering Spontaneous Visual Attention in Children on the Autism Spectrum: A Proof-of-Concept Study Comparing Singing and Speech.

    Science.gov (United States)

    Thompson, Grace Anne; Abel, Larry Allen

    2018-01-22

    Children on the autism spectrum are reported to have lower rates of social gaze as early as toddlerhood, and this pattern persists across the lifespan. Finding ways to promote more natural and spontaneous engagement in social interactions may help to boost developmental opportunities in the child's home and community settings. This proof-of-concept study hypothesized that a video of a singer would elicit more attention to the performer, particularly to her face, than a video of her reading a story, and that the child's familiarity with the material would enhance attention. Sixteen children on the autism spectrum (7-10 years old) watched four 1-min videos comprising a favorite song or story, and an unfamiliar song and story. Eye movements were recorded, and three-way repeated measures ANOVAs examined the proportion of total valid visual dwell time and fixations, in each trial and each target area. For the proportion of both dwell time and fixation counts, children were significantly more likely to look at the performer's face and body, and less at the prop, during singing than story-telling, and when familiar rather than unfamiliar material was presented. These findings raise important issues for supporting children to naturally initiate looking toward a person's face. In summary, children on the autism spectrum may have difficulty looking at people, particularly their faces. In this study, children watched videos of someone singing or reading a story. The results show that children looked more at the person if they were singing and if the material was familiar to them. Using songs and familiar stories may be a way to help children with autism to naturally engage with others.

  16. Social and Cognitive Impressions of Adults Who Do and Do Not Stutter Based on Listeners' Perceptions of Read-Speech Samples

    Directory of Open Access Journals (Sweden)

    Lauren J. Amick

    2017-07-01

    Stuttering is a neurodevelopmental disorder characterized by frequent and involuntary disruptions during speech production. Adults who stutter are often subject to negative perceptions. The present study examined whether negative social and cognitive impressions are formed when listening to speech, even without any knowledge about the speaker. Two experiments were conducted in which naïve participants were asked to listen to and provide ratings on samples of read speech produced by adults who stutter and typically-speaking adults, without knowledge about the individuals who produced the speech. In both experiments, listeners rated speaker cognitive ability, likeability and anxiety, as well as a number of speech characteristics that included fluency, naturalness, intelligibility, the likelihood that the speaker had a speech-and-language disorder (Experiment 1 only), and rate and volume (both Experiments 1 and 2). The speech of adults who stutter was perceived to be less fluent, natural and intelligible, and to be slower and louder, than the speech of typical adults. Adults who stutter were also perceived to have lower cognitive ability, to be less likeable and to be more anxious than the typical adult speakers. Relations between speech characteristics and social and cognitive impressions were found independent of whether or not the speaker stuttered (i.e., they were found for both adults who stutter and typically-speaking adults) and did not depend on being cued that some of the speakers may have had a speech-language impairment.

  17. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    Science.gov (United States)

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples, and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT-typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use, and interpret communicative acts cautiously.

  18. Apraxia of Speech: Perceptual Analysis of Trisyllabic Word Productions across Repeated Sampling Occasions

    Science.gov (United States)

    Mauszycki, Shannon C.; Wambaugh, Julie L.; Cameron, Rosalea M.

    2012-01-01

    Purpose: Early apraxia of speech (AOS) research has characterized errors as being variable, resulting in a number of different error types being produced on repeated productions of the same stimuli. Conversely, recent research has uncovered greater consistency in errors, but there are limited data examining sound errors over time (more than one…

  19. Primary characteristics of the verbal communication of preschoolers with Specific Language Impairment in spontaneous speech

    Directory of Open Access Journals (Sweden)

    Debora Maria Befi-Lopes

    2010-01-01

    PURPOSE: To verify the phonological performance of preschoolers with Specific Language Impairment (SLI) in spontaneous speech. METHODS: The subjects were 27 children with SLI, aged between three years and five years and 11 months, who attended speech-language pathology therapy. Those who completed at least 50% of the phonological assessment (naming and word-imitation tasks) or whose speech intelligibility allowed analysis were selected. Speech samples were obtained from a pragmatics evaluation and from picture-elicited discourse. Analyses considered the use of developmental (DP) and idiosyncratic (IP) phonological processes in spontaneous speech. RESULTS: The descriptive statistics (mean DP and IP) showed large within-group variability. There was no variation in the number of processes according to age (DP: p=0.38; IP: p=0.72), but there was a prevalence of DP at all ages, in both tests (Z=-6.327; p<0.001). The occurrence of DP and IP was higher in the pragmatics evaluation (p<0.001), situation in…

  20. Stability of spontaneous speech in chronic mild aphasia

    NARCIS (Netherlands)

    Wolthuis, Nienke; Mendez Orellana, Carolina; Nouwens, Femke; Jonkers, Roel; Visch-Brink, Evy; Bastiaanse, Roelien

    2014-01-01

    In aphasia, an analysis of spontaneous speech provides opportunities to establish the linguistic and communicative abilities, to create suitable therapy plans and to measure language progress. The current study investigated the stability of spontaneous speech within an interview of ten mild aphasic…

  1. Machine Translation from Speech

    Science.gov (United States)

    Schwartz, Richard; Olive, Joseph; McCary, John; Christianson, Caitlin

    This chapter describes approaches for translation from speech. Translation from speech presents two new issues. First, of course, we must recognize the speech in the source language. Although speech recognition has improved considerably over the last three decades, it is still far from being a solved problem. In the best of conditions, when the speech comes from high-quality, carefully enunciated speech on common topics (such as speech read by a trained news broadcaster), the word error rate is typically on the order of 5%. Humans can typically transcribe speech like this with less than 1% disagreement between annotators, so even this best number is still far worse than human performance. However, the task gets much harder when anything changes from this ideal condition. Error rates rise if the topic is somewhat unusual, if the speakers are not reading (so that their speech is more spontaneous), if the speakers have an accent or speak a dialect, or if there is any acoustic degradation, such as noise or reverberation. In these cases, the word error rate can increase significantly to 20%, 30%, or higher. Accordingly, most of this chapter discusses techniques for improving speech recognition accuracy, while one section discusses techniques for integrating speech recognition with translation.
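    The word error rate cited above is conventionally computed as the word-level edit distance (substitutions, insertions, deletions) between the recognizer's hypothesis and a reference transcript, divided by the number of reference words. A minimal sketch (function name is illustrative):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") out of six reference words
wer = word_error_rate("the cat sat on the mat", "the cat sat on a mat")
```

A 5% word error rate thus means roughly one word in twenty is substituted, inserted, or deleted relative to the reference.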

  2. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  3. Can transcutaneous carbon dioxide pressure be a surrogate of blood gas samples for spontaneously breathing emergency patients? The ERNESTO experience.

    Science.gov (United States)

    Peschanski, Nicolas; Garcia, Léa; Delasalle, Emilie; Mzabi, Lynda; Rouff, Edwin; Dautheville, Sandrine; Renai, Fayrouz; Kieffer, Yann; Lefevre, Guillaume; Freund, Yonathan; Ray, Patrick

    2016-05-01

    It is known that the arterial carbon dioxide pressure (PaCO2) is useful for emergency physicians to assess the severity of dyspnoeic spontaneously breathing patients. Transcutaneous carbon dioxide pressure (PtcCO2) measurements could be a non-invasive alternative to PaCO2 measurements obtained by blood gas samples, as suggested in previous studies. This study evaluates the reliability of a new device in the emergency department (ED). We prospectively included patients presenting to the ED with respiratory distress who were breathing spontaneously or under non-invasive ventilation. We simultaneously performed arterial blood gas measurements and measurement of PtcCO2 using a sensor placed either on the forearm or the side of the chest and connected to the TCM4 CombiM device. The agreement between PaCO2 and PtcCO2 was assessed using the Bland-Altman method. Sixty-seven spontaneously breathing patients were prospectively included (mean age 70 years, 52% men) and 64 first measurements of PtcCO2 (out of 67) were analysed out of the 97 performed. Nineteen patients (28%) had pneumonia, 19 (28%) had acute heart failure and 19 (28%) had an exacerbation of chronic obstructive pulmonary disease. Mean PaCO2 was 49 mm Hg (range 22-103). The mean difference between PaCO2 and PtcCO2 was 9 mm Hg (range -47 to +54) with 95% limits of agreement of -21.8 mm Hg and 39.7 mm Hg. Only 36.3% of the measurement differences were within 5 mm Hg. Our results show that PtcCO2 measured by the TCM4 device could not replace PaCO2 obtained by arterial blood gas analysis.
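    The Bland-Altman method used above summarizes agreement between two measurement techniques by the mean of the paired differences (the bias) and the 95% limits of agreement, bias ± 1.96 times the standard deviation of the differences. A minimal sketch with made-up paired readings (not the study's data):

```python
import statistics

def bland_altman_limits(a, b):
    """Bland-Altman agreement: per-pair differences, their mean (bias),
    and 95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative (hypothetical) paired PaCO2 / PtcCO2 readings in mm Hg
pa = [40, 55, 62, 48, 70, 45]
ptc = [38, 50, 70, 47, 58, 49]
bias, lower, upper = bland_altman_limits(pa, ptc)
```

Wide limits of agreement, as reported in the study (-21.8 to +39.7 mm Hg), indicate that the two methods cannot be used interchangeably even when the mean bias looks modest.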

  4. Identification of a novel interspecific hybrid yeast from a metagenomic spontaneously inoculated beer sample using Hi-C.

    Science.gov (United States)

    Smukowski Heil, Caiti; Burton, Joshua N; Liachko, Ivan; Friedrich, Anne; Hanson, Noah A; Morris, Cody L; Schacherer, Joseph; Shendure, Jay; Thomas, James H; Dunham, Maitreya J

    2018-01-01

    Interspecific hybridization is a common mechanism enabling genetic diversification and adaptation; however, the detection of hybrid species has been quite difficult. The identification of microbial hybrids is made even more complicated, as most environmental microbes are resistant to culturing and must be studied in their native mixed communities. We have previously adapted the chromosome conformation capture method Hi-C to the assembly of genomes from mixed populations. Here, we show the method's application in assembling genomes directly from an uncultured, mixed population from a spontaneously inoculated beer sample. Our assembly method has enabled us to de-convolute four bacterial and four yeast genomes from this sample, including a putative yeast hybrid. Downstream isolation and analysis of this hybrid confirmed its genome to consist of Pichia membranifaciens and that of another related, but undescribed, yeast. Our work shows that Hi-C-based metagenomic methods can overcome the limitation of traditional sequencing methods in studying complex mixtures of genomes.

  5. Speech Acts during Friends' and Non-Friends' Spontaneous Conversations in Preschool Dyads with High-Functioning Autism Spectrum Disorder versus Typical Development

    Science.gov (United States)

    Bauminger-Zviely, Nirit; Golan-Itshaky, Adi; Tubul-Lavy, Gila

    2017-01-01

    In this study, we videotaped two 10-min. free-play interactions and coded speech acts (SAs) in peer talk of 51 preschoolers (21 ASD, 30 typical), interacting with friend versus non-friend partners. Groups were matched for maternal education, IQ (verbal/nonverbal), and CA. We compared SAs by group (ASD/typical), by partner's friendship status…

  6. Progressive apraxia of speech as a window into the study of speech planning processes.

    Science.gov (United States)

    Laganaro, Marina; Croisier, Michèle; Bagou, Odile; Assal, Frédéric

    2012-09-01

    We present a 3-year follow-up study of a patient with progressive apraxia of speech (PAoS), aimed at investigating whether the theoretical organization of phonetic encoding is reflected in the progressive disruption of speech. As decreased speech rate was the most striking pattern of disruption during the first 2 years, durational analyses were carried out longitudinally on syllables excised from spontaneous, repetition and reading speech samples. The crucial result of the present study is the demonstration of an effect of syllable frequency on duration: the progressive disruption of articulation rate did not affect all syllables in the same way, but followed a gradient that was a function of the frequency of use of syllable-sized motor programs. The combination of data from this case of PAoS with previous psycholinguistic and neurolinguistic data points to a frequency organization of syllable-sized speech-motor plans. In this study we also illustrate how PAoS can be exploited in theoretical and clinical investigations of phonetic encoding, as it represents a unique opportunity to investigate speech while it progressively deteriorates.

  7. Prosodic Contrasts in Ironic Speech

    Science.gov (United States)

    Bryant, Gregory A.

    2010-01-01

    Prosodic features in spontaneous speech help disambiguate implied meaning not explicit in linguistic surface structure, but little research has examined how these signals manifest themselves in real conversations. Spontaneously produced verbal irony utterances generated between familiar speakers in conversational dyads were acoustically analyzed…

  8. An SSNTD study of spontaneous fission fragments from the soil-gas samples of Bakreswar thermal springs

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Debasish; Ghose, Debasis; Sastri, R.C. E-mail: res@juphys.ernet.in

    2001-04-01

    During the course of investigations on radon and stable gas migration around the hot spring zone at Bakreswar, Birbhum, in India, it was noticed that the CR-39 plastic detectors used for the detection of radon revealed tracks with much bigger diameters than is usual for alpha-particle tracks. Exposed CR-39 detectors, etched using a sequential etching technique, confirmed the presence of these bigger-diameter tracks, similar in nature to the tracks formed by spontaneous fission fragments. This paper presents the results of these observations, along with histogram plots of track number versus track diameter that indicate an asymmetric distribution, as was seen for the mass distribution of spontaneous fission fragments.

  9. Speech disfluencies in children with Down Syndrome.

    Science.gov (United States)

    Eggers, Kurt; Van Eerdenbrugh, Sabine

    Speech and language development in individuals with Down syndrome is often delayed and/or disordered, and speech disfluencies appear to be more common. These disfluencies have been labeled over time as stuttering, cluttering or both, but such labels were usually derived from studies with adults or mixed age groups, often using different methodologies, making it difficult to compare findings. Therefore, the purpose of this study was to analyze and describe the speech disfluencies of a group consisting only of children with Down syndrome between 3 and 13 years of age. Participants were 26 Dutch-speaking children with DS. Spontaneous speech samples were collected and 50 utterances were analyzed for each child. Types of disfluencies were identified and classified into stuttering-like (SLD) and other disfluencies (OD). The criterion of three or more SLD per 100 syllables (cf. Ambrose & Yairi, 1999) was used to identify stuttering. Additional parameters such as mean articulation rate (MAR), ratio of disfluencies, and telescoping (cf. Coppens-Hofman et al., 2013) were used to identify cluttering and to differentiate between stuttering and cluttering. Approximately 30 percent of the children with DS between 3 and 13 years of age in this study stutter, which is much higher than the prevalence in normally developing children. Moreover, this study showed that the speech of children with DS has a different distribution of types of disfluencies than the speech of normally developing children. Although different cluttering-like characteristics were found in the speech of young children with DS, none of them could be identified as cluttering or cluttering-stuttering.
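    The Ambrose & Yairi (1999) criterion cited above is a simple rate threshold: three or more stuttering-like disfluencies (SLD) per 100 syllables. A minimal sketch of the classification rule (function name and default threshold parameter are illustrative):

```python
def is_stuttering(sld_count, syllable_count, threshold=3.0):
    """Apply the Ambrose & Yairi (1999) criterion: three or more
    stuttering-like disfluencies (SLD) per 100 syllables indicates
    stuttering."""
    if syllable_count <= 0:
        raise ValueError("syllable_count must be positive")
    sld_per_100 = 100.0 * sld_count / syllable_count
    return sld_per_100 >= threshold

# e.g. 7 SLD in a 350-syllable sample -> 2.0 per 100 syllables -> below threshold
flag = is_stuttering(7, 350)
```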

  10. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  11. Speech Problems

    Science.gov (United States)

    KidsHealth / For Teens / Speech Problems … a person's ability to speak clearly. Some Common Speech and Language Disorders: Stuttering is a problem that …

  12. The determination of 210Pb, 210Bi, 210Po by counting gross α and gross β rate of spontaneous deposited samples on Ni foil

    International Nuclear Information System (INIS)

    Wang Yuxue; Guo Dongfa; Huang Qiuhong

    2012-01-01

    The optimum spontaneous-deposition conditions for 210Bi and 210Po on Ni foil are studied in this paper, and a simultaneous or continuous method for determining 210Pb, 210Bi and 210Po in samples by counting the gross α and gross β rates of spontaneously deposited samples on Ni foil is set up. The results show that, under the conditions of a Ni foil area of 3.14 cm², an HCl concentration of 1.0 mol/L, an HCl volume of 25 mL, a constant experimental temperature of 90 ℃, a vibration frequency of 180/min, a vibration amplitude of 20 mm and a spontaneous deposition time of 60 min, 210Bi and 210Po can be simultaneously and quantitatively deposited on the Ni foil. The linear correlation coefficient between 210Po activity and its α-counting rate is 0.9998, and between 210Bi activity and its β-counting rate 0.9997. The effect of the short half-life radioisotopes 210Bi and 210Po on testing decreases when the Ni foil is left to stand for a certain length of time before measuring; in the presence of hydrazine hydrochloride and tartaric acid, many coexisting elements do not interfere with the determination. The precision of this testing technique is better than 5%, and the total recovery rate reaches 99.5%~100.5%. (authors)

  13. Speech Entrainment Compensates for Broca's Area Damage

    Science.gov (United States)

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between the SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage, in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
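    The speech-output measure above, number of different words per minute, can be sketched as a count of distinct word types in a transcript normalized to one minute; the tokenization details here are assumptions for illustration, not the study's protocol:

```python
def different_words_per_minute(transcript, duration_seconds):
    """Count distinct word types in a transcript and normalize the count
    to a one-minute rate. Tokenization: lowercase, whitespace split
    (an illustrative simplification)."""
    if duration_seconds <= 0:
        raise ValueError("duration_seconds must be positive")
    types = set(transcript.lower().split())
    return len(types) * 60.0 / duration_seconds

# 3 distinct words ("the", "cat", "dog") in 30 s -> rate of 6 per minute
rate = different_words_per_minute("the cat the dog", 30)
```

The fluency-improvement measure in the study is then the difference between this rate in the SE condition and in spontaneous speech.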

  14. A Foreign Speech Accent in a Case of Conversion Disorder

    Directory of Open Access Journals (Sweden)

    Jo Verhoeven

    2005-01-01

    Objective: The aim of this paper is to report the psychiatric, neuroradiological and linguistic characteristics of a native speaker of Dutch who developed speech symptoms which strongly resemble Foreign Accent Syndrome. Background: Foreign Accent Syndrome is a rare speech production disorder in which the speech of a patient is perceived as foreign by speakers of the same speech community. This syndrome is generally related to focal brain damage. Only in a few reported cases is the Foreign Accent Syndrome assumed to be of psychogenic and/or psychotic origin. Method: In addition to clinical and neuroradiological examinations, an extensive test battery of standardized neuropsychological and neurolinguistic investigations was carried out. Two samples of the patient's spontaneous speech were analysed and compared to a 500,000-word reference corpus of 160 normal native speakers of Dutch. Results: The patient had a prominent French accent in her pronunciation of Dutch. This accent had persisted over the past eight years and had become progressively stronger. The foreign qualities of her speech related not only to pronunciation, but also to the lexicon, syntax and pragmatics. Structural as well as functional neuroimaging did not reveal evidence that could account for the behavioural symptoms. By contrast, psychological investigations indicated conversion disorder. Conclusions: To the best of our knowledge this is the first reported case of a foreign-accent-like syndrome in conversion disorder.

  15. A Foreign Speech Accent in a Case of Conversion Disorder

    Science.gov (United States)

    Verhoeven, Jo; Mariën, Peter; Engelborghs, Sebastiaan; D’Haenen, Hugo; De Deyn, Peter

    2005-01-01

    Objective: The aim of this paper is to report the psychiatric, neuroradiological and linguistic characteristics of a native speaker of Dutch who developed speech symptoms which strongly resemble Foreign Accent Syndrome. Background: Foreign Accent Syndrome is a rare speech production disorder in which the speech of a patient is perceived as foreign by speakers of the same speech community. This syndrome is generally related to focal brain damage. Only in a few reported cases is the Foreign Accent Syndrome assumed to be of psychogenic and/or psychotic origin. Method: In addition to clinical and neuroradiological examinations, an extensive test battery of standardized neuropsychological and neurolinguistic investigations was carried out. Two samples of the patient's spontaneous speech were analysed and compared to a 500,000-word reference corpus of 160 normal native speakers of Dutch. Results: The patient had a prominent French accent in her pronunciation of Dutch. This accent had persisted over the past eight years and had become progressively stronger. The foreign qualities of her speech related not only to pronunciation, but also to the lexicon, syntax and pragmatics. Structural as well as functional neuroimaging did not reveal evidence that could account for the behavioural symptoms. By contrast, psychological investigations indicated conversion disorder. Conclusions: To the best of our knowledge this is the first reported case of a foreign-accent-like syndrome in conversion disorder. PMID:16518013

  16. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  17. Pattern recognition in speech and language processing

    CERN Document Server

    Chou, Wu

    2003-01-01

    Contents: Minimum Classification Error (MCE) Approach in Pattern Recognition, Wu Chou; Minimum Bayes-Risk Methods in Automatic Speech Recognition, Vaibhava Goel and William Byrne; A Decision Theoretic Formulation for Adaptive and Robust Automatic Speech Recognition, Qiang Huo; Speech Pattern Recognition Using Neural Networks, Shigeru Katagiri; Large Vocabulary Speech Recognition Based on Statistical Methods, Jean-Luc Gauvain; Toward Spontaneous Speech Recognition and Understanding, Sadaoki Furui; Speaker Authentication, Qi Li and Biing-Hwang Juang; HMMs for Language Processing Problems, Ri…

  18. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    This paper presents a method of speech recognition using pattern recognition techniques. Learning consists in determining the unique characteristics of a word (cepstral coefficients) by eliminating those characteristics that are different from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in the recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
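    The feature-extraction steps listed above (windowing, magnitude spectrum, cepstral coefficients) can be sketched as a real-cepstrum pipeline over overlapping frames; the frame sizes and the real-cepstrum variant here are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def real_cepstrum_frames(signal, frame_len=400, hop=160, n_coeffs=13):
    """For each overlapping frame: apply a Hamming window, take the FFT
    magnitude spectrum, and derive cepstral coefficients as the inverse
    FFT of the log-magnitude spectrum (the real cepstrum), keeping the
    low-quefrency coefficients."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        magnitude = np.abs(np.fft.rfft(frame))      # magnitude spectrum
        log_mag = np.log(magnitude + 1e-10)         # small offset avoids log(0)
        cepstrum = np.fft.irfft(log_mag)            # back to the quefrency domain
        frames.append(cepstrum[:n_coeffs])          # keep low-quefrency coefficients
    return np.array(frames)

# Example: 1 second of a 100 Hz tone sampled at 16 kHz
t = np.arange(16000) / 16000.0
coeffs = real_cepstrum_frames(np.sin(2 * np.pi * 100 * t))
```

Word templates in the dictionary would then be sequences of such coefficient vectors, compared against the coefficients of an incoming utterance.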

  19. Semi-Automated Speech Transcription System Study

    Science.gov (United States)

    1994-08-31

    System) program and was trained on the Wall Street Journal task (described in [recog1], [recog2] and [recog3]). This speech recognizer is a time… quality of Wall Street Journal data (very high) and SWITCHBOARD data (poor), but also because the type of speech in broadcast data is somewhere between the extremes of read text (the Wall Street Journal data) and spontaneous speech (SWITCHBOARD data). Dragon Systems' SWITCHBOARD recognizer obtained a…

  20. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  1. Accurate referential communication and its relation with private and social speech in a naturalistic context.

    Science.gov (United States)

    Girbau, Dolors; Boada, Humbert

    2004-11-01

    Research into human communication has been grouped under two traditions: referential and sociolinguistic. The study of a communication behavior simultaneously from both paradigms appears to be absent. Basically, this paper analyzes the use of private and social speech, through both a referential task (Word Pairs) and a naturalistic dyadic setting (Lego-set) administered to a sample of 64 children from grades 3 and 5. All children, of 8 and 10 years of age, used speech that was not adapted to the decoder, and thus ineffective for interpersonal communication, in both referential and sociolinguistic communication. Pairs of high-skill referential encoders used significantly more task-relevant social speech, that is, cognitively more complex, than did low-skill dyads in the naturalistic context. High-skill referential encoder dyads showed a trend to produce more inaudible private speech than did low-skill ones during spontaneous communication. Gender did not affect the results.

  2. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  3. Speech-to-Speech Relay Service

    Science.gov (United States)

    Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  4. Spontaneous pneumothorax

    Directory of Open Access Journals (Sweden)

    Davari R

    1996-07-01

    Full Text Available A case with bilateral spontaneous pneumothorax was presented. Etiology, mechanism, and treatment were discussed with a review of the literature. Spontaneous pneumothorax is a clinical entity resulting from a sudden nontraumatic rupture of the lung. Biach reported in 1880 that 78% of 916 patients with spontaneous pneumothorax had tuberculosis. Kjergaard emphasized in 1932 the primary importance of subpleural bleb disease. Currently the clinical spectrum of spontaneous pneumothorax seems to have entered a third era with the recognition of interstitial lung disease and AIDS as significant etiologies. Standard treatment includes observation, thoracocentesis, tube thoracostomy, chemical pleurodesis, bullectomy or wedge resection of the lung with pleural abrasion, and occasionally pleurectomy. Little information has been reported regarding the efficacy of such treatment in spontaneous pneumothorax secondary to non-bleb disease.

  5. Effects of Sampling Context on Spontaneous Expressive Language in Males with Fragile X Syndrome or Down Syndrome

    Science.gov (United States)

    Kover, Sara T.; McDuffie, Andrea; Abbeduto, Leonard; Brown, W. Ted

    2012-01-01

    Purpose: In this study, the authors examined the impact of sampling context on multiple aspects of expressive language in male participants with fragile X syndrome in comparison to male participants with Down syndrome or typical development. Method: Participants with fragile X syndrome (n = 27), ages 10-17 years, were matched groupwise on…

  6. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak Electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear Power Plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  7. Speech Characteristics and Intelligibility in Adults with Mild and Moderate Intellectual Disabilities

    NARCIS (Netherlands)

    Coppens-Hofman, Marjolein C; Terband, Hayo; Snik, Ad F M; Maassen, Ben A M

    2017-01-01

    PURPOSE: Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. METHOD: Spontaneous speech and

  8. Speech characteristics and intelligibility in adults with mild and moderate intellectual disabilities

    NARCIS (Netherlands)

    Coppens-Hofman, Marjolein; Terband, H.R.; Snik, A.F.M.; Maassen, Ben

    2016-01-01

    Purpose: Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method: Spontaneous speech and

  9. Abordagem citogenética e molecular em material de abortos espontâneos Cytogenetic and molecular evaluation of spontaneous abortion samples

    Directory of Open Access Journals (Sweden)

    Andréa Cristina de Moraes

    2005-09-01

    Full Text Available PURPOSE: to evaluate the performance of cytogenetic analysis, fluorescent in situ hybridization (FISH) and polymerase chain reaction (PCR) in the study of numerical chromosomal anomalies and in fetal sex determination of spontaneous abortion material. METHODS: cytogenetic analysis was performed on 219 spontaneous abortion specimens. Forty of these cases were also submitted to fetal sex determination using nested-PCR: 32 were selected due to failed cytogenetic culture and the other eight were selected randomly. Twenty samples were submitted to the FISH technique, using probes for chromosomes 13, 18, 21, X and Y: 13 of these samples were selected due to failed cytogenetic culture and the other seven were randomly selected. The success rate (karyotype obtained) of each technique was calculated. The success rates were compared using the chi2 test and an established p<0.05 level of significance. The

  10. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  11. Effects of the Syntactic Complexity on Speech Dysfluency of Stuttering Persian-Speaking Children and Adults in Conversational Speech

    Directory of Open Access Journals (Sweden)

    Behrooz Mahmoodi Bakhtiari

    2012-10-01

    Full Text Available Background and Aim: Recently, researchers have increasingly turned to studying the relation between stuttering and syntactic complexity. This study investigates the effect of syntactic complexity on the amount of speech dysfluency in stuttering Persian-speaking children and adults in conversational speech. The obtained results can pave the way to a better understanding of stuttering in children and adults, and to finding more appropriate treatments. Methods: In this cross-sectional study, the participants were 15 stuttering adult Persian-speakers, older than 15 years, and 15 stuttering child Persian-speakers of 4-6 years of age. First, a 30-minute sample of the spontaneous speech of each participant was obtained. Then the utterances of each person were studied with respect to the amount of dysfluency and syntactic complexity. The obtained information was analyzed using the paired-samples t-test. Results: In both groups of stuttering children and adults, there was a significant difference between the amount of dysfluency of simple and complex sentences (p<0.05). Conclusion: The results of this study showed that an increase in syntactic complexity in conversational speech increased the amount of dysfluency in stuttering children and adults. Moreover, as a result of the increase in syntactic complexity, dysfluency showed a greater increase in stuttering children than in stuttering adults.

  12. Variability and Intelligibility of Clarified Speech to Different Listener Groups

    Science.gov (United States)

    Silber, Ronnie F.

    Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research that has focused on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used. Therefore, linguistic samples were not precisely matched. In this study, experiments used various linguistic materials. Experiment 1 used a children's story; experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech) and (4) in (spontaneous) clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal-hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words, and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility. "Real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows. Nonsense sentences were less intelligible than story

  13. Neural Entrainment to Speech Modulates Speech Intelligibility

    OpenAIRE

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and acoustic speech signal, listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has prevented so far clarifying whether speech-brain entrainme...

  14. Speech Development

    Science.gov (United States)

    ... are placed in the mouth, much like an orthodontic retainer. The two most common types are 1) the speech bulb and 2) the palatal lift. The speech bulb is designed to partially close off the space between the soft palate and the throat. The palatal lift appliance serves to lift the soft palate to a ...

  15. Applying meta-pathway analyses through metagenomics to identify the functional properties of the major bacterial communities of a single spontaneous cocoa bean fermentation process sample.

    Science.gov (United States)

    Illeghems, Koen; Weckx, Stefan; De Vuyst, Luc

    2015-09-01

    A high-resolution functional metagenomic analysis of a representative single sample of a Brazilian spontaneous cocoa bean fermentation process was carried out to gain insight into its bacterial community functioning. By reconstruction of microbial meta-pathways based on metagenomic data, the current knowledge about the metabolic capabilities of bacterial members involved in the cocoa bean fermentation ecosystem was extended. Functional meta-pathway analysis revealed the distribution of the metabolic pathways between the bacterial members involved. The metabolic capabilities of the lactic acid bacteria present were most associated with the heterolactic fermentation and citrate assimilation pathways. The role of Enterobacteriaceae in the conversion of substrates was shown through the use of the mixed-acid fermentation and methylglyoxal detoxification pathways. Furthermore, several other potential functional roles for Enterobacteriaceae were indicated, such as pectinolysis and citrate assimilation. Concerning acetic acid bacteria, metabolic pathways were partially reconstructed, in particular those related to responses toward stress, explaining their metabolic activities during cocoa bean fermentation processes. Further, the in-depth metagenomic analysis unveiled functionalities involved in bacterial competitiveness, such as the occurrence of CRISPRs and potential bacteriocin production. Finally, comparative analysis of the metagenomic data with bacterial genomes of cocoa bean fermentation isolates revealed the applicability of the selected strains as functional starter cultures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
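    The simplest waveform-coding idea mentioned here can be illustrated with mu-law companding, the nonlinearity used in G.711 telephony codecs (a minimal numpy sketch, not a complete codec: it omits the 8-bit quantization and framing of the real standard):

    ```python
    import numpy as np

    def mu_law_encode(x, mu=255):
        # Compress the dynamic range of samples in [-1, 1] so that
        # quiet speech gets relatively finer quantization steps.
        return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

    def mu_law_decode(y, mu=255):
        # Exact inverse of the companding curve above.
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

    x = np.linspace(-1, 1, 5)           # toy "speech" samples
    y = mu_law_encode(x)
    print(np.allclose(mu_law_decode(y), x))  # True
    ```

    In an actual codec the companded value would be quantized to 8 bits before transmission, which is where the bit-rate saving comes from.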

  17. Use of exclusion by a chimpanzee (Pan troglodytes) during speech perception and auditory-visual matching-to-sample.

    Science.gov (United States)

    Beran, Michael J

    2010-03-01

    An adult female chimpanzee showed responding through use of exclusion in an auditory to visual matching-to-sample procedure. The chimpanzee had previously learned to associate specific visuographic symbols called lexigrams with real world referents and the spoken English words and photographs for those referents. On some trials, an unknown spoken English word was presented as the sample, and the match choices could consist of photographs or lexigrams that already were associated with known English words as well as unknown lexigrams or photos of objects without associated lexigrams. The chimpanzee reliably avoided choosing known comparisons for these unknown samples, instead relying on exclusion to choose comparisons that were of unknown lexigrams or photographs of items without associated lexigram symbols. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  18. The psychologist as an interlocutor in autism spectrum disorder assessment: insights from a study of spontaneous prosody.

    Science.gov (United States)

    Bone, Daniel; Lee, Chi-Chun; Black, Matthew P; Williams, Marian E; Lee, Sungbok; Levitt, Pat; Narayanan, Shrikanth

    2014-08-01

    The purpose of this study was to examine relationships between prosodic speech cues and autism spectrum disorder (ASD) severity, hypothesizing a mutually interactive relationship between the speech characteristics of the psychologist and the child. The authors objectively quantified acoustic-prosodic cues of the psychologist and of the child with ASD during spontaneous interaction, establishing a methodology for future large-sample analysis. Speech acoustic-prosodic features were semiautomatically derived from segments of semistructured interviews (Autism Diagnostic Observation Schedule, ADOS; Lord, Rutter, DiLavore, & Risi, 1999; Lord et al., 2012) with 28 children who had previously been diagnosed with ASD. Prosody was quantified in terms of intonation, volume, rate, and voice quality. Research hypotheses were tested via correlation as well as hierarchical and predictive regression between ADOS severity and prosodic cues. Automatically extracted speech features demonstrated prosodic characteristics of dyadic interactions. As rated ASD severity increased, both the psychologist and the child demonstrated effects for turn-end pitch slope, and both spoke with atypical voice quality. The psychologist's acoustic cues predicted the child's symptom severity better than did the child's acoustic cues. The psychologist, acting as evaluator and interlocutor, was shown to adjust his or her behavior in predictable ways based on the child's social-communicative impairments. The results support future study of speech prosody of both interaction partners during spontaneous conversation, while using automatic computational methods that allow for scalable analysis on much larger corpora.

  19. Spontaneous deregulation

    NARCIS (Netherlands)

    Edelman, Benjamin; Geradin, Damien

    Platform businesses such as Airbnb and Uber have risen to success partly by sidestepping laws and regulations that encumber their traditional competitors. Such rule flouting is what the authors call “spontaneous private deregulation,” and it’s happening in a growing number of industries. The authors

  20. Outcome Measurement Using Naturalistic Language Samples: A Feasibility Pilot Study Using Language Transcription Software and Speech and Language Therapy Assistants

    Science.gov (United States)

    Overton, Sarah; Wren, Yvonne

    2014-01-01

    The ultimate aim of intervention for children with language impairment is an improvement in their functional language skills. Baseline and outcome measurement of this is often problematic however and practitioners commonly resort to using formal assessments that may not adequately reflect the child's competence. Language sampling,…

  1. Features and machine learning classification of connected speech samples from patients with autopsy proven Alzheimer's disease with and without additional vascular pathology.

    Science.gov (United States)

    Rentoumi, Vassiliki; Raoufian, Ladan; Ahmed, Samrah; de Jager, Celeste A; Garrard, Peter

    2014-01-01

    Mixed vascular and Alzheimer-type dementia and pure Alzheimer's disease are both associated with changes in spoken language. These changes have, however, seldom been subjected to systematic comparison. In the present study, we analyzed language samples obtained during the course of a longitudinal clinical study from patients in whom one or other pathology was verified at post mortem. The aims of the study were twofold: first, to confirm the presence of differences in language produced by members of the two groups using quantitative methods of evaluation; and secondly to ascertain the most informative sources of variation between the groups. We adopted a computational approach to evaluate digitized transcripts of connected speech along a range of language-related dimensions. We then used machine learning text classification to assign the samples to one of the two pathological groups on the basis of these features. The classifiers' accuracies were tested using simple lexical features, syntactic features, and more complex statistical and information theory characteristics. Maximum accuracy was achieved when word occurrences and frequencies alone were used. Features based on syntactic and lexical complexity yielded lower discrimination scores, but all combinations of features showed significantly better performance than a baseline condition in which every transcript was assigned randomly to one of the two classes. The classification results illustrate the word content specific differences in the spoken language of the two groups. In addition, those with mixed pathology were found to exhibit a marked reduction in lexical variation and complexity compared to their pure AD counterparts.
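    The word-occurrence approach that achieved maximum accuracy can be illustrated with a toy bag-of-words pipeline (the vocabulary, transcripts, and nearest-centroid classifier below are invented stand-ins for the study's actual features and learners):

    ```python
    from collections import Counter

    def bag_of_words(transcript, vocab):
        # Word-occurrence features: raw counts over a fixed vocabulary.
        counts = Counter(transcript.lower().split())
        return [counts[w] for w in vocab]

    def nearest_centroid(train, labels, sample):
        # Toy classifier: assign the sample to the class whose mean
        # feature vector is closest in squared Euclidean distance.
        best, best_dist = None, float("inf")
        for c in set(labels):
            vecs = [v for v, l in zip(train, labels) if l == c]
            centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
            dist = sum((a - b) ** 2 for a, b in zip(sample, centroid))
            if dist < best_dist:
                best, best_dist = c, dist
        return best

    vocab = ["the", "cat", "dog", "um", "thing"]
    train = [bag_of_words("the thing um the thing", vocab),
             bag_of_words("the cat and the dog", vocab)]
    labels = ["mixed", "pure_AD"]
    result = nearest_centroid(train, labels, bag_of_words("um the thing", vocab))
    print(result)  # mixed
    ```

    Real transcripts would of course use a much larger vocabulary and a trained classifier, but the feature representation is the same idea.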

  2. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    Directory of Open Access Journals (Sweden)

    Denis Arnold

    Full Text Available Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%) without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
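    The error-driven learning in such a two-layer network can be sketched with a delta-rule (Rescorla-Wagner/Widrow-Hoff) update on a drastically scaled-down example (the cue vectors, sizes, and learning rate are invented for illustration; the actual model uses some hundred thousand acoustic input units):

    ```python
    import numpy as np

    n_cues, n_meanings = 6, 2
    W = np.zeros((n_cues, n_meanings))   # cue-to-meaning weights

    # Two "word tokens", each activating an overlapping subset of cues.
    cues = np.array([[1., 1., 1., 0., 0., 0.],
                     [0., 0., 1., 1., 1., 0.]])

    lr = 0.1
    for _ in range(200):                 # repeated exposure
        for m in range(n_meanings):
            target = np.eye(n_meanings)[m]
            pred = cues[m] @ W
            # Delta-rule update: strengthen cue-to-meaning weights in
            # proportion to the prediction error for this token.
            W += lr * np.outer(cues[m], target - pred)

    # Recognition: pick the most strongly activated meaning output.
    recognized = [int(np.argmax(cues[m] @ W)) for m in range(n_meanings)]
    print(recognized)  # [0, 1]
    ```

    Note how no intermediate phone layer is involved: acoustic cues map straight onto meaning outcomes, which is the point of the paper's architecture.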

  3. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial overlap between speech and non-speech activation in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere--to support the production of vocal tract gestures that are not limited to speech processing.

  4. The contrast between alveolar and velar stops with typical speech data: acoustic and articulatory analyses.

    Science.gov (United States)

    Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina

    2017-06-08

    This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops with typical speech data, comparing the parameters (acoustic and articulatory) of adults and children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target word (/'kapə/, /'tapə/, /'galo/ and /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data were submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective in indicating the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided cues to characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.
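    One of the acoustic measures listed, burst spectral moments, can be computed from a magnitude spectrum roughly as follows (an illustrative numpy sketch over an invented synthetic spectrum, not the study's actual analysis settings):

    ```python
    import numpy as np

    def spectral_moments(freqs, magnitudes):
        # First four spectral moments of a magnitude spectrum, commonly
        # used to characterize stop bursts: centroid (mean frequency),
        # variance, skewness, and excess kurtosis.
        p = magnitudes / magnitudes.sum()          # treat as a distribution
        centroid = np.sum(freqs * p)
        variance = np.sum(((freqs - centroid) ** 2) * p)
        sd = np.sqrt(variance)
        skewness = np.sum(((freqs - centroid) ** 3) * p) / sd ** 3
        kurtosis = np.sum(((freqs - centroid) ** 4) * p) / sd ** 4 - 3
        return centroid, variance, skewness, kurtosis

    # Example: a synthetic burst spectrum peaked near 4 kHz
    freqs = np.linspace(0, 8000, 256)
    mags = np.exp(-((freqs - 4000) ** 2) / (2 * 500 ** 2))
    c, v, s, k = spectral_moments(freqs, mags)
    print(round(c))  # ~4000 for this symmetric peak
    ```

    Alveolar and velar stops tend to differ in where this spectral energy concentrates, which is why these moments help separate the two places of articulation.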

  5. Neural entrainment to speech modulates speech intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Başkent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  6. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  7. Measuring Speech Comprehensibility in Students with Down Syndrome

    Science.gov (United States)

    Yoder, Paul J.; Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based…

  8. The Motivational Function of Private Speech: An Experimental Approach.

    Science.gov (United States)

    de Dios, M. J.; Montero, I.

    Recently, some works have been published exploring the role of private speech as a tool for motivation, reaching beyond the classical research on its regulatory function for cognitive processes such as attention or executive function. In fact, the authors' own previous research has shown that a moderate account of spontaneous private speech of…

  9. Speech Characteristics Associated with Three Genotypes of Ataxia

    Science.gov (United States)

    Sidtis, John J.; Ahn, Ji Sook; Gomez, Christopher; Sidtis, Diana

    2011-01-01

    Purpose: Advances in neurobiology are providing new opportunities to investigate the neurological systems underlying motor speech control. This study explores the perceptual characteristics of the speech of three genotypes of spino-cerebellar ataxia (SCA) as manifest in four different speech tasks. Methods: Speech samples from 26 speakers with SCA…

  10. The spontaneous use of Hebrew verb forms by Israeli preschool children with and without sli The spontaneous use of Hebrew verb forms by Israeli preschool children with and without sli

    Directory of Open Access Journals (Sweden)

    Esther Dromi

    2008-04-01

    Full Text Available In this article we present findings on the spontaneous use of verb forms by preschool Hebrew-speaking children who were diagnosed with SLI (Specific Language Impairment) and by younger, normally developing (ND-L) children who were matched by language level to the SLI group. We evaluate the spontaneous use of verb forms in obligatory contexts and compare it with previous results on the morphological abilities of SLI and ND-L children in elicitation tasks. The article reviews previously published findings on verb elicitation tasks and reports new data on the use of Hebrew verb forms in spontaneous language samples. Results indicate that HSLI (Hebrew-speaking SLI) children produce verb forms as successfully as their utterance length in morphemes leads one to expect. This is especially true when the verb forms they use belong to simple verb patterns. The difficulty HSLI children face with respect to verb morphology is selective rather than sweeping, and it is not evident in the spontaneous speech samples because in this context children avoid producing complex verb forms. The article highlights the position that in languages with rich inflectional morphology it is always useful to combine elicited and spontaneous research methods for studying the productive morphological abilities of young children.

  11. Apraxia of Speech

    Science.gov (United States)

    What is apraxia of speech? Apraxia of speech (AOS), also known as acquired ...

  12. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    Science.gov (United States)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors influence listener impressions, across three connected speech tasks presumed to differ in cognitive-linguistic demand, for four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers participated, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners…
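
    The z-score criterion mentioned in this record standardizes each raw test score against a normative mean and standard deviation, flagging a deficit in a cognitive domain at z ≤ -1.50. A minimal sketch of that computation, with purely hypothetical normative values (the study's actual norms are not given here):

```python
def z_score(raw, mean, sd):
    """Standardize a raw test score against a normative mean and SD."""
    return (raw - mean) / sd

def has_deficit(raw, mean, sd, cutoff=-1.50):
    """Flag a deficit in a cognitive domain at z <= cutoff (here -1.50)."""
    return z_score(raw, mean, sd) <= cutoff

# Hypothetical processing-speed score against a norm of mean 50, SD 10:
print(has_deficit(33, 50, 10))  # z = -1.7 -> True (deficit)
print(has_deficit(45, 50, 10))  # z = -0.5 -> False (within normal limits)
```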

  13. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red…

  14. [Spontaneous bacterial peritonitis].

    Science.gov (United States)

    Strauss, Edna; Caly, Wanda Regina

    2003-01-01

    Spontaneous bacterial peritonitis occurs in 30% of patients with ascites due to cirrhosis, leading to high morbidity and mortality rates. The pathogenesis of spontaneous bacterial peritonitis is related to altered host defenses observed in end-stage liver disease, overgrowth of microorganisms, and bacterial translocation from the intestinal lumen to mesenteric lymph nodes. Clinical manifestations vary from severe to slight or absent, demanding analysis of the ascitic fluid. The diagnosis is confirmed by a neutrophil count above 250/mm3 in an ascites sample, with or without bacterial growth in culture. Enterobacteriaceae prevail, and Escherichia coli has been the most frequently reported bacterium. Mortality rates have decreased markedly in the last two decades due to early diagnosis and prompt antibiotic treatment. Third-generation intravenous cephalosporins are effective in 70% to 95% of cases. Recurrence of spontaneous bacterial peritonitis is common and can be prevented by the continuous use of oral norfloxacin. The development of bacterial resistance demands the search for new options in the prophylaxis of spontaneous bacterial peritonitis; probiotics are a promising new approach but deserve further evaluation. Short-term antibiotic prophylaxis is recommended for patients with cirrhosis and ascites shortly after an acute episode of gastrointestinal bleeding.

  15. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test and received speech. Several tests were made on a test speech sample of a single speaker, with simulated packet (frame) loss effects on the perceived speech. The achieved results were compared with measured PESQ values on the transmission channel used, and their correlation was observed.
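
    The record above gives no implementation details; as an illustrative sketch (not the paper's code), the dynamic time warping comparison at its core can be written for two hypothetical 1-D feature sequences with an absolute-difference local cost:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# A time-stretched copy of a contour aligns at zero cost, even though a
# sample-by-sample comparison would not line up:
reference = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
stretched = [0.0, 1.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.0]
print(dtw_distance(reference, stretched))  # 0.0
```

    In a real system the inputs would be frame-level spectral features rather than raw contours, but the warping recursion is the same.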

  16. Automatic Smoker Detection from Telephone Speech Signals

    DEFF Research Database (Denmark)

    Alavijeh, Amir Hossein Poorjam; Hesaraki, Soheila; Safavi, Saeid

    2017-01-01

    This paper proposes a method for automatic smoking habit detection from spontaneous telephone speech signals. In this method, each utterance is modeled using i-vector and non-negative factor analysis (NFA) frameworks, which yield low-dimensional representations of utterances by applying factor analysis on G...

  17. Speech Enhancement

    DEFF Research Database (Denmark)

    Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    and their performance bounded and assessed in terms of noise reduction and speech distortion. The book shows how various filter designs can be obtained in this framework, including the maximum SNR, Wiener, LCMV, and MVDR filters, and how these can be applied in various contexts, like in single-channel and multichannel...

  18. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering, and diagnostics in order to quantify very different phenomena, such as the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these things.

  19. 78 FR 49717 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay...

  20. Effects of the Utterance length on Fluency of Conversational Speech in Stuttering Persian-Speaker Children and Adults

    Directory of Open Access Journals (Sweden)

    Tabassom A'zimi

    2013-10-01

    Full Text Available Objective: Recently, researchers have increasingly turned to studying the relation between stuttering and utterance length. This study investigates the effect of utterance length on the amount of speech dysfluency in conversational speech in stuttering Persian-speaking children and adults. The results can pave the way to a better understanding of stuttering in children and adults, as well as to finding more appropriate treatments. Materials & Methods: In this descriptive-analytical study, the participants were 15 stuttering Persian-speaking adults aged over 15 years and 15 stuttering Persian-speaking children aged 4-6 years. First, a 30-minute sample of each participant's spontaneous speech was collected; then each person's utterances were examined for amount of dysfluency and utterance length. The data were entered into a computer and analyzed with a paired t-test using SPSS software. Results: In both the stuttering children and the stuttering adults, the amount of dysfluency increased significantly with utterance length. Conclusion: The results showed that, as utterance length increased in spontaneous speech, both stuttering children and adults produced more dysfluencies, and the increase was similar in the two groups.
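
    The paired t-test used in studies like this compares each speaker's dysfluency under two matched conditions (here, shorter vs. longer utterances). A minimal sketch of the statistic itself, with hypothetical dysfluency counts rather than the study's data:

```python
import math

def paired_t_statistic(before, after):
    """t statistic for paired samples: mean of the per-pair differences
    divided by the standard error of that mean."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical dysfluencies per speaker in short- vs long-utterance samples:
short_utts = [2, 4, 3, 5, 4]
long_utts = [5, 6, 5, 9, 7]
print(paired_t_statistic(short_utts, long_utts))  # positive t: more dysfluency in longer utterances
```

    The p-value would then come from the t distribution with n - 1 degrees of freedom (e.g. via scipy.stats.ttest_rel in practice).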

  1. Speech repetition as a window on the neurobiology of auditory-motor integration for speech: A voxel-based lesion symptom mapping study.

    Science.gov (United States)

    Rogalsky, Corianne; Poppa, Tasha; Chen, Kuan-Hua; Anderson, Steven W; Damasio, Hanna; Love, Tracy; Hickok, Gregory

    2015-05-01

    For more than a century, speech repetition has been used as an assay for gauging the integrity of the auditory-motor pathway in aphasia, thought classically to involve a linkage between Wernicke's area and Broca's area via the arcuate fasciculus. During the last decade, evidence primarily from functional imaging in healthy individuals has refined this picture both computationally and anatomically, suggesting the existence of a cortical hub located at the parietal-temporal boundary (area Spt) that functions to integrate auditory and motor speech networks for both repetition and spontaneous speech production. While functional imaging research can pinpoint the regions activated in repetition/auditory-motor integration, lesion-based studies are needed to infer causal involvement. Previous lesion studies of repetition have yielded mixed results with respect to Spt's critical involvement in speech repetition. The present study used voxel-based lesion symptom mapping (VLSM) to investigate the neuroanatomy of repetition of both real words and non-words in a sample of 47 patients with focal left hemisphere brain damage. VLSMs identified a large voxel cluster spanning gray and white matter in the left temporal-parietal junction, including area Spt, where damage was significantly related to poor non-word repetition. Repetition of real words implicated a very similar dorsal network including area Spt. Cortical regions including Spt were implicated in repetition performance even when white matter damage was factored out. In addition, removing variance associated with speech perception abilities did not alter the overall lesion pattern for either task. Together with past functional imaging work, our results suggest that area Spt is integral in both word and non-word repetition, that its contribution is above and beyond that made by white matter pathways, and is not driven by perceptual processes alone. These findings are highly consistent with the claim that Spt is an area of

  2. Clear Speech Modifications in Children Aged 6-10

    Science.gov (United States)

    Taylor, Griffin Lijding

    Modifications to speech production made by adult talkers in response to instructions to speak clearly have been well documented in the literature. Targeting adult populations has been motivated by efforts to improve speech production for the benefit of communication partners; however, many adults also have communication partners who are children. Surprisingly, there is limited literature on whether children can change their speech production when cued to speak clearly. Pettinato, Tuomainen, Granlund, and Hazan (2016) showed that by age 12, children exhibited enlarged vowel space areas and reduced articulation rate when prompted to speak clearly, but did not produce any other adult-like clear speech modifications in connected speech. Moreover, Syrett and Kawahara (2013) suggested that preschoolers produced longer and more intense vowels when prompted to speak clearly at the word level. These findings contrast with adult talkers, who show significant temporal and spectral differences between speech produced in control and clear speech conditions. Therefore, the purpose of this study was to analyze the changes in temporal and spectral characteristics of speech production that children aged 6-10 made in these experimental conditions. It is important to elucidate the clear speech profile of this population to better understand which adult-like clear speech modifications they make spontaneously and which modifications are still developing. Understanding these baselines will advance future studies that measure the impact of more explicit instructions and children's abilities to better accommodate their interlocutors, which is a critical component of children's pragmatic and speech-motor development.

  3. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... amends telecommunications relay services (TRS) mandatory minimum standards applicable to Speech- to...

  4. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  5. Fast Keyword Spotting in Telephone Speech

    Directory of Open Access Journals (Sweden)

    J. Nouza

    2009-12-01

    Full Text Available In the paper, we present a system designed for detecting keywords in telephone speech. We focus not only on achieving high accuracy but also on very short processing time. The keyword spotting system can run in three modes: (a) an off-line mode requiring less than 0.1xRT, (b) an on-line mode with minimum (2 s) latency, and (c) a repeated spotting mode, in which pre-computed values allow for additional acceleration. Its performance is evaluated on recordings of Czech spontaneous telephone speech using rather large and complex keyword lists.
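
    For readers unfamiliar with the 0.1xRT figure: the real-time factor (xRT) divides processing time by the duration of the audio processed, so values below 1 mean faster than real time. A trivial sketch with hypothetical timings:

```python
def real_time_factor(processing_seconds, audio_seconds):
    """xRT: processing time divided by audio duration (lower is faster)."""
    return processing_seconds / audio_seconds

# Hypothetical: spotting keywords in a 60 s call takes 5 s of CPU time.
xrt = real_time_factor(5.0, 60.0)
print(xrt)            # about 0.083
print(xrt < 0.1)      # True: meets an off-line sub-0.1xRT target
```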

  6. Speech Intelligibility in Severe Adductor Spasmodic Dysphonia

    Science.gov (United States)

    Bender, Brenda K.; Cannito, Michael P.; Murry, Thomas; Woodson, Gayle E.

    2004-01-01

    This study compared speech intelligibility in nondisabled speakers and speakers with adductor spasmodic dysphonia (ADSD) before and after botulinum toxin (Botox) injection. Standard speech samples were obtained from 10 speakers diagnosed with severe ADSD prior to and 1 month following Botox injection, as well as from 10 age- and gender-matched…

  7. Standardization of Speech Corpus

    Directory of Open Access Journals (Sweden)

    Ai-jun Li

    2007-12-01

    Full Text Available Speech corpora are the basis for analyzing the characteristics of speech signals and developing speech synthesis and recognition systems. In China, almost all speech research and development affiliations are developing their own speech corpora. There are so many different kinds of Chinese speech corpora that it is important to be able to conveniently share them, to avoid wasting time and money and to make research work more efficient. The primary goal of this research is to find a standard scheme which can make corpora be established more efficiently and be used or shared more easily. A huge speech corpus of 10 regional accents of Chinese, RASC863 (a Regional Accent Speech Corpus funded by the National 863 Project), is used as an example to illuminate the standardization of speech corpus production.

  8. The Effect of Background Noise on Intelligibility of Dysphonic Speech

    Science.gov (United States)

    Ishikawa, Keiko; Boyce, Suzanne; Kelchner, Lisa; Powell, Maria Golla; Schieve, Heidi; de Alarcon, Alessandro; Khosla, Sid

    2017-01-01

    Purpose: The aim of this study is to determine the effect of background noise on the intelligibility of dysphonic speech and to examine the relationship between intelligibility in noise and an acoustic measure of dysphonia--cepstral peak prominence (CPP). Method: A study of speech perception was conducted using speech samples from 6 adult speakers…

  9. Technical foundations of TANDEM-STRAIGHT, a speech analysis ...

    Indian Academy of Sciences (India)

    Speech analysis; fundamental frequency; speech synthesis; consistent sampling; periodic signals. Abstract. This article presents comprehensive technical information about STRAIGHT and TANDEM-STRAIGHT, a widely used speech modification tool and its successor. They share the same concept: the periodic excitation ...

  10. An acoustical assessment of pitch-matching accuracy in relation to speech frequency, speech frequency range, age and gender in preschool children

    Science.gov (United States)

    Trollinger, Valerie L.

    This study investigated the relationship between acoustical measurement of singing accuracy in relationship to speech fundamental frequency, speech fundamental frequency range, age and gender in preschool-aged children. Seventy subjects from Southeastern Pennsylvania; the San Francisco Bay Area, California; and Terre Haute, Indiana, participated in the study. Speech frequency was measured by having the subjects participate in spontaneous and guided speech activities with the researcher, with 18 diverse samples extracted from each subject's recording for acoustical analysis for fundamental frequency in Hz with the CSpeech computer program. The fundamental frequencies were averaged together to derive a mean speech frequency score for each subject. Speech range was calculated by subtracting the lowest fundamental frequency produced from the highest fundamental frequency produced, resulting in a speech range measured in increments of Hz. Singing accuracy was measured by having the subjects each echo-sing six randomized patterns using the pitches Middle C, D, E, F♯, G and A (440), using the solfege syllables of Do and Re, which were recorded by a 5-year-old female model. For each subject, 18 samples of singing were recorded. All samples were analyzed by the CSpeech for fundamental frequency. For each subject, deviation scores in Hz were derived by calculating the difference between what the model sang in Hz and what the subject sang in response in Hz. Individual scores for each child consisted of an overall mean total deviation frequency, mean frequency deviations for each pattern, and mean frequency deviation for each pitch. Pearson correlations, MANOVA and ANOVA analyses, Multiple Regressions and Discriminant Analysis revealed the following findings: (1) moderate but significant (p E, F♯, G and A in the study; (2) mean speech frequency also emerged as the strongest predictor of subjects' ability to sing the notes E and F♯; (3) mean speech frequency correlated
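
    The measures this record describes (mean speech frequency, speech range as highest minus lowest fundamental frequency, and per-trial deviation scores in Hz) are straightforward to compute once F0 values have been extracted. A sketch with hypothetical values, not the study's data:

```python
def mean_frequency(f0_hz):
    """Mean fundamental frequency across extracted speech samples, in Hz."""
    return sum(f0_hz) / len(f0_hz)

def frequency_range(f0_hz):
    """Speech range: highest minus lowest fundamental frequency, in Hz."""
    return max(f0_hz) - min(f0_hz)

def deviation_score(model_hz, sung_hz):
    """Pitch-matching deviation: |model F0 - response F0|, in Hz."""
    return abs(model_hz - sung_hz)

# Hypothetical F0 extractions (Hz) from one child's speech samples:
f0 = [245.0, 260.0, 238.0, 252.0]
print(mean_frequency(f0))             # 248.75
print(frequency_range(f0))            # 22.0
print(deviation_score(261.6, 249.0))  # about 12.6 Hz off the model's pitch
```

    A per-child summary would then average such deviation scores over all echo-singing trials.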

  11. Preschool speech intelligibility and vocabulary skills predict long-term speech and language outcomes following cochlear implantation in early childhood.

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G; Beer, Jessica; Henning, Shirley C; Colson, Bethany G; Pisoni, David B

    2014-07-01

    Speech and language measures during grade school predict adolescent speech-language outcomes in children who receive cochlear implants (CIs), but no research has examined whether speech and language functioning at even younger ages is predictive of long-term outcomes in this population. The purpose of this study was to examine whether early preschool measures of speech and language performance predict speech-language functioning in long-term users of CIs. Early measures of speech intelligibility and receptive vocabulary (obtained during preschool ages of 3-6 years) in a sample of 35 prelingually deaf, early-implanted children predicted speech perception, language, and verbal working memory skills up to 18 years later. Age of onset of deafness and age at implantation added additional variance to preschool speech intelligibility in predicting some long-term outcome scores, but the relationship between preschool speech-language skills and later speech-language outcomes was not significantly attenuated by the addition of these hearing history variables. These findings suggest that speech and language development during the preschool years is predictive of long-term speech and language functioning in early-implanted, prelingually deaf children. As a result, measures of speech-language functioning at preschool ages can be used to identify and adjust interventions for very young CI users who may be at long-term risk for suboptimal speech and language outcomes.

  12. Speech-Language Pathologists' Assessment Practices for Children with Suspected Speech Sound Disorders: Results of a National Survey

    Science.gov (United States)

    Skahan, Sarah M.; Watson, Maggie; Lof, Gregory L.

    2007-01-01

    Purpose: This study examined assessment procedures used by speech-language pathologists (SLPs) when assessing children suspected of having speech sound disorders (SSD). This national survey also determined the information participants obtained from clients' speech samples, evaluation of non-native English speakers, and time spent on assessment.…

  13. Analysis of Intonation Patterns in Cantonese Aphasia Speech.

    Science.gov (United States)

    Lee, Tan; Lam, Wang Kong; Kong, Anthony Pak Hin; Law, Sam Po

    2015-10-01

    This paper presents a study on intonation patterns in Cantonese aphasia speech. The speech materials were spontaneous discourse recorded from seven pairs of aphasic and unimpaired speakers. Hidden Markov model based forced alignment was applied to obtain syllable-level time alignments. The pitch level of each syllable was determined and normalized according to the given tone identity of the syllable. Linear regression of the normalized pitch levels was performed to describe the intonation patterns of sentences. It was found that aphasic speech has a higher percentage of sentences with increasing pitch. This trend was found to be more prominent in story-telling than descriptive discourses.
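
    The regression step described above reduces each sentence to a single slope over its normalized syllable pitch levels, with a positive slope marking rising intonation. A minimal least-squares sketch using hypothetical normalized pitch levels, not the study's data:

```python
def intonation_slope(pitch_levels):
    """Least-squares slope of normalized syllable pitch levels across a
    sentence; a positive slope indicates overall rising intonation."""
    n = len(pitch_levels)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(pitch_levels) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, pitch_levels))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical tone-normalized pitch levels for two sentences:
print(intonation_slope([0.2, 0.1, 0.3, 0.5, 0.6]) > 0)  # True: rising
print(intonation_slope([0.6, 0.5, 0.4, 0.2, 0.1]) > 0)  # False: falling
```

    The per-syllable normalization by tone identity (not shown) is what makes slopes comparable across Cantonese tone categories.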

  14. Spontaneous pneumothorax in weightlifters.

    Science.gov (United States)

    Marnejon, T; Sarac, S; Cropp, A J

    1995-06-01

    Spontaneous pneumothorax is infrequently caused by strenuous exertion. To our knowledge there has only been one case of spontaneous pneumothorax associated with weightlifting reported in the medical literature. We describe three consecutive cases of spontaneous pneumothorax associated with weightlifting. We postulate that spontaneous pneumothorax in these patients may be secondary to improper breathing techniques. It is important that physicians and weight trainers be aware of the association between weight lifting and spontaneous pneumothorax and assure that proper instruction is given to athletes who work with weights.

  15. Tools for the assessment of childhood apraxia of speech.

    Science.gov (United States)

    Gubiani, Marileda Barichello; Pagliarin, Karina Carlesso; Keske-Soares, Marcia

    2015-01-01

    This study systematically reviews the literature on the main tools used to evaluate childhood apraxia of speech (CAS). The search strategy includes Scopus, PubMed, and Embase databases. Empirical studies that used tools for assessing CAS were selected. Articles were selected by two independent researchers. The search retrieved 695 articles, out of which 12 were included in the study. Five tools were identified: Verbal Motor Production Assessment for Children, Dynamic Evaluation of Motor Speech Skill, The Orofacial Praxis Test, Kaufman Speech Praxis Test for Children, and Madison Speech Assessment Protocol. There are few instruments available for CAS assessment and most of them are intended to assess praxis and/or orofacial movements, sequences of orofacial movements, articulation of syllables and phonemes, spontaneous speech, and prosody. There are some tests for assessment and diagnosis of CAS. However, few studies on this topic have been conducted at the national level, as well as protocols to assess and assist in an accurate diagnosis.

  16. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
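
    As background for the HMM discussion in this record: the forward algorithm is the standard way to score an observation sequence under an HMM, which is the quantity a recognizer compares across word models. A toy discrete-output sketch (all probabilities hypothetical, unrelated to the paper's models):

```python
def forward_likelihood(init, trans, emit, observations):
    """Forward algorithm: total likelihood of an observation sequence
    under a discrete-output hidden Markov model.

    init[s]     : P(first state = s)
    trans[r][s] : P(next state = s | current state = r)
    emit[s][o]  : P(output symbol = o | state = s)
    """
    n_states = len(init)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [init[s] * emit[s][observations[0]] for s in range(n_states)]
    for obs in observations[1:]:
        alpha = [
            emit[s][obs] * sum(alpha[r] * trans[r][s] for r in range(n_states))
            for s in range(n_states)
        ]
    return sum(alpha)

# Toy 2-state model over 2 output symbols:
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood(init, trans, emit, [0, 1, 0]))
```

    In training, these parameters would be estimated from data (e.g. by Baum-Welch), which is the data-driven property the abstract emphasizes.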

  17. [Restoration of speech in aphasia during the post hospital period based on the "speech donorship" method and a family speech discourse].

    Science.gov (United States)

    Rudnev, V A; Shteĭnerdt, V V

    2010-01-01

    The method of "speech donorship" is based on genetically mediated factors of tempo-rhythmic concordance of speech in monozygotic twins (co-twins) and pairs of close relatives (father-son, mother-daughter, sibs). A natural audiovisual donor speech sample, adapted to the structural-linguistic condition of the recipient's speech, was recorded with a digital video camera. This sample is characterized using data from a computer transformation performed by a program specially developed by the authors. The program computes time equivalents of three parameters: the time spent realizing "word", "pause", and "word + pauses". The recipient's work with the on-screen donor sample is assumed to support the restoration of genetic and adaptive speech patterns. The recipient then works with his or her own audiovisual sample. A dictionary of family speech was used to build the tests. The use of this method is described for 15 patients with aphasia of vascular and traumatic etiology.

  18. Speech and Language Developmental Milestones

    Science.gov (United States)

    How do speech and language develop? The first 3 years of life, when ...

  19. Delayed Speech or Language Development

    Science.gov (United States)

    ... their child is right on schedule. How are speech and language different? Speech is the verbal expression ...

  20. Autonomic and Emotional Responses of Graduate Student Clinicians in Speech-Language Pathology to Stuttered Speech

    Science.gov (United States)

    Guntupalli, Vijaya K.; Nanjundeswaran, Chayadevie; Dayalu, Vikram N.; Kalinowski, Joseph

    2012-01-01

    Background: Fluent speakers and people who stutter manifest alterations in autonomic and emotional responses as they view stuttered relative to fluent speech samples. These reactions are indicative of an aroused autonomic state and are hypothesized to be triggered by the abrupt breakdown in fluency exemplified in stuttered speech. Furthermore,…

  1. Speech and Communication Disorders

    Science.gov (United States)

    ... Speech problems like stuttering, developmental disabilities, learning disorders, autism spectrum disorder, brain injury, and stroke. Some speech and communication problems may be genetic. Often, no one knows the causes. By first grade, about 5 percent of children ...

  2. Speech disorders - children

    Science.gov (United States)

    ... after age 4 (I want...I want my doll. I...I see you.) Putting in (interjecting) extra ... may outgrow milder forms of speech disorders. Speech therapy may help with more severe symptoms or any ...

  3. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2010-01-01

    In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, respectively, were modestly and substantially more prevalent in participants with ASD than reported population estimates. Double dissociations in speech, prosody, and voice impairments in ASD were interpreted as consistent with a speech attunement framework, rather than with the motor speech impairments that define CAS. Key Words: apraxia, dyspraxia, motor speech disorder, speech sound disorder PMID:20972615

  4. Segmentation cues in conversational speech: robust semantics and fragile phonotactics.

    Science.gov (United States)

    White, Laurence; Mattys, Sven L; Wiget, Lukas

    2012-01-01

    Multiple cues influence listeners' segmentation of connected speech into words, but most previous studies have used stimuli elicited in careful readings rather than natural conversation. Discerning word boundaries in conversational speech may differ from the laboratory setting. In particular, a speaker's articulatory effort - hyperarticulation vs. hypoarticulation (H&H) - may vary according to communicative demands, suggesting a compensatory relationship whereby acoustic-phonetic cues are attenuated when other information sources strongly guide segmentation. We examined how listeners' interpretation of segmentation cues is affected by speech style (spontaneous conversation vs. read), using cross-modal identity priming. To elicit spontaneous stimuli, we used a map task in which speakers discussed routes around stylized landmarks. These landmarks were two-word phrases in which the strength of potential segmentation cues - semantic likelihood and cross-boundary diphone phonotactics - was systematically varied. Landmark-carrying utterances were transcribed and later re-recorded as read speech. Independent of speech style, we found an interaction between cue valence (favorable/unfavorable) and cue type (phonotactics/semantics). Thus, there was an effect of semantic plausibility, but no effect of cross-boundary phonotactics, indicating that the importance of phonotactic segmentation may have been overstated in studies where lexical information was artificially suppressed. These patterns were unaffected by whether the stimuli were elicited in a spontaneous or read context, even though the difference in speech styles was evident in a main effect. Durational analyses suggested speaker-driven cue trade-offs congruent with an H&H account, but these modulations did not impact on listener behavior. We conclude that previous research exploiting read speech is reliable in indicating the primacy of lexically based cues in the segmentation of natural conversational speech.

  5. Segmentation cues in conversational speech: Robust semantics and fragile phonotactics

    Directory of Open Access Journals (Sweden)

    Laurence eWhite

    2012-10-01

    Full Text Available Multiple cues influence listeners' segmentation of connected speech into words, but most previous studies have used stimuli elicited in careful readings rather than natural conversation. Discerning word boundaries in conversational speech may differ from the laboratory setting. In particular, a speaker's articulatory effort – hyperarticulation vs hypoarticulation (H&H) – may vary according to communicative demands, suggesting a compensatory relationship whereby acoustic-phonetic cues are attenuated when other information sources strongly guide segmentation. We examined how listeners' interpretation of segmentation cues is affected by speech style (spontaneous conversation vs read), using cross-modal identity priming. To elicit spontaneous stimuli, we used a map task in which speakers discussed routes around stylised landmarks. These landmarks were two-word phrases in which the strength of potential segmentation cues – semantic likelihood and cross-boundary diphone phonotactics – was systematically varied. Landmark-carrying utterances were transcribed and later re-recorded as read speech. Independent of speech style, we found an interaction between cue valence (favourable/unfavourable) and cue type (phonotactics/semantics). Thus, there was an effect of semantic plausibility, but no effect of cross-boundary phonotactics, indicating that the importance of phonotactic segmentation may have been overstated in studies where lexical information was artificially suppressed. These patterns were unaffected by whether the stimuli were elicited in a spontaneous or read context, even though the difference in speech styles was evident in a main effect. Durational analyses suggested speaker-driven cue trade-offs congruent with an H&H account, but these modulations did not impact on listener behaviour. We conclude that previous research exploiting read speech is reliable in indicating the primacy of lexically-based cues in the segmentation of natural

  6. Dog-directed speech: why do we use it and do dogs pay attention to it?

    Science.gov (United States)

    Ben-Aderet, Tobey; Gallego-Abenza, Mario; Reby, David; Mathevon, Nicolas

    2017-01-11

    Pet-directed speech is strikingly similar to infant-directed speech, a peculiar speaking pattern with higher pitch and slower tempo known to engage infants' attention and promote language learning. Here, we report the first investigation of potential factors modulating the use of dog-directed speech, as well as its immediate impact on dogs' behaviour. We recorded adult participants speaking in front of pictures of puppies, adult and old dogs, and analysed the quality of their speech. We then performed playback experiments to assess dogs' reaction to dog-directed speech compared with normal speech. We found that human speakers used dog-directed speech with dogs of all ages and that the acoustic structure of dog-directed speech was mostly independent of dog age, except for sound pitch, which was relatively higher when communicating with puppies. Playback demonstrated that, in the absence of other non-auditory cues, puppies were highly reactive to dog-directed speech, and that the pitch was a key factor modulating their behaviour, suggesting that this specific speech register has a functional value in young dogs. Conversely, older dogs did not react differentially to dog-directed speech compared with normal speech. The fact that speakers continue to use dog-directed speech with older dogs therefore suggests that this speech pattern may mainly be a spontaneous attempt to facilitate interactions with non-verbal listeners. © 2017 The Author(s).

  7. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve eCorbeil

    2013-06-01

    Full Text Available Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  8. Speech evaluation in children with temporomandibular disorders

    Directory of Open Access Journals (Sweden)

    Raquel Aparecida Pizolato

    2011-10-01

    Full Text Available OBJECTIVE: The aims of this study were to evaluate the influence of temporomandibular disorders (TMD) on speech in children, and to verify the influence of occlusal characteristics. MATERIAL AND METHODS: Speech and dental occlusal characteristics were assessed in 152 Brazilian children (78 boys and 74 girls), aged 8 to 12 years (mean age 10.05 ± 1.39), with or without TMD signs and symptoms. The clinical signs were evaluated using the Research Diagnostic Criteria for TMD (RDC/TMD, axis I) and the symptoms were evaluated using a questionnaire. The following groups were formed: Group TMD (n=40), TMD signs and symptoms (Group S and S, n=68), TMD signs or symptoms (Group S or S, n=33), and without signs and symptoms (Group N, n=11). Articulatory speech disorders were diagnosed during spontaneous speech and repetition of words using the "Phonological Assessment of Child Speech" for the Portuguese language. A list of 40 phonologically balanced words, read by the speech pathologist and repeated by the children, was also applied. Data were analyzed by descriptive statistics and Fisher's exact or Chi-square tests (α=0.05). RESULTS: A slight prevalence of articulatory disturbances, such as substitutions, omissions and distortions of the sibilants /s/ and /z/, and no deviations in jaw lateral movements were observed. Reduction of vertical amplitude was found in 10 children, the prevalence being greater in children with TMD signs and symptoms than in normal children. Tongue protrusion in the phonemes /t/, /d/, /n/, /l/ and frontal lip position in the phonemes /s/ and /z/ were the most prevalent visual alterations. There was a high percentage of dental occlusal alterations. CONCLUSIONS: There was no association between TMD and speech disorders. Occlusal alterations may be factors of influence, allowing distortions and frontal lisp in the phonemes /s/ and /z/ and inadequate tongue position in the phonemes /t/, /d/, /n/, /l/.

  9. Surgical speech disorders.

    Science.gov (United States)

    Shen, Tianjie; Sie, Kathleen C Y

    2014-11-01

    Most speech disorders of childhood are treated with speech therapy. However, two conditions, ankyloglossia and velopharyngeal dysfunction, may be amenable to surgical intervention. It is important for surgeons to work with experienced speech-language pathologists to diagnose the speech disorder. Children with articulation disorders related to ankyloglossia may benefit from frenuloplasty. Children with velopharyngeal dysfunction should have standardized clinical evaluation and instrumental assessment of velopharyngeal function. Surgeons should develop a treatment protocol to optimize speech outcomes while minimizing morbidity. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  11. Spontaneous uterine rupture

    African Journals Online (AJOL)

    ABSTRACT. Rupture of a gravid uterus is a surgical emergency. Predisposing factors include a scarred uterus. Spontaneous rupture of an unscarred uterus during pregnancy is a rare occurrence. We hereby present the case of a spontaneous complete uterine rupture at a gestational age of 34 weeks in a 35-year-old patient ...

  12. Spontaneous intracranial hypotension.

    LENUS (Irish Health Repository)

    Fullam, L

    2012-01-31

    INTRODUCTION: Spontaneous/primary intracranial hypotension is characterised by orthostatic headache and is associated with characteristic magnetic resonance imaging findings. CASE REPORT: We present a case report of a patient with typical symptoms and classical radiological images. DISCUSSION: Spontaneous intracranial hypotension is an under-recognised cause of headache and can be diagnosed by history of typical orthostatic headache and findings on MRI brain.

  13. Filled Pause Refinement Based on the Pronunciation Probability for Lecture Speech

    Science.gov (United States)

    Long, Yan-Hua; Ye, Hong

    2015-01-01

    Nowadays, although automatic speech recognition has become quite proficient in recognizing or transcribing well-prepared fluent speech, the transcription of speech that contains many disfluencies remains problematic, such as spontaneous conversational and lecture speech. Filled pauses (FPs) are the most frequently occurring disfluencies in this type of speech. Most recent studies have shown that FPs are widely believed to increase the error rates for state-of-the-art speech transcription, primarily because most FPs are not well annotated or provided in training data transcriptions and because of the similarities in acoustic characteristics between FPs and some common non-content words. To enhance the speech transcription system, we propose a new automatic refinement approach to detect FPs in British English lecture speech transcription. This approach combines the pronunciation probabilities for each word in the dictionary and acoustic language model scores for FP refinement through a modified speech recognition forced-alignment framework. We evaluate the proposed approach on the Reith Lectures speech transcription task, in which only imperfect training transcriptions are available. Successful results are achieved for both the development and evaluation datasets. Acoustic models trained on different styles of speech genres have been investigated with respect to FP refinement. To further validate the effectiveness of the proposed approach, speech transcription performance has also been examined using systems built on training data transcriptions with and without FP refinement. PMID:25860959

  14. Managing the reaction effects of speech disorders on speech ...

    African Journals Online (AJOL)

    ... persons having speech disorders. Speech disorders must be treated so that speech defectives will be helped out of their speech problems and be prevented from becoming obsessed by frustrations resulting from their speech disorders. African Journal of Cross-Cultural Psychology and Sport Facilitation Vol. 6 2004: 91-95 ...

  15. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
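    A modulation spectrum of the kind analyzed in this abstract can be approximated by taking the spectrum of a signal's intensity envelope. The sketch below is a minimal illustration, not the authors' pipeline (which averages over many recordings and works on auditory-filterbank outputs); numpy and scipy are assumed.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(x, fs, fmax=32.0):
    """Spectrum of the amplitude envelope of x, up to fmax Hz (sketch)."""
    env = np.abs(hilbert(x))          # Hilbert amplitude envelope
    env = env - env.mean()            # remove DC so the peak reflects modulation
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    keep = freqs <= fmax
    return freqs[keep], spec[keep]

# Toy signal: carrier noise amplitude-modulated at 5 Hz, roughly the
# syllable-rate peak reported for speech
fs = 1000
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 5 * t)) * rng.standard_normal(t.size)
freqs, spec = modulation_spectrum(x, fs)
print(freqs[np.argmax(spec)])  # dominant modulation frequency, near 5 Hz
```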

  16. Speech and music perception with the new fine structure speech coding strategy: preliminary results.

    Science.gov (United States)

    Arnoldner, Christoph; Riss, Dominik; Brunner, Markus; Durisin, Martin; Baumgartner, Wolf-Dieter; Hamzavi, Jafar-Sasan

    2007-12-01

    Taking into account the excellent results with significant improvements in the speech tests and the very high satisfaction of the patients using the new strategy, this first implementation of a fine structure strategy could offer a new quality of hearing with cochlear implants (CIs). This study consisted of an intra-individual comparison of speech recognition, music perception and patient preference when subjects used two different speech coding strategies with a MedEl Pulsar CI: continuous interleaved sampling (CIS) and the new fine structure processing (FSP) strategy. In contrast to envelope-based strategies, the FSP strategy also delivers subtle pitch and timing differences of sound to the user and is thereby supposed to enhance speech perception in noise and increase the quality of music perception. This was a prospective study assessing performance with two different speech coding strategies. The setting was a CI programme at an academic tertiary referral centre. Fourteen post-lingually deaf patients using a MedEl Pulsar CI with a mean CI experience of 0.98 years were supplied with the new FSP speech coding strategy. Subjects consecutively used the two different speech coding strategies. Speech and music tests were performed with the previously fitted CIS strategy, immediately after fitting with the new FSP strategy and 4, 8 and 12 weeks later. The main outcome measures were individual performance and subjective assessment of two different speech processors. Speech and music test scores improved statistically significantly after conversion from CIS to FSP strategy. Twelve of 14 patients preferred the new FSP speech processing strategy over the CIS strategy.

  17. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  18. Automatic Smoker Detection from Telephone Speech Signals

    DEFF Research Database (Denmark)

    Alavijeh, Amir Hossein Poorjam; Hesaraki, Soheila; Safavi, Saeid

    2017-01-01

    This paper proposes an automatic smoking habit detection from spontaneous telephone speech signals. In this method, each utterance is modeled using i-vector and non-negative factor analysis (NFA) frameworks, which yield low-dimensional representations of utterances by applying factor analysis on Gaussian mixture model means and weights, respectively. Each framework is evaluated using different classification algorithms to detect the smoker speakers. Finally, score-level fusion of the i-vector-based and the NFA-based recognizers is considered to improve the classification accuracy. The proposed method is evaluated on telephone speech signals of speakers whose smoking habits are known, drawn from the National Institute of Standards and Technology (NIST) 2008 and 2010 Speaker Recognition Evaluation databases. Experimental results over 1194 utterances show the effectiveness of the proposed approach...
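    Score-level fusion, as mentioned in the abstract, can be as simple as z-normalizing each recognizer's scores and combining them with a weight. This is a generic sketch; the paper's exact fusion rule and weights are not given here, and the score arrays are hypothetical.

```python
import numpy as np

def fuse_scores(scores_a, scores_b, w=0.5):
    """Score-level fusion: z-normalize each system's scores, then mix with
    weight w. A generic sketch, not the paper's specific recipe."""
    za = (scores_a - scores_a.mean()) / scores_a.std()
    zb = (scores_b - scores_b.mean()) / scores_b.std()
    return w * za + (1.0 - w) * zb

ivec = np.array([0.2, 1.4, -0.3, 0.9])   # hypothetical i-vector system scores
nfa = np.array([0.1, 0.8, 0.05, 0.7])    # hypothetical NFA system scores
fused = fuse_scores(ivec, nfa)           # one fused score per utterance
```

    Z-normalization puts the two systems' scores on a common scale before mixing, so neither recognizer dominates merely because its raw scores have a larger range.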

  19. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Language therapy has moved from a medical focus toward a preventive focus. However, difficulties are evident in carrying out this preventive task, because more space is devoted to the correction of language disorders. Since speech disorders are the most frequently appearing dysfunction, the preventive work carried out to avoid their appearance acquires special importance. Speech education from the early years of childhood makes it easier to prevent the appearance of speech disorders in children. The objective of the present work is to offer different activities for the prevention of speech disorders.

  20. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networksOffering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  1. Methodology for Speech Assessment in the Scandcleft Project-An International Randomized Clinical Trial on Palatal Surgery

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth

    2009-01-01

    Objective: To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Design: Description of methodology and blinded test for speech assessment. Speech samples and instructions for data collection and analysis for comparisons of speech outcom...

  2. Speech Enhancement Based on Compressed Sensing Technology

    Directory of Open Access Journals (Sweden)

    Huiyan Xu

    2014-10-01

    Full Text Available Compressed sensing (CS) is a sampling approach based on signal sparsity that can effectively extract the information contained in a signal. This paper presents a new noisy speech enhancement method based on CS. The algorithm exploits the sparsity of speech in the discrete fast Fourier transform (FFT) domain; a complex-domain observation matrix is designed, compressed measurement and de-noising of the noisy speech are performed by soft thresholding, and the speech signal is sparsely reconstructed by the Sparse Reconstruction by Separable Approximation (SpaRSA) algorithm, improving speech enhancement. Experimental results show that the algorithm de-noises and reconstructs the compressed noisy signal, the SNR margin is greatly improved, and background noise is more effectively suppressed.
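    The soft-thresholding de-noising step can be illustrated on a frequency-sparse toy signal. This is a simplified sketch: it soft-thresholds FFT magnitudes directly, rather than using the paper's observation matrix and SpaRSA reconstruction, and the threshold value is illustrative.

```python
import numpy as np

def soft_threshold_denoise(x, thresh):
    """Soft-threshold the FFT coefficients of x: shrink magnitudes by
    thresh (clipping at zero) while keeping phases."""
    X = np.fft.rfft(x)
    shrunk = np.maximum(np.abs(X) - thresh, 0.0)   # soft thresholding
    return np.fft.irfft(shrunk * np.exp(1j * np.angle(X)), n=len(x))

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 440 * t)                # sparse in frequency
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = soft_threshold_denoise(noisy, thresh=50.0)

def snr(ref, sig):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - sig)**2))

print(snr(clean, noisy), snr(clean, den))  # SNR improves after thresholding
```

    The signal's few large FFT coefficients survive the shrinkage almost intact, while the noise, spread thinly across all bins, is mostly driven to zero.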

  3. Describing Speech Usage in Daily Activities in Typical Adults.

    Science.gov (United States)

    Anderson, Laine; Baylor, Carolyn R; Eadie, Tanya L; Yorkston, Kathryn M

    2016-01-01

    "Speech usage" refers to what people want or need to do with their speech to meet communication demands in life roles. The purpose of this study was to contribute to validation of the Levels of Speech Usage scale by providing descriptive data from a sample of adults without communication disorders, comparing this scale to a published Occupational Voice Demands scale and examining predictors of speech usage levels. This is a survey design. Adults aged ≥25 years without reported communication disorders were recruited nationally to complete an online questionnaire. The questionnaire included the Levels of Speech Usage scale, questions about relevant occupational and nonoccupational activities (eg, socializing, hobbies, childcare, and so forth), and demographic information. Participants were also categorized according to Koufman and Isaacson occupational voice demands scale. A total of 276 participants completed the questionnaires. People who worked for pay tended to report higher levels of speech usage than those who do not work for pay. Regression analyses showed employment to be the major contributor to speech usage; however, considerable variance left unaccounted for suggests that determinants of speech usage and the relationship between speech usage, employment, and other life activities are not yet fully defined. The Levels of Speech Usage may be a viable instrument to systematically rate speech usage because it captures both occupational and nonoccupational speech demands. These data from a sample of typical adults may provide a reference to help in interpreting the impact of communication disorders on speech usage patterns. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. Listeners' Perceptions of Speech and Language Disorders

    Science.gov (United States)

    Allard, Emily R.; Williams, Dale F.

    2008-01-01

    Using semantic differential scales with nine trait pairs, 445 adults rated five audio-taped speech samples, one depicting an individual without a disorder and four portraying communication disorders. Statistical analyses indicated that the no disorder sample was rated higher with respect to the trait of employability than were the articulation,…

  5. Advertising and Free Speech.

    Science.gov (United States)

    Hyman, Allen, Ed.; Johnson, M. Bruce, Ed.

    The articles collected in this book originated at a conference at which legal and economic scholars discussed the issue of First Amendment protection for commercial speech. The first article, in arguing for freedom for commercial speech, finds inconsistent and untenable the arguments of those who advocate freedom from regulation for political…

  6. Physics and Speech Therapy.

    Science.gov (United States)

    Duckworth, M.; Lowe, T. L.

    1986-01-01

    Describes development and content of a speech science course taught to speech therapists for two years, modified by feedback from those two classes. Presents basic topics and concepts covered. Evaluates a team teaching approach as well as the efficacy of teaching physics relevant to vocational interests. (JM)

  7. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  8. Speech Quality Measurement

    Science.gov (United States)

    1978-05-01

    [2.27] Sound Patterns of English, N. Chomsky and H. Halle, Harper & Row, New York, 1968. [2.28] "Speech Synthesis by Rule," J. N. Holmes, I. G... L. H. Nakatani, B. J. McDermott, "Effect of Pitch and Formant Manipulations on Speech Quality," Bell Telephone Laboratories, Technical Memorandum, 72

  9. Speech and Language Impairments

    Science.gov (United States)

    ... grade and has recently been diagnosed with childhood apraxia of speech—or CAS. CAS is a speech disorder marked ... 800.242.5338 | http://www.cleftline.org Childhood Apraxia of Speech Association of North America | CASANA http://www.apraxia- ...

  10. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schoor Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tome Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds…

  11. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  12. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001)...

  13. An optimal speech processor for efficient human speech ...

    Indian Academy of Sciences (India)

    Our experimental findings suggest that the auditory filterbank in human ear functions as a near-optimal speech processor for achieving efficient speech communication between humans. Keywords. Human speech communication; articulatory gestures; auditory filterbank; mutual information. 1. Introduction. Speech is one of ...

  14. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  15. SPEECH ACT OF ILTIFAT AND ITS INDONESIAN TRANSLATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Zaka Al Farisi

    2015-01-01

    Full Text Available Abstract: Iltifat (shifting speech act) is distinctive and considered a unique style of Arabic. It is prone to errors when translated into Indonesian. Therefore, the translation of the iltifat speech act into another language can be an important issue. The objective of the study is to identify the translation procedures/techniques and ideology required in dealing with the iltifat speech act. This research is directed at translation as a cognitive product of a translator. The data used in the present study were a corpus of Koranic verses that contain the iltifat speech act, along with their translations. Data analysis typically used a descriptive-evaluative method with a content analysis model. The data source of this research consisted of the Koran and its translation. A purposive sampling technique was employed, with the sample being the iltifat speech acts contained in the Koran. The results showed that more than 60% of iltifat speech acts were translated using a literal procedure. The significant number of literal translations of the verses asserts that the Ministry of Religious Affairs tended to use a literal method of translation. In other words, the Koran translation made by the Ministry of Religious Affairs tended to be oriented to the source language in dealing with the iltifat speech act. The number of literal procedures used shows a tendency toward a foreignization ideology. Transitional pronouns contained in the iltifat speech act can be clearly translated when thick translations are used in the form of descriptions in parentheses. In this case, explanation can be a choice in translating the iltifat speech act.

  16. Crows spontaneously exhibit analogical reasoning.

    Science.gov (United States)

    Smirnova, Anna; Zorina, Zoya; Obozova, Tanya; Wasserman, Edward

    2015-01-19

    Analogical reasoning is vital to advanced cognition and behavioral adaptation. Many theorists deem analogical thinking to be uniquely human and to be foundational to categorization, creative problem solving, and scientific discovery. Comparative psychologists have long been interested in the species generality of analogical reasoning, but they initially found it difficult to obtain empirical support for such thinking in nonhuman animals (for pioneering efforts, see [2, 3]). Researchers have since mustered considerable evidence and argument that relational matching-to-sample (RMTS) effectively captures the essence of analogy, in which the relevant logical arguments are presented visually. In RMTS, choice of test pair BB would be correct if the sample pair were AA, whereas choice of test pair EF would be correct if the sample pair were CD. Critically, no items in the correct test pair physically match items in the sample pair, thus demanding that only relational sameness or differentness is available to support accurate choice responding. Initial evidence suggested that only humans and apes can successfully learn RMTS with pairs of sample and test items; however, monkeys have subsequently done so. Here, we report that crows too exhibit relational matching behavior. Even more importantly, crows spontaneously display relational responding without ever having been trained on RMTS; they had only been trained on identity matching-to-sample (IMTS). Such robust and uninstructed relational matching behavior represents the most convincing evidence yet of analogical reasoning in a nonprimate species, as apes alone have spontaneously exhibited RMTS behavior after only IMTS training. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. A characterization of verb use in Turkish agrammatic narrative speech

    NARCIS (Netherlands)

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large

  18. Sound frequency affects speech emotion perception: results from congenital amusia.

    Science.gov (United States)

    Lolli, Sydney L; Lewenstein, Ari D; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.
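The low-pass manipulation described can be sketched with a numpy-only windowed-sinc FIR filter; the 500 Hz cutoff and filter length below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def lowpass(signal, sr, cutoff_hz, numtaps=255):
    """Windowed-sinc FIR low-pass filter: removes energy above cutoff_hz,
    leaving the low-frequency (pitch-carrying) part of the signal."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / sr * n) * np.hamming(numtaps)
    h /= h.sum()  # unity gain at DC
    return np.convolve(signal, h, mode="same")

# A 100 Hz component survives; a 3000 Hz component is strongly attenuated.
sr = 16000
t = np.arange(sr) / sr  # 1 second of signal
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 3000 * t)
y = lowpass(x, sr, cutoff_hz=500)
spec = np.abs(np.fft.rfft(y))  # with a 1 s window, bin k is k Hz
```

A high-pass variant (as in Experiment 2) would subtract the low-passed signal from the original, isolating the non-pitch cues.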

  19. Speech Prosody in Persian Language

    Directory of Open Access Journals (Sweden)

    Maryam Nikravesh

    2014-05-01

    Full Text Available Background: Verbal communication involves, in addition to semantic and grammatical aspects (vocabulary, syntax, and phonemes), special voice characteristics known as speech prosody. Speech prosody is an important factor in communication and includes intonation, duration, pitch, loudness, stress, rhythm, and so on. The aim of this survey was to study several prosodic factors: duration, fundamental frequency range, and intonation contour. Materials and Methods: This cross-sectional, descriptive-analytic study included 134 male and female native Persian speakers aged 18-30 years. Two sentences, one interrogative and one declarative, were studied. Voice samples were analyzed with Dr. Speech software (real analysis software); data were analyzed with one-way analysis of variance and independent t-tests, and intonation contours were drawn for the sentences. Results: Mean duration differed significantly between sentence types, and between females and males. Fundamental frequency range did not differ significantly between sentence types; it was higher in females than in males. Conclusion: Duration is an effective factor in Persian prosody. The higher fundamental frequency range in females reflects anatomical and physiological differences in the phonation system and, possibly, patterns of language use among female Farsi speakers. The final part of the intonation contour is rising in yes/no questions and falling in declarative sentences.
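The fundamental-frequency measures discussed here can be approximated with a simple autocorrelation pitch estimator. This is a generic sketch, not the Dr. Speech algorithm; the search range and frame length are illustrative.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency of one voiced frame by finding
    the autocorrelation peak within the plausible pitch-period range."""
    x = frame - frame.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lo, hi = int(sr / fmax), int(sr / fmin)            # lag search window
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.1 * sr)) / sr
tone = np.sin(2 * np.pi * 220 * t)  # synthetic 220 Hz "voice"
f0 = estimate_f0(tone, sr)          # close to 220 Hz
```

Applied frame-by-frame over a sentence, the max-minus-min of such estimates gives the fundamental frequency range the study compares across speakers and sentence types.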

  20. Spontaneous Atraumatic Mediastinal Hemorrhage

    Directory of Open Access Journals (Sweden)

    Morkos Iskander BSc, BMBS, MRCS, PGCertMedEd

    2013-04-01

    Full Text Available Spontaneous atraumatic mediastinal hematomas are rare. We present the case of a previously fit and well middle-aged woman who presented with acute breathlessness, an enlarging neck swelling, and spontaneous neck bruising. On plain chest radiograph, widening of the mediastinum was noted. The bruising was later confirmed to be secondary to a mediastinal hematoma. This life-threatening diagnostic conundrum was managed conservatively with a multidisciplinary team approach involving upper gastrointestinal and thoracic surgeons, gastroenterologists, radiologists, intensivists, and hematologists, along with a variety of diagnostic modalities. A review of the literature is also presented to help surgeons manage such challenging and complicated cases.

  1. Depressive disorder and grief following spontaneous abortion.

    Science.gov (United States)

    Kulathilaka, Susil; Hanwella, Raveen; de Silva, Varuni A

    2016-04-12

    Abortion is associated with a moderate to high risk of psychological problems such as depression, use of alcohol or marijuana, anxiety, and suicidal behaviours. The increased risk of depression after spontaneous abortion in Asian populations has not been clearly established. Only a few studies have explored the relationship between grief and depression after abortion. A study was conducted to assess the prevalence and risk factors of depressive disorder and complicated grief among women 6-10 weeks after spontaneous abortion, and to compare the risk of depression with that of pregnant women attending an antenatal clinic. The spontaneous abortion group consisted of women diagnosed with spontaneous abortion by a Consultant Obstetrician; women with confirmed or suspected induced abortion were excluded. The comparison group consisted of randomly selected pregnant females attending the antenatal clinics of the two hospitals. Diagnosis of depressive disorder was made according to ICD-10 clinical criteria based on a structured clinical interview, conducted in both groups. The severity of depressive symptoms was assessed using the Patient Health Questionnaire (PHQ-9). Grief was assessed using the Perinatal Grief Scale, which was administered to the women who had experienced spontaneous abortion. The sample consisted of 137 women in each group. The spontaneous abortion group (mean age 30.39 years, SD = 6.38) was significantly older than the comparison group (mean age 28.79 years, SD = 6.26). There were more females with ≥10 years of education in the spontaneous abortion group (n = 54; 39.4 %) compared to the comparison group (n = 37; 27.0 %). The prevalence of depression in the spontaneous abortion group was 18.6 % (95% CI, 11.51-25.77). The prevalence of depression in the comparison group was 9.5 % (95% CI, 4.52-14.46). Of the 64 women fulfilling criteria for grief, 17 (26.6 %) also fulfilled criteria for a depressive episode. The relative risk of
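Confidence intervals for prevalences like those above can be computed with a normal-approximation (Wald) interval. This is a generic sketch: the paper's exact interval method is not stated here, so the numbers need not reproduce its reported bounds, and the case count below is an illustrative assumption.

```python
import math

def wald_ci(cases, n, z=1.96):
    """95% normal-approximation CI for a proportion:
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Illustrative: roughly 18% prevalence in a group of 137 (assumed 25 cases).
lo, hi = wald_ci(25, 137)
```

The half-width shrinks with 1/sqrt(n), which is why both groups of 137 yield intervals roughly plus or minus 5-7 percentage points wide.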

  2. Confiabilidade das transcrições fonológicas de crianças com alteração específica de linguagem Reliability of phonological transcriptions of speech samples produced by language-impared children

    Directory of Open Access Journals (Sweden)

    Debora Maria Befi-Lopes

    2010-12-01

    Full Text Available PURPOSE: To analyze the reliability of phonological transcriptions of speech samples produced by children with Language Impairment (LI), and to verify whether reliability differed between children who were able to produce discourse at the time of the phonology assessment and those who were not yet able to. METHODS: Speech samples of 37 three- to five-year-old subjects with LI, previously collected and analyzed using two tasks (picture naming and repetition of words), were re-transcribed. Subsequently, the researchers accessed the first transcriptions in order to calculate the agreement level. Transcriptions whose disagreement index was higher than 20% were transcribed a third time. The ability to produce discourse at the time of data collection was also considered in the analysis. RESULTS: For both tasks, there was a predominance of agreement lower than 80% (p<0.001) when the first two transcriptions were taken into account. Meanwhile, the agreement between the first, the second and the third

  3. Speech Communication and Signal Processing

    Indian Academy of Sciences (India)

    on 'Auditory-like filter bank: An optimal speech processor for efficient human speech communication', Ghosh et al argue that the auditory filter bank in the human ear is a near-optimal speech processor for efficient speech communication between human beings. They use a mutual information criterion to design the optimal filter ...

  4. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  5. Speech processing in mobile environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book focuses on speech processing in the presence of low-bit-rate coding and varying background environments. The methods presented in the book exploit speech events that are robust in noisy environments. Accurate estimation of these crucial events is useful for carrying out various speech tasks, such as speech recognition, speaker recognition, and speech rate modification, in mobile environments. The authors provide insights into designing and developing robust methods to process speech in mobile environments, covering temporal and spectral enhancement methods that minimize the effect of noise and examining methods and models for speech and speaker recognition applications in mobile environments.

  6. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  7. Prevalence of Speech Disorders in Arak Primary School Students, 2014-2015

    Directory of Open Access Journals (Sweden)

    Abdoreza Yavari

    2016-09-01

    Full Text Available Abstract Background: Speech disorders may cause lasting damage to a child's speech and language development and psychosocial wellbeing. Voice, speech sound production, and fluency disorders are speech disorders that may result from delay or impairment of the speech motor control mechanism, central nervous system disorders, inadequate language stimulation, or voice abuse. Materials and Methods: This study examined the prevalence of speech disorders in 1393 students in grades 1 to 6 of primary schools in Arak. After collecting continuous speech samples, picture descriptions, passage reading, and a phonetic test, we recorded pathological signs of stuttering, articulation disorder, and voice disorders on a special sheet. Results: The prevalence of articulation, voice, and stuttering disorders was 8%, 3.5%, and 1%, respectively, and the overall prevalence of speech disorders was 11.9%. The prevalence of speech disorders decreased as grade level increased. 12.2% of male and 11.7% of female primary school students in Arak had speech disorders. Conclusion: The prevalence of speech disorders among primary school students in Arak is similar to that in Kermanshah, but smaller than in many comparable Iranian studies. Racial and cultural diversity may have some effect on increasing the prevalence of speech disorders in Arak city.

  8. Investigation of Preservice Teachers' Speech Anxiety with Different Points of View

    Science.gov (United States)

    Kana, Fatih

    2015-01-01

    The purpose of this study is to find out the level of speech anxiety of final-year students at Education Faculties and the effects of speech anxiety. For this purpose, a speech anxiety inventory was administered to 540 pre-service teachers in the 2013-2014 academic year, selected using a stratified sampling method. A relational screening model was used in the study. To…

  9. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    Science.gov (United States)

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented within CALL (computer-assisted language learning) software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  10. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2011-01-01

    In a sample of 46 children aged 4-7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants' speech, prosody, and voice were compared with data from 40 typically-developing children, 13…

  11. Vowel Patterns in Developmental Apraxia of Speech: Three Longitudinal Case Studies

    Science.gov (United States)

    Davis, Barbara L.; Jacks, Adam; Marquardt, Thomas P.

    2005-01-01

    Vowel inventories and error patterns for three children with suspected developmental apraxia of speech (DAS) were analysed over a 3-year period using phonetic transcriptions of connected speech samples. The children demonstrated complete English vowel inventories except for rhotics. However, accuracy of vowel targets in connected speech did not…

  12. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    Science.gov (United States)

    2016-11-28

    Title: Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline. Christopher J. Smalt ... to utilize computational models of the auditory periphery and auditory cortex to study the effect of low spontaneous rate ANF loss on the cortical representation of speech intelligibility in noise. The auditory-periphery model of Zilany et al. (JASA 2009, 2014) is used to make predictions of

  13. Spontaneous Appendicocutaneous Fistula I

    African Journals Online (AJOL)

    M. Tokode MB, BS and Dr. O. A. Awojobi FMCS (Nig). ABSTRACT: Ruptured appendicitis is not a common cause of spontaneous enterocutaneous fistula. A case of ruptured retrocaecal appendicitis presenting as an enterocutaneous fistula in a Nigerian woman is presented. The literature on this disorder is also reviewed.

  14. Spontaneous Grammar Explanations.

    Science.gov (United States)

    Tjoo, Hong Sing; Lewis, Marilyn

    1998-01-01

    Describes one New Zealand university language teacher's reflection on her own grammar explanations to university-level students of Bahasa Indonesian. Examines form-focused instruction through the teacher's spontaneous answers to students' questions about the form of the language they are studying. The teacher's experiences show that it takes time…

  15. EDITORIAL SPONTANEOUS BACTERIAL PERITONITIS ...

    African Journals Online (AJOL)


    Spontaneous bacterial peritonitis (SBP) frequently occurs in patients with liver cirrhosis and ascites. It is defined as an infection of previously sterile ascitic fluid without any demonstrable intra-abdominal source of infection. It is now internationally agreed that a polymorphonuclear (PMN) cell count in the ascitic fluid of over 250 ...

  16. Spontaneous dimensional reduction?

    Science.gov (United States)

    Carlip, Steven

    2012-10-01

    Over the past few years, evidence has begun to accumulate suggesting that spacetime may undergo a "spontaneous dimensional reduction" to two dimensions near the Planck scale. I review some of this evidence, and discuss the (still very speculative) proposal that the underlying mechanism may be related to short-distance focusing of light rays by quantum fluctuations.

  17. Speech and Swallowing

    Science.gov (United States)


  18. Anxiety and ritualized speech

    Science.gov (United States)

    Lalljee, Mansur; Cook, Mark

    1975-01-01

    The experiment examines the effects of anxiety on the use of a number of words that seem irrelevant to semantic communication. The Units of Ritualized Speech (URSs) considered are: 'I mean', 'in fact', 'really', 'sort of', 'well' and 'you know'. (Editor)

  19. Speech impairment (adult)

    Science.gov (United States)

    ... Elsevier; 2016:chap 13. Kirshner HS. Dysarthria and apraxia of speech. In: Daroff RB, Jankovic J, Mazziotta JC, Pomeroy SL, eds. Bradley's Neurology in Clinical Practice . 7th ed. Philadelphia, PA: Elsevier; 2016: ...

  20. Trainable Videorealistic Speech Animation

    National Research Council Canada - National Science Library

    Ezzat, Tony F

    2002-01-01

    .... After processing the corpus automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not...

  1. Speech perception as categorization.

    Science.gov (United States)

    Holt, Lori L; Lotto, Andrew J

    2010-07-01

    Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words. This is an example of categorization, in that potentially discriminable speech sounds are assigned to functionally equivalent classes. In this tutorial, we present some of the main challenges to our understanding of the categorization of speech sounds and the conceptualization of SP that has resulted from these challenges. We focus here on issues and experiments that define open research questions relevant to phoneme categorization, arguing that SP is best understood as perceptual categorization, a position that places SP in direct contact with research from other areas of perception and cognition.

  2. Neurolinguistic features of spontaneous language production dissociate three forms of neurodegenerative disease: Alzheimer's, Huntington's, and Parkinson's.

    Science.gov (United States)

    Illes, J

    1989-11-01

    An analysis of the temporal (prospective) form (silent and filled hesitations, repetitions, incomplete phrases, context-related comments, interjections), syntactic form, and lexical (retrospective) form (verbal deviations, open and closed class phrases) of spontaneous language production of early and middle stage Alzheimer's, Huntington's, and Parkinson's patients was made. Results showed that the language structure was disrupted in each disease, but in different ways. Temporal interruptions of varying types were frequent in the language of Alzheimer's and Huntington's Disease patients; only long-duration silent hesitations were frequent in Parkinson's language samples. Syntactic complexity was reduced in Huntington's Disease. Verbal paraphasias were found in both the language of Alzheimer's patients, as well as moderately advanced Huntington's patients. Closed class phrases were predominant in the language of Alzheimer's patients and Huntington's patients, and open class phrases in the language of Parkinson's patients. Taken together, the results suggest that (1) there is a unique neurolinguistic profile for spontaneous language production for each neurodegenerative disease, (2) pathology of the neostriatum disrupts syntactic organization, (3) adaptive strategies are used to cope with verbal and speech-motor difficulties, and (4) adaptive strategies fail to be effective with increasing disease severity.

  3. Automated acoustic analysis in detection of spontaneous swallows in Parkinson's disease.

    Science.gov (United States)

    Golabbakhsh, Marzieh; Rajaei, Ali; Derakhshan, Mahmoud; Sadri, Saeed; Taheri, Masoud; Adibi, Peyman

    2014-10-01

    Acoustic monitoring of swallow frequency has become important, as the frequency of spontaneous swallowing can be an index for dysphagia and related complications; it can also be employed as an objective quantification of ingestive behavior. Commonly, swallowing complications are detected manually using videofluoroscopy recordings, which require expensive equipment and exposure to radiation. In this study, a noninvasive automated technique is proposed that uses breath and swallowing recordings obtained via a microphone located over the laryngopharynx. Nonlinear diffusion filters were used, in which a scale-space decomposition of the recorded sound at different levels extracts swallows from breath sounds and artifacts. This technique was compared to manual detection of swallows using acoustic signals in a sample of 34 subjects with Parkinson's disease. A speech-language pathologist identified five subjects who showed aspiration during the videofluoroscopic swallowing study. The proposed automated method identified swallows with a sensitivity of 86.67 %, a specificity of 77.50 %, and an accuracy of 82.35 %. These results indicate the validity of automated acoustic recognition of swallowing as a fast and efficient approach to objectively estimate spontaneous swallow frequency.
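The sensitivity, specificity, and accuracy figures reported are standard confusion-matrix quantities. A minimal sketch follows; the counts below are hypothetical, chosen only to illustrate the formulas, since the study's raw confusion matrix is not given here.

```python
def detection_metrics(tp, fn, tn, fp):
    """Confusion-matrix summary statistics for a binary detector:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/total."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical swallow-detection counts, for illustration only:
sens, spec, acc = detection_metrics(tp=26, fn=4, tn=31, fp=9)
```

Here sensitivity is the fraction of true swallows the detector found, and specificity the fraction of non-swallow events correctly rejected.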

  4. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    ... to business speeches. Consistent with public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences between Steve Jobs and Mark Zuckerberg and between the investor- and customer-related sections of their speeches support the modern understanding of charisma as a gradual, multiparametric, and context-sensitive concept.

  5. Speech spectrum envelope modeling

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Vondra, Martin

    Vol. 4775, - (2007), s. 129-137 ISSN 0302-9743. [COST Action 2102 International Workshop. Vietri sul Mare, 29.03.2007-31.03.2007] R&D Projects: GA AV ČR(CZ) 1ET301710509 Institutional research plan: CEZ:AV0Z20670512 Keywords : speech * speech processing * cepstral analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.302, year: 2005

  6. Recovering With Acquired Apraxia of Speech: The First 2 Years.

    Science.gov (United States)

    Haley, Katarina L; Shafer, Jennifer N; Harmon, Tyson G; Jacks, Adam

    2016-12-01

    This study was intended to document speech recovery for 1 person with acquired apraxia of speech quantitatively and on the basis of her lived experience. The second author sustained a traumatic brain injury that resulted in acquired apraxia of speech. Over a 2-year period, she documented her recovery through 22 video-recorded monologues. We analyzed these monologues using a combination of auditory perceptual, acoustic, and qualitative methods. Recovery was evident for all quantitative variables examined. For speech sound production, the recovery was most prominent during the first 3 months, but slower improvement was evident for many months. Measures of speaking rate, fluency, and prosody changed more gradually throughout the entire period. A qualitative analysis of topics addressed in the monologues was consistent with the quantitative speech recovery and indicated a subjective dynamic relationship between accuracy and rate, an observation that several factors made speech sound production variable, and a persisting need for cognitive effort while speaking. Speech features improved over an extended time, but the recovery trajectories differed, indicating dynamic reorganization of the underlying speech production system. The relationship among speech dimensions should be examined in other cases and in population samples. The combination of quantitative and qualitative analysis methods offers advantages for understanding clinically relevant aspects of recovery.

  7. Sound frequency affects speech emotion perception: Results from congenital amusia

    Directory of Open Access Journals (Sweden)

    Sydney eLolli

    2015-09-01

    Full Text Available Congenital amusics, or tone-deaf individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying band-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody (MBEP) were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task and an emotion identification task under band-pass and unfiltered speech conditions. Results showed a significant correlation between pitch discrimination threshold and emotion identification accuracy for band-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold > 16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between band-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation.

  8. Methodology for Speech Assessment in the Scandcleft Project-An International Randomized Clinical Trial on Palatal Surgery

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth

    2009-01-01

    Objective: To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Design: Description of methodology and blinded test for speech assessment. Speech samples and instructions for data collection and analysis for comparisons of speech outcomes across the five included languages were developed and tested. Participants and Materials: Randomly selected video recordings of 10 5-year-old children from each language (n = 50) were included in the project. Speech material consisted of test consonants in single words, connected speech, and syllable chains ... conventions and rules are important. A composite variable for perceptual assessment of velopharyngeal function during speech seems usable; whereas, the method for hypernasality evaluation requires further testing.

  9. Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren

    2016-01-01

    Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…

  10. A grammatical analysis of the spontaneous L2 English use of ...

    African Journals Online (AJOL)

    ... assessment tool for the grammatical analysis of the spontaneous L2 speech of four schizophrenics and four (non-psychotic) controls who were matched to the schizophrenics in terms of age, gender and first language (L1) and L2 dialects. Following a comparison of the types and frequency of the two groups' phonological, ...

  11. Memory for speech and speech for memory.

    Science.gov (United States)

    Locke, J L; Kutz, K J

    1975-03-01

    Thirty kindergarteners, 15 who substituted /w/ for /r/ and 15 with correct articulation, received two perception tests and a memory test that included /w/ and /r/ in minimally contrastive syllables. Although both groups had nearly perfect perception of the experimenter's productions of /w/ and /r/, misarticulating subjects perceived their own tape-recorded w/r productions as /w/. In the memory task these same misarticulating subjects committed significantly more /w/-/r/ confusions in unspoken recall. The discussion considers why people subvocally rehearse; a developmental period in which children do not rehearse; ways subvocalization may aid recall, including motor and acoustic encoding; an echoic store that provides additional recall support if subjects rehearse vocally; and perception of self- and other-produced phonemes by misarticulating children, including its relevance to a motor theory of perception. Evidence is presented that speech for memory can be sufficiently impaired to cause memory disorder. Conceptions that restrict speech disorder to an impairment of communication are challenged.

  12. Can mergers-in-progress be unmerged in speech accommodation?

    Science.gov (United States)

    Babel, Molly; McAuliffe, Michael; Haber, Graham

    2013-01-01

    This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, which is necessarily a behavior in which perceptual input influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged.
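The "acoustic calculation of mergedness using a Pillai-Bartlett trace" can be sketched as a one-way MANOVA statistic over formant measurements: V = tr(H(H+E)^-1), where H and E are the between- and within-category sums-of-squares-and-cross-products matrices. A numpy sketch follows; the F1/F2 values are synthetic, for illustration only.

```python
import numpy as np

def pillai_trace(X, labels):
    """Pillai-Bartlett trace for a one-way MANOVA.
    X: (n, p) array, e.g. F1/F2 pairs per vowel token;
    labels: lexical-set label per token.
    Near 0 => the categories overlap (merged); near 1 => well separated."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    grand = X.mean(axis=0)
    p = X.shape[1]
    H = np.zeros((p, p))  # between-groups SSCP
    E = np.zeros((p, p))  # within-groups SSCP
    for g in np.unique(labels):
        Xg = X[labels == g]
        d = (Xg.mean(axis=0) - grand)[:, None]
        H += len(Xg) * (d @ d.T)
        E += (Xg - Xg.mean(axis=0)).T @ (Xg - Xg.mean(axis=0))
    return float(np.trace(H @ np.linalg.inv(H + E)))

rng = np.random.default_rng(0)
near = rng.normal([450, 2100], [40, 80], size=(50, 2))    # synthetic NEAR tokens
square = rng.normal([650, 1900], [40, 80], size=(50, 2))  # synthetic SQUARE tokens
distinct = pillai_trace(np.vstack([near, square]), [0] * 50 + [1] * 50)
merged = pillai_trace(np.vstack([near, near]), [0] * 50 + [1] * 50)
```

A drop in this value from pre- to post-exposure productions is the kind of "decrease in the degree of mergedness" the abstract reports.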

  13. Can mergers-in-progress be unmerged in speech accommodation?

    Directory of Open Access Journals (Sweden)

    Molly eBabel

    2013-09-01

    Full Text Available This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, a phenomenon in which perceptual input necessarily influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate that spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged.
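
    The "acoustic calculation of mergedness using a Pillai-Bartlett trace" mentioned in both versions of this record can be sketched as the Pillai statistic of a one-way MANOVA over formant measurements: near 0 when the two lexical sets occupy the same acoustic space (merged), larger when they are distinct. The function below is a minimal illustration under that reading; the F1/F2 framing and variable names are our assumptions, not the authors' code.

    ```python
    import numpy as np

    def pillai_trace(X, labels):
        """Pillai-Bartlett trace for a one-way MANOVA.

        X: (n, p) matrix of acoustic measures (e.g. F1/F2 at vowel midpoint).
        labels: length-n sequence of lexical-set labels (e.g. 'NEAR'/'SQUARE').
        Returns a value in [0, min(p, groups-1)]: ~0 => merged categories,
        larger => distinct categories.
        """
        X = np.asarray(X, dtype=float)
        labels = np.asarray(labels)
        grand = X.mean(axis=0)
        p = X.shape[1]
        H = np.zeros((p, p))   # between-group SSCP
        E = np.zeros((p, p))   # within-group SSCP
        for g in np.unique(labels):
            Xg = X[labels == g]
            d = (Xg.mean(axis=0) - grand)[:, None]
            H += len(Xg) * (d @ d.T)
            C = Xg - Xg.mean(axis=0)
            E += C.T @ C
        return float(np.trace(H @ np.linalg.inv(H + E)))
    ```

    With well-separated NEAR/SQUARE clouds the trace approaches 1; feeding in identical distributions for both labels drives it to 0, which is how a decrease across pre- and post-exposure productions would indicate unmerging.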

  14. Emotion Estimation in Speech Using a 3D Emotion Space Concept

    OpenAIRE

    Grimm, Michael; Kroschel, Kristian

    2007-01-01

    In this chapter we discussed the recognition of emotions in spontaneous speech. We used a general framework motivated by emotion psychology to describe emotions by means of three emotion "primitives" (attributes), namely valence, activation, and dominance. With these emotion primitives, we proposed a real-valued three-dimensional emotion space concept to overcome the limitations in the state-of-the-art emotion categorization. We tested the method on the basis of 893 spontaneous emotional utte...
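
    The three-primitive idea can be illustrated with a toy mapping between the continuous (valence, activation, dominance) space and discrete labels, showing how a real-valued space subsumes categorical emotion labels. The prototype coordinates below are invented for illustration; the chapter estimates real-valued primitives from acoustic features rather than assuming fixed prototypes.

    ```python
    import numpy as np

    # Toy 3-D emotion-primitive space, each axis scaled to [-1, 1].
    # These prototype positions are illustrative assumptions only.
    PROTOTYPES = {
        "happy":   np.array([ 0.8,  0.6,  0.4]),
        "angry":   np.array([-0.7,  0.8,  0.6]),
        "sad":     np.array([-0.6, -0.6, -0.4]),
        "neutral": np.array([ 0.0,  0.0,  0.0]),
    }

    def nearest_category(valence, activation, dominance):
        """Map an estimated primitive triple back onto a discrete label;
        the continuous point itself carries the graded information that a
        pure category system would discard."""
        p = np.array([valence, activation, dominance])
        return min(PROTOTYPES, key=lambda k: np.linalg.norm(PROTOTYPES[k] - p))
    ```

    A point such as (0.9, 0.5, 0.3) falls nearest the "happy" prototype while still recording how strongly activated the utterance was, which is the limitation of pure categorization the chapter aims to overcome.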

  15. Spontaneous healing of spontaneous coronary artery dissection.

    Science.gov (United States)

    Almafragi, Amar; Convens, Carl; Heuvel, Paul Van Den

    2010-01-01

    Spontaneous coronary artery dissection (SCAD) is a rare cause of acute coronary syndrome and sudden cardiac death. It should be suspected in every healthy young woman without cardiac risk factors, especially during the peripartum or postpartum periods. It is important to check for a history of drug abuse, collagen vascular disease or blunt trauma of the chest. Coronary angiography is essential for diagnosis and early management. We wonder whether thrombolysis might aggravate coronary dissection. All types of treatment (medical therapy, percutaneous intervention or surgery) improve the prognosis without affecting survival times if used appropriately according to the clinical stability and the angiographic features of the involved coronary arteries. Prompt recognition and targeted treatment improve outcomes. We report a case of SCAD in a young female free of traditional cardiovascular risk factors, who presented six hours after thrombolysis for ST elevation myocardial infarction. Coronary angiography showed a dissection of the left anterior descending and immediate branch. She had successful coronary artery bypass grafting, with complete healing of left anterior descending dissection.

  16. Verb Argument Structure in Narrative Speech: Mining AphasiaBank.

    Science.gov (United States)

    Malyutina, Svetlana; Richardson, Jessica D; den Ouden, Dirk B

    2016-02-01

    Previous research has found that verb argument structure characteristics (such as the number of participant roles in the situation described by the verb) can facilitate or hinder aphasic language production and comprehension in constrained laboratory tasks. This research needs to be complemented by studies of narrative or unrestricted speech, which can capture the spontaneous selection of verbs and grammatical structures by people with aphasia and may be particularly sensitive to the relative cost of access to different verb types in more natural conditions. Focusing on the number of subcategorization options, we investigated verb argument structure effects in a large sample of narratives from AphasiaBank, by speakers with aphasia, as well as control speakers without brain damage. Verb argument structure complexity did not negatively affect verb selection in any type of aphasia. However, people with aphasia, particularly with Broca's aphasia, used verbs in less complex and diverse ways, with fewer arguments and less diverse subcategorization options. In line with previous research, this suggests that deficits in verb use in aphasia are likely due to difficulties with the online application of or partial damage to verb argument structure knowledge.

  17. Spontaneous regression of intracranial malignant lymphoma

    International Nuclear Information System (INIS)

    Kojo, Nobuto; Tokutomi, Takashi; Eguchi, Gihachirou; Takagi, Shigeyuki; Matsumoto, Tomie; Sasaguri, Yasuyuki; Shigemori, Minoru.

    1988-01-01

    In a 46-year-old female with a 1-month history of gait and speech disturbances, computed tomography (CT) demonstrated mass lesions of slightly high density in the left basal ganglia and left frontal lobe. The lesions were markedly enhanced by contrast medium. The patient received no specific treatment, but her clinical manifestations gradually abated and the lesions decreased in size. Five months after her initial examination, the lesions were absent on CT scans; only a small area of low density remained. Residual clinical symptoms included mild right hemiparesis and aphasia. After 14 months the patient again deteriorated, and a CT scan revealed mass lesions in the right frontal lobe and the pons. However, no enhancement was observed in the previously affected regions. A biopsy revealed malignant lymphoma. Despite treatment with steroids and radiation, the patient's clinical status progressively worsened and she died 27 months after initial presentation. Seven other cases of spontaneous regression of primary malignant lymphoma have been reported. In this case, the mechanism of the spontaneous regression was not clear, but changes in immunologic status may have been involved. (author)

  18. Computer-based speech therapy for childhood speech sound disorders.

    Science.gov (United States)

    Furlong, Lisa; Erickson, Shane; Morris, Meg E

    2017-07-01

    With the current worldwide workforce shortage of Speech-Language Pathologists, new and innovative ways of delivering therapy to children with speech sound disorders are needed. Computer-based speech therapy may be an effective and viable means of addressing service access issues for children with speech sound disorders. To evaluate the efficacy of computer-based speech therapy programs for children with speech sound disorders, studies reporting their efficacy were identified via a systematic, computerised database search. Key study characteristics, results, main findings and details of computer-based speech therapy programs were extracted. The methodological quality was evaluated using a structured critical appraisal tool. 14 studies were identified and a total of 11 computer-based speech therapy programs were evaluated. The results showed that computer-based speech therapy is associated with positive clinical changes for some children with speech sound disorders. There is a need for collaborative research between computer engineers and clinicians, particularly during the design and development of computer-based speech therapy programs. Evaluation using rigorous experimental designs is required to understand the benefits of computer-based speech therapy. The reader will be able to 1) discuss how computer-based speech therapy has the potential to improve service access for children with speech sound disorders, 2) explain the ways in which computer-based speech therapy programs may enhance traditional tabletop therapy and 3) compare the features of computer-based speech therapy programs designed for different client populations.

  19. Spontaneous spinal epidural abscess.

    LENUS (Irish Health Repository)

    Ellanti, P

    2011-10-01

    Spinal epidural abscess is an uncommon entity, the frequency of which is increasing. It occurs spontaneously or as a complication of intervention. The classical triad of fever, back pain and neurological symptoms is not always present. A high index of suspicion is key to diagnosis. Any delay in diagnosis and treatment can have significant neurological consequences. We present the case of a previously well man with a one-month history of back pain resulting from an epidural abscess.

  20. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  1. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high stress, high workload environment when responding to the alert. Furthermore, crew receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth where ground operators can assist as needed. On long duration missions, however, crews will need to handle off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09) funded by the Human Research Program included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts with and without a speech component - either at the beginning or at the end of the tone - can be identified. Even though crew were familiar with the tone alert from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA). 
In the laboratory study, the alerts were presented by software and participants were

  2. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and light weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beam forming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when in the face of constraints in computational resources.
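
    The beam forming/multi-channel noise-reduction step in the pipeline above can be illustrated with a minimal delay-and-sum beamformer: each microphone channel is time-aligned toward the talker and the channels are averaged, so coherent speech adds up while spatially incoherent noise partially cancels. This is a generic sketch (integer sample delays applied as circular shifts), not the actual spacesuit implementation.

    ```python
    import numpy as np

    def delay_and_sum(channels, delays):
        """Minimal delay-and-sum beamformer.

        channels: list of equal-length 1-D arrays, one per microphone.
        delays:   integer sample delays that time-align each channel with
                  the look direction (applied as circular shifts here for
                  brevity).
        Aligned speech averages to itself, while independent noise power is
        reduced by roughly a factor of N for N microphones.
        """
        out = np.zeros_like(channels[0], dtype=float)
        for ch, d in zip(channels, delays):
            out += np.roll(ch, -d)   # undo the propagation delay
        return out / len(channels)
    ```

    In a real array the delays come from the known microphone geometry and estimated direction of arrival; the single-channel noise-reduction stage then operates on the beamformed output before feature extraction.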

  3. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  4. Dorsomedial prefrontal cortex supports spontaneous thinking per se.

    Science.gov (United States)

    Raij, T T; Riekki, T J J

    2017-06-01

    Spontaneous thinking, an action to produce, consider, integrate, and reason through mental representations, is central to our daily experience and has been suggested to serve crucial adaptive purposes. Such thinking occurs among other experiences during mind wandering that is associated with activation of the default mode network among other brain circuitries. Whether and how such brain activation is linked to the experience of spontaneous thinking per se remains poorly known. We studied 51 healthy subjects using a comprehensive experience-sampling paradigm during 3T functional magnetic resonance imaging. In comparison with fixation, the experiences of spontaneous thinking and spontaneous perception were related to activation of wide-spread brain circuitries, including the cortical midline structures, the anterior cingulate cortex and the visual cortex. In direct comparison of the spontaneous thinking versus spontaneous perception, activation was observed in the anterior dorsomedial prefrontal cortex. Modality congruence of spontaneous-experience-related brain activation was suggested by several findings, including association of the lingual gyrus with visual in comparison with non-verbal-non-visual thinking. In the context of current literature, these findings suggest that the cortical midline structures are involved in the integrative core substrate of spontaneous thinking that is coupled with other brain systems depending on the characteristics of thinking. Furthermore, involvement of the anterior dorsomedial prefrontal cortex suggests the control of high-order abstract functions to characterize spontaneous thinking per se. Hum Brain Mapp 38:3277-3288, 2017. © 2017 Wiley Periodicals, Inc.

  5. Biomarkers of spontaneous preterm birth

    DEFF Research Database (Denmark)

    Polettini, Jossimara; Cobo, Teresa; Kacerovsky, Marian

    2017-01-01

    Despite decades of research on risk indicators of spontaneous preterm birth (PTB), reliable biomarkers are still not available to screen or diagnose high-risk pregnancies. Several biomarkers in maternal and fetal compartments have been mechanistically linked to PTB, but none of them are reliable predictors of pregnancy outcome. This systematic review was conducted to synthesize the knowledge on PTB biomarkers identified using multiplex analysis. Three electronic databases (PubMed, EMBASE and Web of Science) were searched for studies in any language reporting the use of multiplex assays for maternal… followed by MIP-1β, GM-CSF, Eotaxin, and TNF-RI (two studies) were reported more than once in maternal serum. However, results could not be combined due to heterogeneity in type of sample, study population, assay, and analysis methods. By this systematic review, we conclude that multiplex assays…

  6. Cross-linguistic perspectives on speech assessment in cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Henningsson, Gunilla

    2012-01-01

    This chapter deals with cross linguistic perspectives that need to be taken into account when comparing speech assessment and speech outcome obtained from cleft palate speakers of different languages. Firstly, an overview of consonants and vowels vulnerable to the cleft condition is presented. Then, consequences for assessment of cleft palate speech by native versus non-native speakers of a language are discussed, as well as the use of phonemic versus phonetic transcription in cross linguistic studies. Specific recommendations for the construction of speech samples in cross linguistic studies are given. Finally, the influence of different languages on some aspects of language acquisition in young children with cleft palate is presented and discussed. Until recently, not much has been written about cross linguistic perspectives when dealing with cleft palate speech. Most literature about assessment

  7. Histogram Equalization to Model Adaptation for Robust Speech Recognition

    Directory of Open Access Journals (Sweden)

    Suh Youngjoo

    2010-01-01

    Full Text Available We propose a new model adaptation method based on the histogram equalization technique for providing robustness in noisy environments. The trained acoustic mean models of a speech recognizer are adapted into environmentally matched conditions by using the histogram equalization algorithm on a single utterance basis. For more robust speech recognition in the heavily noisy conditions, trained acoustic covariance models are efficiently adapted by the signal-to-noise ratio-dependent linear interpolation between trained covariance models and utterance-level sample covariance models. Speech recognition experiments on both the digit-based Aurora2 task and the large vocabulary-based task showed that the proposed model adaptation approach provides significant performance improvements compared to the baseline speech recognizer trained on the clean speech data.

  8. Histogram Equalization to Model Adaptation for Robust Speech Recognition

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2010-12-01

    We propose a new model adaptation method based on the histogram equalization technique for providing robustness in noisy environments. The trained acoustic mean models of a speech recognizer are adapted into environmentally matched conditions by using the histogram equalization algorithm on a single utterance basis. For more robust speech recognition in the heavily noisy conditions, trained acoustic covariance models are efficiently adapted by the signal-to-noise ratio-dependent linear interpolation between trained covariance models and utterance-level sample covariance models. Speech recognition experiments on both the digit-based Aurora2 task and the large vocabulary-based task showed that the proposed model adaptation approach provides significant performance improvements compared to the baseline speech recognizer trained on the clean speech data.
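
    The histogram-equalization idea described in both versions of this record—matching the distribution seen in one condition to the distribution of another—can be sketched in its textbook feature-side form as empirical-CDF matching, x ↦ F_ref⁻¹(F_test(x)). Note the paper itself applies the transform to trained model means on a per-utterance basis; the sketch below, with invented variable names, only illustrates the underlying distribution-matching operation for a single feature dimension.

    ```python
    import numpy as np

    def histogram_equalize(test_feat, ref_feat):
        """Map noisy test-condition feature values onto the clean training
        distribution via empirical-CDF matching.

        test_feat: 1-D array of test-utterance feature values.
        ref_feat:  1-D array of values drawn from the training distribution.
        """
        ranks = np.argsort(np.argsort(test_feat))    # rank of each frame
        cdf = (ranks + 0.5) / len(test_feat)         # empirical CDF in (0, 1)
        return np.quantile(np.sort(ref_feat), cdf)   # F_ref^-1(F_test(x))
    ```

    After the mapping, additive shifts and scalings introduced by noise are removed, which is why the equalized statistics match the clean-trained models far better than the raw noisy features do.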

  9. Language modeling for automatic speech recognition of inflective languages an applications-oriented approach using lexical data

    CERN Document Server

    Donaj, Gregor

    2017-01-01

    This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...

  10. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.
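
    The core estimation step—recovering the explanatory style/intensity variables by maximum likelihood—can be illustrated in a stripped-down linear-Gaussian reduction of the MRHSMM mean model, μ(s) = b + Hs with fixed covariance Σ, where the ML estimate of s becomes a weighted least-squares solution. Everything below (single state, known H, b, Σ) is a toy reduction for illustration, not the authors' algorithm.

    ```python
    import numpy as np

    def estimate_style(O, H, b, Sigma):
        """ML estimate of the style-intensity vector s in a toy single-state
        linear-Gaussian model: o_t ~ N(b + H s, Sigma).

        Setting the log-likelihood gradient to zero gives
            (T * H' Sinv H) s = H' Sinv * sum_t (o_t - b),
        where T is the number of frames and Sinv = Sigma^-1.
        """
        Sinv = np.linalg.inv(Sigma)
        T = len(O)
        A = T * H.T @ Sinv @ H
        rhs = H.T @ Sinv @ (O - b).sum(axis=0)
        return np.linalg.solve(A, rhs)
    ```

    In this reduction, each component of the recovered s plays the role of one explanatory variable, e.g. the perceived intensity of an emotional expression, and correlates with the ground-truth value used to generate the observations.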

  11. Why Go to Speech Therapy?

    Science.gov (United States)

    ... amount of success to be expected. Choosing a Speech-Language Pathologist ... The key to success with any ...

  12. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  13. Maria Montessori on Speech Education

    Science.gov (United States)

    Stern, David A.

    1973-01-01

    Montessori's theory of education, as related to speech communication skills learning, is explored for insights into speech and language acquisition, pedagogical procedure for teaching spoken vocabulary, and the educational environment which encourages children's free interaction and confidence in communication. (CH)

  14. Speech spectrogram expert

    Energy Technology Data Exchange (ETDEWEB)

    Johannsen, J.; Macallister, J.; Michalek, T.; Ross, S.

    1983-01-01

    Various authors have pointed out that humans can become quite adept at deriving phonetic transcriptions from speech spectrograms (as good as 90 percent accuracy at the phoneme level). The authors describe an expert system which attempts to simulate this performance. The speech spectrogram expert (spex) is actually a society made up of three experts: a 2-dimensional vision expert, an acoustic-phonetic expert, and a phonetics expert. The visual reasoning expert finds important visual features of the spectrogram. The acoustic-phonetic expert reasons about how visual features relate to phonemes, and about how phonemes change visually in different contexts. The phonetics expert reasons about allowable phoneme sequences and transformations, and deduces an English spelling for phoneme strings. The speech spectrogram expert is highly interactive, allowing users to investigate hypotheses and edit rules. 10 references.

  15. RECOGNISING SPEECH ACTS

    Directory of Open Access Journals (Sweden)

    Phyllis Kaburise

    2012-09-01

    Full Text Available Speech Act Theory (SAT, a theory in pragmatics, is an attempt to describe what happens during linguistic interactions. Inherent within SAT is the idea that language forms and intentions are relatively formulaic and that there is a direct correspondence between sentence forms (for example, in terms of structure and lexicon and the function or meaning of an utterance. The contention offered in this paper is that when such a correspondence does not exist, as in indirect speech utterances, this creates challenges for English second language speakers and may result in miscommunication. This arises because indirect speech acts allow speakers to employ various pragmatic devices such as inference, implicature, presuppositions and context clues to transmit their messages. Such devices, operating within the non-literal level of language competence, may pose challenges for ESL learners.

  16. Indonesian Automatic Speech Recognition For Command Speech Controller Multimedia Player

    Directory of Open Access Journals (Sweden)

    Vivien Arief Wardhany

    2014-12-01

    Full Text Available The purpose of multimedia device development is control through voice, but at present voice commands can be recognized only in English. To overcome this issue, recognition was built on an Indonesian language model, acoustic model, and dictionary. The automatic speech recognizer was built using the CMU Sphinx engine with its English language database modified to Indonesian, and XBMC was used as the multimedia player. The experiment involved 10 volunteers, classified by gender (5 male and 5 female), testing items based on 7 commands. Ten samples were taken for each command, with each volunteer performing 10 tests per command and trying all 7 commands provided. Based on the classification table, the word "Kanan" had the highest recognition percentage (83%), while "Pilih" had the lowest. The word with the highest misclassification percentage was "Kembali" (67%), while "Kanan" had the lowest. The recognition-rate (RR) results for male speakers show that several commands, such as "Kembali", "Utama", "Atas" and "Bawah", had low recognition rates. In particular, "Kembali" could not be recognized at all in the female voices, and in the male voices it reached only 4% RR; this is because the command has no similar-sounding English word near "kembali", so the system failed to recognize it. Likewise, "Pilih" reached 80% RR with the female voices but only 4% with the male voices. This is mostly because of the different voice characteristics of adult males and females: male voices have lower frequencies (85 to 180 Hz) than female voices (165 to 255 Hz). The results of the experiment showed that each speaker had a different recognition rate, caused by differences in tone, pronunciation, and speed of speech. Further work is needed to improve the accuracy of the Indonesian Automatic Speech Recognition system.

  17. Speech analysis as an index of alcohol intoxication--the Exxon Valdez accident.

    Science.gov (United States)

    Brenner, M; Cash, J R

    1991-09-01

    As part of its investigation of the EXXON VALDEZ tankship accident and oil spill, the National Transportation Safety Board (NTSB) examined the master's speech for alcohol-related effects. Recorded speech samples were obtained from marine radio communications tapes. The samples were tested for four effects associated with alcohol consumption in the available scientific literature: slowed speech, speech errors, misarticulation of difficult sounds ("slurring"), and audible changes in speech quality. It was found that speech immediately before and after the accident displayed large changes of the sort associated with alcohol consumption. These changes were not readily explained by fatigue, psychological stress, drug effects, or medical problems. Speech analysis appears to be a useful technique to provide secondary evidence of alcohol impairment.

  18. Speech identity conversion

    Czech Academy of Sciences Publication Activity Database

    Vondra, Martin; Vích, Robert

    Vol. 3445, - (2005), s. 421-426 ISSN 0302-9743. [International Summer School on Neural Nets "E. R. Caianiello". Course: Nonlinear Speech Modeling and Applications /9./. Vietri sul Mare, 13.09.2004-18.09.2004] R&D Projects: GA ČR(CZ) GA102/04/1097; GA ČR(CZ) GA102/02/0124; GA MŠk(CZ) OC 277.001 Institutional research plan: CEZ:AV0Z2067918 Keywords : speech synthesis * computer science Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.402, year: 2005

  19. The chairman's speech

    International Nuclear Information System (INIS)

    Allen, A.M.

    1986-01-01

    The paper contains a transcript of a speech by the chairman of the UKAEA, to mark the publication of the 1985/6 annual report. The topics discussed in the speech include: the Chernobyl accident and its effect on public attitudes to nuclear power, management and disposal of radioactive waste, the operation of UKAEA as a trading fund, and the UKAEA development programmes. The development programmes include work on the following: fast reactor technology, thermal reactors, reactor safety, health and safety aspects of water cooled reactors, the Joint European Torus, and under-lying research. (U.K.)

  20. Relationship between the stuttering severity index and speech rate

    Directory of Open Access Journals (Sweden)

    Claudia Regina Furquim de Andrade

    Full Text Available CONTEXT: The speech rate is one of the parameters considered when investigating speech fluency and is an important variable in the assessment of individuals with communication complaints. OBJECTIVE: To correlate the stuttering severity index with one of the indices used for assessing fluency/speech rate. DESIGN: Cross-sectional study. SETTING: Fluency and Fluency Disorders Investigation Laboratory, Faculdade de Medicina da Universidade de São Paulo. PARTICIPANTS: Seventy adults with stuttering diagnosis. MAIN MEASUREMENTS: A speech sample from each participant containing at least 200 fluent syllables was videotaped and analyzed according to a stuttering severity index test and speech rate parameters. RESULTS: The results obtained in this study indicate that the stuttering severity and the speech rate present significant variation, i.e., the more severe the stuttering is, the lower the speech rate in words and syllables per minute. DISCUSSION AND CONCLUSION: The results suggest that speech rate is an important indicator of fluency levels and should be incorporated in the assessment and treatment of stuttering. This study represents a first attempt to identify the possible subtypes of developmental stuttering. DEFINITION: Objective tests that quantify diseases are important in their diagnosis, treatment and prognosis.
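
    The speech-rate indices correlated with severity above—words per minute and syllables per minute over a fluent sample—reduce to a simple per-minute normalization. The function and field names below are our own, not the laboratory's protocol.

    ```python
    def speech_rate(n_words, n_syllables, duration_seconds):
        """Flow-of-speech indices used in fluency assessment: words and
        syllables per minute over a transcribed speech sample."""
        minutes = duration_seconds / 60.0
        return {
            "words_per_minute": n_words / minutes,
            "syllables_per_minute": n_syllables / minutes,
        }
    ```

    For example, a sample containing 200 fluent syllables and 120 words produced in 80 seconds yields 150 syllables per minute and 90 words per minute; under the study's finding, more severe stuttering would push both indices down.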

  1. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    is investigated on three candidates for so-called ‘simplified registers’: speech to children (also called motherese or baby talk), speech to foreigners (also called foreigner talk) and speech to robots. The volume integrates research from various disciplines, such as psychology, sociolinguistics...

  2. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Abstract. Information is carried in changes of a signal. The paper starts with revisiting Dudley's concept of the carrier nature of speech. It points to its close connection to modulation spectra of speech and argues against short-term spectral envelopes as dominant carriers of the linguistic information in speech. The history of ...

  3. Speech Communication and Signal Processing

    Indian Academy of Sciences (India)

    Communicating with a machine in a natural mode such as speech brings out not only several technological challenges, but also limitations in our understanding of how people communicate so effortlessly. The key is to understand the distinction between speech processing (as is done in human communication) and speech ...

  4. Spontaneous Thigh Compartment Syndrome

    Directory of Open Access Journals (Sweden)

    Khan, Sameer K

    2011-02-01

    Full Text Available A young man presented with a painful and swollen thigh, without any history of trauma, illness, coagulopathic medication or recent exertional exercise. Preliminary imaging delineated a haematoma in the anterior thigh, without any fractures or muscle trauma. Emergent fasciotomies were performed. No pathology could be identified intra-operatively, or on follow-up imaging. A review of thigh compartment syndromes described in literature is presented in a table. Emergency physicians and traumatologists should be cognisant of spontaneous atraumatic presentations of thigh compartment syndrome, to ensure prompt referral and definitive management of this limb-threatening condition. [West J Emerg Med. 2011;12(1):134-138.]

  5. Spotting social signals in conversational speech over IP : A deep learning perspective

    NARCIS (Netherlands)

    Brueckner, Raymond; Schmitt, Maximilian; Pantic, Maja; Schuller, Björn

    2017-01-01

    The automatic detection and classification of social signals is an important task, given the fundamental role nonverbal behavioral cues play in human communication. We present the first cross-lingual study on the detection of laughter and fillers in conversational and spontaneous speech collected

  6. Gesturing through Time: Holds and Intermodal Timing in the Stream of Speech

    Science.gov (United States)

    Park-Doob, Mischa Alan

    2010-01-01

    Most previous work examining co-speech gestures (the spontaneous bodily movements and configurations we engage in during speaking) has emphasized the importance of their most "salient" or energetically expressive moments, known as gesture "strokes" (Kendon 1980). In contrast, in this dissertation I explore the potential functions of intervals of…

  7. Spontaneous Tumor Lysis Syndrome

    Directory of Open Access Journals (Sweden)

    Alicia C. Weeks MD

    2015-08-01

    Full Text Available Tumor lysis syndrome (TLS is a known complication of malignancy and its treatment. The incidence varies on malignancy type, but is most common with hematologic neoplasms during cytotoxic treatment. Spontaneous TLS is thought to be rare. This case study is of a 62-year-old female admitted with multisystem organ failure, with subsequent diagnosis of aggressive B cell lymphoma. On admission, laboratory abnormalities included renal failure, elevated uric acid (20.7 mg/dL), and 3+ amorphous urates on urinalysis. Oliguric renal failure persisted despite aggressive hydration and diuretic use, requiring initiation of hemodialysis prior to chemotherapy. Antihyperuricemic therapy and hemodialysis were used to resolve hyperuricemia. However, due to multisystem organ dysfunction syndrome with extremely poor prognosis, the patient ultimately expired in the setting of a terminal ventilator wean. Although our patient did not meet current TLS criteria, she required hemodialysis due to uric acid nephropathy, a complication of TLS. This poses the clinical question of whether adequate diagnostic criteria exist for spontaneous TLS and if the lack of currently accepted guidelines has resulted in the underestimation of its incidence. Allopurinol and rasburicase are commonly used for prevention and treatment of TLS. Although both drugs decrease uric acid levels, allopurinol mechanistically prevents formation of the substrate rasburicase acts to solubilize. These drugs were administered together in our patient, although no established guidelines recommend combined use. This raises the clinical question of whether combined therapy is truly beneficial or, conversely, detrimental to patient outcomes.

  8. Hearing speech in music

    Directory of Open Access Journals (Sweden)

    Seth-Reino Ekström

    2011-01-01

    Full Text Available The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  9. Media Criticism Group Speech

    Science.gov (United States)

    Ramsey, E. Michele

    2004-01-01

    Objective: To integrate speaking practice with rhetorical theory. Type of speech: Persuasive. Point value: 100 points (i.e., 30 points based on peer evaluations, 30 points based on individual performance, 40 points based on the group presentation), which is 25% of course grade. Requirements: (a) References: 7-10; (b) Length: 20-30 minutes; (c)…

  10. Expectations and speech intelligibility.

    Science.gov (United States)

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.

  11. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, through investigation of the tower of Babel, the archetypal phonemes, and the reasons for the use of language, is undertaken in order to create an artistic work investigating the nature of speech. A system that works on vowels and consonants, and which converts the speech energy into visual particles that form complex visual structures, provides us with a means to present the expressiveness of speech in a visual mode. This system is presented in an artwork whose scenario is inspired by the reasons for the use of language...

  12. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    OpenAIRE

    Byeongwook Lee; Kwang-Hyun Cho

    2016-01-01

    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintai...

  13. Speech recognition systems on the Cell Broadband Engine

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Jones, H; Vaidya, S; Perrone, M; Tydlitat, B; Nanda, A

    2007-04-20

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  14. Digitized Ethnic Hate Speech: Understanding Effects of Digital Media Hate Speech on Citizen Journalism in Kenya

    Directory of Open Access Journals (Sweden)

    Stephen Gichuhi Kimotho

    2016-06-01

    Full Text Available Ethnicity in Kenya permeates all spheres of life. However, it is in politics that ethnicity is most visible. Election time in Kenya often leads to ethnic competition and hatred, often expressed through various media. Ethnic hate speech characterized the 2007 general elections in party rallies and through text messages, emails, posters and leaflets. This resulted in widespread skirmishes that left over 1200 people dead, and many displaced (KNHRC, 2008). In 2013, however, the new battle zone was the war of words on social media platforms. More than at any other time in Kenyan history, Kenyans poured vitriolic ethnic hate speech through digital media like Facebook, Twitter and blogs. Although scholars have studied the role and effects of mainstream media like television and radio in proliferating ethnic hate speech in Kenya (Michael Chege, 2008; Goldstein & Rotich, 2008a; Ismail & Deane, 2008; Jacqueline Klopp & Prisca Kamungi, 2007), little has been done in regard to social media. This paper investigated the nature of digitized hate speech by: describing the forms of ethnic hate speech on social media in Kenya; the effects of ethnic hate speech on Kenyans' perception of ethnic entities; ethnic conflict and the ethics of citizen journalism. This study adopted a descriptive interpretive design, and utilized Austin's Speech Act Theory, which explains the use of language to achieve desired purposes and direct behaviour (Tarhom & Miracle, 2013). Content published between January and April 2013 from six purposefully identified blogs was analysed. Questionnaires were used to collect data from university students, as they form a good sample of the Kenyan population, are most active on social media and are drawn from all parts of the country. Qualitative data were analysed using NVIVO 10 software, while responses from the questionnaire were analysed using IBM SPSS version 21. The findings indicated that Facebook and Twitter were the main platforms used to

  15. Representation of speech variability.

    Science.gov (United States)

    Bent, Tessa; Holt, Rachael F

    2017-07-01

    Speech signals provide both linguistic information (e.g., words and sentences) as well as information about the speaker who produced the message (i.e., social-indexical information). Listeners store highly detailed representations of these speech signals, which are simultaneously indexed with linguistic and social category membership. A variety of methodologies-forced-choice categorization, rating, and free classification-have shed light on listeners' cognitive-perceptual representations of the social-indexical information present in the speech signal. Specifically, listeners can accurately identify some talker characteristics, including native language status, approximate age, sex, and gender. Additionally, listeners have sensitivity to other speaker characteristics-such as sexual orientation, regional dialect, native language for non-native speakers, race, and ethnicity-but listeners tend to be less accurate or more variable at categorizing or rating speakers based on these constructs. However, studies have not necessarily incorporated more recent conceptions of these constructs (e.g., separating listeners' perceptions of race vs ethnicity) or speakers who do not fit squarely into specific categories (e.g., for sex perception, intersex individuals; for gender perception, genderqueer speakers; for race perception, multiracial speakers). Additional research on how the intersections of social-indexical categories influence speech perception is also needed. As the field moves forward, scholars from a variety of disciplines should be incorporated into investigations of how listeners' extract and represent facets of personal identity from speech. Further, the impact of these representations on our interactions with one another in contexts outside of the laboratory should continue to be explored. WIREs Cogn Sci 2017, 8:e1434. doi: 10.1002/wcs.1434

  16. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300-some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  17. Speech and the right hemisphere.

    Science.gov (United States)

    Critchley, E M

    1991-01-01

    Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension, even more so than of speech production (identifying the voice, its affective components, gestural interpretation and monitoring one's own speech), may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  18. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  19. Conversation, speech acts, and memory.

    Science.gov (United States)

    Holtgraves, Thomas

    2008-03-01

    Speakers frequently have specific intentions that they want others to recognize (Grice, 1957). These specific intentions can be viewed as speech acts (Searle, 1969), and I argue that they play a role in long-term memory for conversation utterances. Five experiments were conducted to examine this idea. Participants in all experiments read scenarios ending with either a target utterance that performed a specific speech act (brag, beg, etc.) or a carefully matched control. Participants were more likely to falsely recall and recognize speech act verbs after having read the speech act version than after having read the control version, and the speech act verbs served as better recall cues for the speech act utterances than for the controls. Experiment 5 documented individual differences in the encoding of speech act verbs. The results suggest that people recognize and retain the actions that people perform with their utterances and that this is one of the organizing principles of conversation memory.

  20. Relationship between speech motor control and speech intelligibility in children with speech sound disorders.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Pukonen, Margit; Goshulak, Debra; Yu, Vickie Y; Kadis, Darren S; Kroll, Robert; Pang, Elizabeth W; De Nil, Luc F

    2013-01-01

    The current study was undertaken to investigate the impact of speech motor issues on the speech intelligibility of children with moderate to severe speech sound disorders (SSD) within the context of the PROMPT intervention approach. The word-level Children's Speech Intelligibility Measure (CSIM), the sentence-level Beginner's Intelligibility Test (BIT) and tests of speech motor control and articulation proficiency were administered to 12 children (3:11 to 6:7 years) before and after PROMPT therapy. PROMPT treatment was provided for 45 min twice a week for 8 weeks. Twenty-four naïve adult listeners aged 22-46 years judged the intelligibility of the words and sentences. For CSIM, each time a recorded word was played to the listeners they were asked to look at a list of 12 words (multiple-choice format) and circle the word while for BIT sentences, the listeners were asked to write down everything they heard. Words correctly circled (CSIM) or transcribed (BIT) were averaged across three naïve judges to calculate percentage speech intelligibility. Speech intelligibility at both the word and sentence level was significantly correlated with speech motor control, but not articulatory proficiency. Further, the severity of speech motor planning and sequencing issues may potentially be a limiting factor in connected speech intelligibility and highlights the need to target these issues early and directly in treatment. The reader will be able to: (1) outline the advantages and disadvantages of using word- and sentence-level speech intelligibility tests; (2) describe the impact of speech motor control and articulatory proficiency on speech intelligibility; and (3) describe how speech motor control and speech intelligibility data may provide critical information to aid treatment planning. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Out-of-synchrony speech entrainment in developmental dyslexia.

    Science.gov (United States)

    Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel

    2016-08-01

    Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.

  2. POLISH EMOTIONAL SPEECH RECOGNITION USING ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Paweł Powroźnik

    2014-11-01

    Full Text Available The article presents the issue of emotion recognition based on Polish emotional speech analysis. The Polish database of emotional speech, prepared and shared by the Medical Electronics Division of the Lodz University of Technology, has been used for the research. The following parameters, extracted from the sampled and normalised speech signal, have been used for the analysis: energy of the signal, speaker's sex, average value of the speech signal, and both the minimum and maximum sample value for a given signal. As the emotional state classifier, a four-layer artificial neural network has been used. The achieved results reach 50% accuracy. The research focused on six emotional states: a neutral state, sadness, joy, anger, fear and boredom.
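
    The signal-level features named in the abstract (signal energy, average sample value, minimum and maximum sample value) can be sketched as below. The test signal is synthetic and the function name is hypothetical; the feature choice follows the abstract, not the authors' code:

```python
import math

def extract_features(signal):
    """Per-utterance features of the kind listed in the abstract."""
    energy = sum(s * s for s in signal)   # total energy of the signal
    mean = sum(signal) / len(signal)      # average sample value
    return {
        "energy": energy,
        "mean": mean,
        "min": min(signal),               # minimum sample value
        "max": max(signal),               # maximum sample value
    }

# Synthetic 100 Hz tone, one second at 8 kHz, normalised to [-1, 1]
sr = 8000
signal = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]
feats = extract_features(signal)
print(sorted(feats))
```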

  3. Spontaneous Intracranial Hypotension

    International Nuclear Information System (INIS)

    Joash, Dr.

    2015-01-01

    Spontaneous intracranial hypotension is rare but an important cause of new daily persistent headache among young and middle-aged individuals. It is generally caused by a spinal CSF leak; the precise cause remains largely unknown, though an underlying structural weakness of the spinal meninges is suspected. There are several MR signs of intracranial hypotension, including: diffuse pachymeningeal (dural) enhancement; bilateral subdural effusions/hematomas; downward displacement of the brain; enlargement of the pituitary gland; engorgement of the dural venous sinuses; prominence of the spinal epidural venous plexus; and venous sinus thrombosis and isolated cortical vein thrombosis. The sum of the volumes of intracranial blood, CSF and cerebral tissue must remain constant in an intact cranium. Many cases resolve spontaneously or with a conservative approach that includes bed rest, oral hydration, caffeine intake and use of an abdominal binder. Imaging modalities for detection of CSF leakage include CT myelography, radioisotope cisternography, MR myelography, MR imaging and intrathecal Gd-enhanced MR

  4. Spontaneous wave packet reduction

    International Nuclear Information System (INIS)

    Ghirardi, G.C.

    1994-06-01

    The main conceptual difficulties met by standard quantum mechanics in dealing with physical processes involving macroscopic systems are taken into account. It is stressed how J.A. Wheeler's remarks and lucid analysis have been relevant to pinpoint and to bring to its extreme consequences the puzzling aspects of quantum phenomena. It is shown how the recently proposed models of spontaneous dynamical reduction represent a consistent way to overcome the conceptual difficulties of the standard theory. Obviously, many nontrivial problems remain open, the first and most relevant one being that of generalizing the model theories considered to the relativistic case. This is the challenge of the dynamical reduction program. 43 refs, 2 figs

  5. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  6. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  7. Spontaneous compactification to homogeneous spaces

    International Nuclear Information System (INIS)

    Mourao, J.M.

    1988-01-01

    The spontaneous compactification of extra dimensions to compact homogeneous spaces is studied. The methods developed within the framework of coset space dimensional reduction scheme and the most general form of invariant metrics are used to find solutions of spontaneous compactification equations

  8. Screening for spontaneous preterm birth

    NARCIS (Netherlands)

    van Os, M.A.; van Dam, A.J.E.M.

    2015-01-01

    Preterm birth is the most important cause of perinatal morbidity and mortality worldwide. In this thesis studies on spontaneous preterm birth are presented. The main objective was to investigate the predictive capacity of mid-trimester cervical length measurement for spontaneous preterm birth in a

  9. Sensorimotor Interactions in Speech Learning

    Directory of Open Access Journals (Sweden)

    Douglas M Shiller

    2011-10-01

    Full Text Available Auditory input is essential for normal speech development and plays a key role in speech production throughout the life span. In traditional models, auditory input plays two critical roles: (1) establishing the acoustic correlates of speech sounds that serve, in part, as the targets of speech production, and (2) as a source of feedback about a talker's own speech outcomes. This talk will focus on both of these roles, describing a series of studies that examine the capacity of children and adults to adapt to real-time manipulations of auditory feedback during speech production. In one study, we examined sensory and motor adaptation to a manipulation of auditory feedback during production of the fricative “s”. In contrast to prior accounts, adaptive changes were observed not only in speech motor output but also in subjects' perception of the sound. In a second study, speech adaptation was examined following a period of auditory–perceptual training targeting the perception of vowels. The perceptual training was found to systematically improve subjects' motor adaptation response to altered auditory feedback during speech production. The results of both studies support the idea that perceptual and motor processes are tightly coupled in speech production learning, and that the degree and nature of this coupling may change with development.

  10. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.

    Directory of Open Access Journals (Sweden)

    Ling He

    Full Text Available The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, the resonance disorders occur at the finals and the voiced initials, while the articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed; it is an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, the syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified as having "quasi-unvoiced" or "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of the syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4ms for syllables with quasi-unvoiced initials, and 25.7ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all the syllables is 91.69%. For the control samples, P30 for all the

  11. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.

    Science.gov (United States)

    He, Ling; Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang

    2017-01-01

    Speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: an initial and a final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as a pre-processing step in cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, the syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified as having "quasi-unvoiced" or "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of the syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than those for syllables with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all the syllables is 91.69%. For the control samples, P30 for all the syllables is 91.24%.
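
    The abstract reports a mean time error and a "P30" segmentation accuracy. As a rough illustration of how such boundary metrics are computed, here is a minimal sketch, assuming P30 denotes the fraction of predicted boundaries within 30 ms of the reference (an interpretation of the name, not stated in the abstract); the boundary times are invented:

```python
def boundary_accuracy(predicted_ms, reference_ms, tolerance_ms=30.0):
    """Mean absolute boundary error and fraction within the tolerance."""
    if len(predicted_ms) != len(reference_ms):
        raise ValueError("need one predicted boundary per reference boundary")
    errors = [abs(p - r) for p, r in zip(predicted_ms, reference_ms)]
    mean_error = sum(errors) / len(errors)
    within = sum(e <= tolerance_ms for e in errors) / len(errors)
    return mean_error, within

# Invented initial/final boundary times (ms) for five syllables
pred = [102.0, 250.5, 401.0, 563.0, 720.0]
ref  = [100.0, 248.0, 440.0, 560.0, 719.0]
err, p30 = boundary_accuracy(pred, ref)  # err = 9.5 ms, p30 = 0.8
```

    On real data the predicted and reference boundaries would first have to be aligned syllable by syllable, which the paper's two-step procedure presumably guarantees.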

  12. Speech is Golden

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter

    2014-01-01

    Most of the Danish municipalities are ready to begin to adopt automatic speech recognition, but at the same time remain nervous following a long series of bad business cases in the recent past. Complaints are voiced over costly licences and low service levels, typical effects of a de facto monopoly on the supply side. The present article reports on a new public action strategy which has taken shape in the course of 2013-14. While Denmark is a small language area, our public sector is well organised and has considerable purchasing power. Across this past year, Danish local authorities have organised around the speech technology challenge; they have formulated a number of joint questions and new requirements to be met by suppliers and have deliberately worked towards formulating tendering material which will allow fair competition. Public researchers have contributed to this work, including the author of the present article, in the role of economically neutral advisers. The aim of the initiative is to pave the way for the first profitable contract in the field - which we hope to see in 2014 - an event which would precisely break the present deadlock and open up a billion EUR market for speech technology...

  14. Spontaneous Pneumomediastinum: Hamman Syndrome

    Directory of Open Access Journals (Sweden)

    Tushank Chadha, BS

    2018-04-01

    significant fat stranding. The image also showed an intraluminal stent traversing the gastric antrum and gastric pylorus with no indication of obstruction. Circumferential mural thickening of the gastric antrum and body was consistent with the patient's history of gastric adenocarcinoma. The shotty perigastric lymph nodes with associated fat stranding along the greater curvature of the distal gastric body suggested local regional nodal metastases and possible peritoneal carcinomatosis. The thoracic CT scans showed extensive pneumomediastinum that tracked into the soft tissues of the neck, which, given the history of vomiting, also raised concern for esophageal perforation. There was still no evidence of mediastinal abscess or fat stranding. Additionally, a left subclavian vein port catheter, whose tip terminates at the cavoatrial junction of the superior vena cava, can also be seen on the image. Discussion: Spontaneous pneumomediastinum, also known as Hamman syndrome, is defined by the uncommon incidence of free air in the mediastinum due to the bursting of alveoli, as a result of extended spells of shouting, coughing, or vomiting.1,2 The condition is diagnosed when a clear cause (aerodigestive rupture, barotrauma, or infection secondary to gas-forming organisms3) for pneumomediastinum cannot be clearly identified on diagnostic studies. Macklin and Macklin were the first to note the pathogenesis of the syndrome and explained that the common denominator to spontaneous pneumomediastinum was that increased alveolar pressure leads to alveolar rupture.3 Common clinical findings for spontaneous pneumomediastinum include chest pain, dyspnea, cough, and emesis.4 The condition is not always readily recognized on initial presentation, in part because of its rare incidence, estimated to be approximately 1 in every 44,500 ED patients,3 and also because of the non-specific presenting symptoms.
For this patient, there was no clear singular cause, and therefore she received care for spontaneous pneumomediastinum.

  15. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  16. Speech and voice rehabilitation in selected patients fitted with a bone anchored hearing aid (BAHA).

    Science.gov (United States)

    Thomas, J

    1996-01-01

    With the Birmingham osseointegrated implant programme there have been several patients with severe pre-lingual conductive hearing loss. The majority of these have been patients with Treacher Collins syndrome. There are characteristic features of speech and voice in those with long-standing conductive hearing loss. In addition, the associated abnormalities of jaw, teeth and palate may amplify the problem. There may be spontaneous improvement in features such as voice pitch, quality and intensity following the fitting of a BAHA. However, in those with a pre-lingual hearing impairment, speech therapy may be necessary. Patients assessed as suitable for BAHA have a full assessment of communication skills including audio recording of speech and voice. Post-operative training improves auditory discrimination and perception and is followed by training in the production of the newly perceived speech sounds.

  17. Speech Processing and Recognition (SPaRe)

    Science.gov (United States)

    2011-01-01

    parameters such as duration, audio/video bitrates, audio/video codecs, audio channels, and sample rates. These parameters are automatically populated in the...used to segment each conversation into utterance-level audio and transcript files. First, all speech data from the English interviewers and all...News Corpus [12]. The TDT4 corpus includes approximately 200 hours of Mandarin audio with closed captions, or approximate transcripts. These

  18. Neurophysiology of Speech Differences in Childhood Apraxia of Speech

    Science.gov (United States)

    Preston, Jonathan L.; Molfese, Peter J.; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes. PMID:25090016

  19. IBM MASTOR SYSTEM: Multilingual Automatic Speech-to-speech Translator

    National Research Council Canada - National Science Library

    Gao, Yuqing; Gu, Liang; Zhou, Bowen; Sarikaya, Ruhi; Afify, Mohamed; Kuo, Hong-Kwang; Zhu, Wei-zhong; Deng, Yonggang; Prosser, Charles; Zhang, Wei

    2006-01-01

    .... Challenges include speech recognition and machine translation in adverse environments, lack of training data and linguistic resources for under-studied languages, and the need to rapidly develop...

  20. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significantly improved signal-to-noise ratio for speech comprehension thresholds (i.e., the signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentence test.
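
    The adaptive procedure mentioned above tracks the signal-to-noise ratio at which 50% of speech is understood. A generic 1-up/1-down staircase, which converges on the 50% point, can be sketched as follows; this is a textbook procedure, not necessarily the exact one used in the study, and the simulated listener is hypothetical:

```python
def staircase_snr(respond, start_snr=10.0, step=2.0, trials=30):
    """1-up/1-down adaptive track: lower the SNR after a correct response,
    raise it after an error. Converges on the SNR giving 50% correct."""
    snr = start_snr
    track = []
    for _ in range(trials):
        correct = respond(snr)   # True if the sentence was repeated correctly
        track.append(snr)
        snr = snr - step if correct else snr + step
    tail = track[trials // 2:]   # average the second half, after convergence
    return sum(tail) / len(tail)

# Deterministic simulated listener whose 50%-point sits near 0 dB SNR
threshold = staircase_snr(lambda snr: snr >= 0)
```

    A real implementation would use a probabilistic psychometric function and often shrinks the step size as the track converges.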

  1. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  2. Novel candidate genes and regions for childhood apraxia of speech identified by array comparative genomic hybridization.

    Science.gov (United States)

    Laffin, Jennifer J S; Raca, Gordana; Jackson, Craig A; Strand, Edythe A; Jakielski, Kathy J; Shriberg, Lawrence D

    2012-11-01

    The goal of this study was to identify new candidate genes and genomic copy-number variations associated with a rare, severe, and persistent speech disorder termed childhood apraxia of speech. Childhood apraxia of speech is the speech disorder segregating with a mutation in FOXP2 in a multigenerational London pedigree widely studied for its role in the development of speech-language in humans. A total of 24 participants who were suspected to have childhood apraxia of speech were assessed using a comprehensive protocol that samples speech in challenging contexts. All participants met clinical-research criteria for childhood apraxia of speech. Array comparative genomic hybridization analyses were completed using a customized 385K Nimblegen array (Roche Nimblegen, Madison, WI) with increased coverage of genes and regions previously associated with childhood apraxia of speech. A total of 16 copy-number variations with potential consequences for speech-language development were detected in 12, or half, of the 24 participants. The copy-number variations occurred on 10 chromosomes, 3 of which had two to four candidate regions. Several participants were identified with copy-number variations in two to three regions. In addition, one participant had a heterozygous FOXP2 mutation and a copy-number variation on chromosome 2, and one participant had a 16p11.2 microdeletion and copy-number variations on chromosomes 13 and 14. Findings support the likelihood of heterogeneous genomic pathways associated with childhood apraxia of speech.

  3. The effect of vowel height on Voice Onset Time in stop consonants in CV sequences in spontaneous Danish

    DEFF Research Database (Denmark)

    Mortensen, Johannes; Tøndering, John

    2013-01-01

    Voice onset time has been reported to vary with the height of vowels following the stop consonant. This paper investigates the effects of vowel height on VOT in Danish CV sequences with stop consonants in Danish spontaneous speech. A significant effect of vowel height on VOT was found...
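
    A minimal sketch of the kind of comparison the study describes: grouping VOT measurements by the height of the following vowel and comparing means. The data are invented; the direction (longer VOT before high vowels) reflects the effect commonly reported in the literature:

```python
from statistics import mean

# Hypothetical VOT measurements (ms) for stops preceding high vs. low vowels
vot_by_height = {
    "high": [45.0, 52.0, 48.0, 50.0],  # e.g. before /i, u/
    "low":  [30.0, 35.0, 28.0, 33.0],  # e.g. before /a/
}
means = {h: mean(v) for h, v in vot_by_height.items()}
difference = means["high"] - means["low"]  # positive: longer VOT before high vowels
```

    A study like this one would test the difference with a statistical model (e.g. mixed-effects regression over speakers and stop types) rather than a raw comparison of means.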

  4. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
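
    The classification step described above assigns audio to LSS or NLSS using acoustic features and an HMM. As an illustration of the decoding machinery such a detector relies on, here is a minimal two-state Viterbi decoder over per-frame log-likelihoods; the sticky transition probabilities and frame scores are illustrative assumptions, not the authors' model:

```python
import math

def viterbi_two_state(frame_loglik, log_trans, log_init):
    """Most likely state sequence (0 = LSS, 1 = NLSS) given per-frame
    log-likelihoods, transition log-probs and initial log-probs."""
    delta = [log_init[s] + frame_loglik[0][s] for s in (0, 1)]
    back = []
    for frame in frame_loglik[1:]:
        ptr, new_delta = [], []
        for s in (0, 1):
            best = max((0, 1), key=lambda p: delta[p] + log_trans[p][s])
            ptr.append(best)
            new_delta.append(delta[best] + log_trans[best][s] + frame[s])
        delta = new_delta
        back.append(ptr)
    state = max((0, 1), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

log = math.log
# Sticky transitions smooth a single weakly NLSS-looking frame back to LSS
frame_loglik = [[log(0.8), log(0.2)], [log(0.8), log(0.2)],
                [log(0.4), log(0.6)], [log(0.8), log(0.2)]]
log_trans = [[log(0.9), log(0.1)], [log(0.1), log(0.9)]]
log_init = [log(0.5), log(0.5)]
path = viterbi_two_state(frame_loglik, log_trans, log_init)
```

    The sticky self-transitions are what keep isolated noisy frames from flipping the decoded label, which is the practical reason to decode with an HMM rather than classify frames independently.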

  5. Rehabilitation of Oronasal Speech Disorders

    Directory of Open Access Journals (Sweden)

    Hashem Shemshadi

    2006-09-01

    Full Text Available The oronasal region, an important organ of taste and smell, is notable for its impact on resonance, which is crucial for normal speech production. Different congenital, acquired, and/or developmental defects may affect not only the quality of respiration and phonation, but also the process of normal speech. This article will enable readers to focus on disorders of these important neuroanatomical speech zones and their proper rehabilitation in different derangements. Among all other defects, oronasal malfunctions influence oronasal sound resonance and further impair normal speech production. A rehabilitative approach by a speech and language pathologist is highly recommended to alleviate most oronasal speech disorders.

  6. Speech and the Right Hemisphere

    Directory of Open Access Journals (Sweden)

    E. M. R. Critchley

    1991-01-01

    Full Text Available Two facts are well recognized: the location of the speech centre with respect to handedness and early brain damage, and the involvement of the right hemisphere in certain cognitive functions including verbal humour, metaphor interpretation, spatial reasoning and abstract concepts. The importance of the right hemisphere in speech is suggested by pathological studies, blood flow parameters and analysis of learning strategies. An insult to the right hemisphere following left hemisphere damage can affect residual language abilities and may activate non-propositional inner speech. The prosody of speech comprehension even more so than of speech production—identifying the voice, its affective components, gestural interpretation and monitoring one's own speech—may be an essentially right hemisphere task. Errors of a visuospatial type may occur in the learning process. Ease of learning by actors and when learning foreign languages is achieved by marrying speech with gesture and intonation, thereby adopting a right hemisphere strategy.

  7. Precision of working memory for speech sounds.

    Science.gov (United States)

    Joseph, Sabine; Iverson, Paul; Manohar, Sanjay; Fox, Zoe; Scott, Sophie K; Husain, Masud

    2015-01-01

    Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
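
    Recall precision in such paradigms is commonly summarised as the reciprocal of the standard deviation of the response errors. A minimal sketch under that convention; the formant-space positions are hypothetical:

```python
from statistics import pstdev

def recall_precision(targets, responses):
    """Precision as the reciprocal of the SD of the recall errors."""
    errors = [r - t for t, r in zip(targets, responses)]
    sd = pstdev(errors)
    return 1.0 / sd if sd > 0 else float("inf")

# Hypothetical vowel positions in a (normalised) formant space,
# probed at memory load 1 and memory load 4
targets = [0.2, 0.5, 0.8, 0.4]
load1 = recall_precision(targets, [0.22, 0.48, 0.83, 0.41])
load4 = recall_precision(targets, [0.30, 0.40, 0.70, 0.55])
# Expect load1 > load4: representations get noisier as load grows
```

    The study's mixture model goes further, splitting each response into a continuous component around the target and a categorical component clustered on prototype vowels; the sketch above captures only the continuous part.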

  8. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  9. Spontaneous breaking of supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Zumino, B.

    1981-12-01

    There has been recently a revival of interest in supersymmetric gauge theories, stimulated by the hope that supersymmetry might help in clarifying some of the questions which remain unanswered in the so-called Grand Unified Theories, and in particular the gauge hierarchy problem. In a Grand Unified Theory one has two widely different mass scales: the unification mass M ≈ 10^15 GeV, at which the unification group (e.g. SU(5)) breaks down to SU(3) x SU(2) x U(1), and the mass μ ≈ 100 GeV, at which SU(2) x U(1) is broken down to the U(1) of electromagnetism. There is at present no theoretical understanding of the extreme smallness of the ratio μ/M of these two numbers. This is the gauge hierarchy problem. This lecture attempts to review the various mechanisms for spontaneous supersymmetry breaking in gauge theories. Most of the discussions are concerned with the tree approximation, but what is presently known about radiative corrections is also reviewed.

  10. Spontaneous intracranial hypotension

    International Nuclear Information System (INIS)

    Haritanti, A.; Karacostas, D.; Drevelengas, A.; Kanellopoulos, V.; Paraskevopoulou, E.; Lefkopoulos, A.; Economou, I.; Dimitriadis, A.S.

    2009-01-01

    Spontaneous intracranial hypotension (SIH) is an uncommon but increasingly recognized syndrome. Orthostatic headache with typical findings on magnetic resonance imaging (MRI) is the key to diagnosis. Delayed diagnosis of this condition may subject patients to unnecessary procedures and prolong morbidity. We describe six patients with SIH and outline the important clinical and neuroimaging findings. They were all relatively young, 20-54 years old, with clearly orthostatic headache, minimal neurological signs (only abducent nerve paresis in two) and diffuse pachymeningeal gadolinium enhancement on brain MRI, while two of them presented subdural hygromas. Spinal MRI was helpful in detecting a cervical cerebrospinal fluid leak in three patients and dilatation of the vertebral venous plexus with extradural fluid collection in another. Conservative management resulted in rapid resolution of symptoms in five patients (10 days-3 weeks) and in one who developed cerebral venous sinus thrombosis, the condition resolved in 2 months. However, this rapid clinical improvement was not accompanied by an analogous regression of the brain MR findings, which persisted on longer follow-up. Along with recent literature data, our patients further point out that SIH, to be correctly diagnosed, necessitates increased alertness by the attending physician in the evaluation of headaches.

  11. Spontaneous lateral temporal encephalocele.

    Science.gov (United States)

    Tuncbilek, Gokhan; Calis, Mert; Akalan, Nejat

    2013-01-01

    A spontaneous encephalocele is one that develops either because of embryological maldevelopment or from a poorly understood postnatal process that permits brain herniation to occur. We here report a rare case of lateral temporal encephalocele extending to the infratemporal fossa under the zygomatic arch. At birth, the infant was noted to have a large cystic mass in the right side of the face. After being operated on initially in another center in the newborn period, the patient was referred to our clinic with a diagnosis of temporal encephalocele. He was 6 months old at the time of admission. Computerized tomography scan and magnetic resonance imaging studies revealed an 8 × 9 cm fluid-filled, multiloculated cystic mass at the right infratemporal fossa. No intracranial pathology or connection was seen. The patient was operated on to reduce the distortion effect of the growing mass. The histopathological examination of the sac revealed well-differentiated mature glial tissue stained with glial fibrillary acid protein. This rare clinical presentation of encephaloceles should be taken into consideration during the evaluation of lateral facial masses in infancy, and possible intracranial connection should be ruled out before surgery to avoid complications.

  12. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR...... in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within...... command and control, text entry and search are presented with an emphasis on mobile text entry....

  13. Psychotic speech: a neurolinguistic perspective.

    Science.gov (United States)

    Anand, A; Wales, R J

    1994-06-01

    The existence of an aphasia-like language disorder in psychotic speech has been the subject of much debate. This paper argues that a discrete language disorder could be an important cause of the disturbance seen in psychotic speech. A review is presented of classical clinical descriptions and experimental studies that have explored the similarities between psychotic language impairment and aphasic speech. The paper proposes neurolinguistic tasks which may be used in future studies to elicit subtle language impairments in psychotic speech. The usefulness of a neurolinguistic model for further research in the aetiology and treatment of psychosis is discussed.

  14. Phonetic Consequences of Speech Disfluency

    National Research Council Canada - National Science Library

    Shriberg, Elizabeth E

    1999-01-01

    .... Analyses of American English show that disfluency affects a variety of phonetic aspects of speech, including segment durations, intonation, voice quality, vowel quality, and coarticulation patterns...

  15. An investigation of co-speech gesture production during action description in Parkinson's disease.

    Science.gov (United States)

    Cleary, Rebecca A; Poliakoff, Ellen; Galpin, Adam; Dick, Jeremy P R; Holler, Judith

    2011-12-01

    Parkinson's disease (PD) can impact enormously on speech communication. One aspect of non-verbal behaviour closely tied to speech is co-speech gesture production. In healthy people, co-speech gestures can add significant meaning and emphasis to speech. There is, however, little research into how this important channel of communication is affected in PD. The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area. Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced. This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD. Copyright © 2011 Elsevier Ltd. All rights reserved.
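
    Gesture rate in such analyses is typically normalised against the amount of speech produced. A minimal sketch, assuming a per-100-words convention (a common choice in gesture research, not specified in the abstract); the counts are hypothetical:

```python
def gesture_rate(gesture_count, word_count, per=100):
    """Co-speech gesture rate, normalised per `per` words of speech."""
    if word_count == 0:
        raise ValueError("empty speech sample")
    return gesture_count * per / word_count

# Hypothetical action-description samples
patient_rate = gesture_rate(18, 450)
control_rate = gesture_rate(21, 500)
```

    Normalising by words (or by speaking time) is what allows rates to be compared between patients and controls who produce descriptions of different lengths.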

  16. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    2016-08-26

    speech-to-speech translation; language identification. ... interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers.

  17. Speech Recognition: How Do We Teach It?

    Science.gov (United States)

    Barksdale, Karl

    2002-01-01

    States that growing use of speech recognition software has made voice writing an essential computer skill. Describes how to present the topic, develop basic speech recognition skills, and teach speech recognition outlining, writing, proofreading, and editing. (Contains 14 references.) (SK)

  18. Speech and Language Problems in Children

    Science.gov (United States)

    Children vary in their development of speech and language skills. Health care professionals have lists of milestones ... it may be due to a speech or language disorder. Children who have speech disorders may have ...

  19. Bilateral spontaneous carotid artery dissection.

    Science.gov (United States)

    Townend, Bradley Scott; Traves, Laura; Crimmins, Denis

    2005-06-01

    Bilateral internal carotid artery dissections have been reported, but spontaneous bilateral dissections are rare. Internal carotid artery dissection can present with a spectrum of symptoms ranging from headache to completed stroke. Two cases of spontaneous bilateral carotid artery dissection are presented, one with headache and minimal symptoms and the other with a stroke syndrome. No cause could be found in either case, making the dissections completely spontaneous. Bilateral internal carotid artery dissection (ICAD) should be considered in young patients with unexplained head and neck pain, with or without focal neurological symptoms and signs. The increasing availability of imaging supports maintaining a high index of suspicion.

  20. Phonatory aerodynamics in connected speech.

    Science.gov (United States)

    Gartner-Schmidt, Jackie L; Hirai, Ryoji; Dastolfo, Christina; Rosen, Clark A; Yu, Lan; Gillespie, Amanda I

    2015-12-01

    1) Present phonatory aerodynamic data for healthy controls (HCs) in connected speech; 2) contrast these findings between HCs and patients with nontreated unilateral vocal fold paralysis (UVFP); 3) present pre- and post-vocal fold augmentation outcomes for patients with UVFP; 4) contrast data from patients with post-operative laryngeal augmentation to HCs. Retrospective, single-blinded. For phase I, 20 HC participants were recruited. For phase II, 20 patients with UVFP were age- and gender-matched to the 20 HC participants used in phase I. For phase III, 20 patients with UVFP represented a pre- and posttreatment cohort. For phase IV, 20 of the HC participants from phase I and 20 of the postoperative UVFP patients from phase III were used for direct comparison. Aerodynamic measures captured from a sample of the Rainbow Passage included: number of breaths, mean phonatory airflow rate, total duration of passage, inspiratory airflow duration, and expiratory airflow duration. The VHI-10 was also obtained pre- and postoperative laryngeal augmentation. All phonatory aerodynamic measures were significantly higher in patients with preoperative UVFP than in the HC group. Patients with laryngeal augmentation took significantly fewer breaths, had a lower mean phonatory airflow rate during voicing, and had a shorter inspiratory airflow duration than the preoperative UVFP group. None of the postoperative measures returned to HC values. Significant improvement in the Voice Handicap Index-10 scores postlaryngeal augmentation was also found. The methodology described in this study improves upon existing aerodynamic voice assessment by capturing characteristics germane to UVFP patient complaints and measuring change before and after laryngeal augmentation in connected speech. 4. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
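    The aerodynamic measures listed above (number of breaths, mean airflow rate, inspiratory and expiratory durations) can be derived from an oral airflow trace. The sketch below is a minimal, hypothetical illustration in Python/NumPy, assuming a simple sign convention (positive flow = expiration, negative = inspiration); it is not the instrumentation or analysis pipeline used in the study.

```python
import numpy as np

def aerodynamic_summary(flow, fs):
    """Summarize a phonatory airflow trace (sampled at fs Hz).

    Assumed convention: positive flow = expiration, negative = inspiration.
    """
    insp = flow < 0
    # Count a "breath" at each transition from expiration to inspiration.
    transitions = np.flatnonzero(~insp[:-1] & insp[1:])
    return {
        "breaths": int(len(transitions)),
        "total_s": len(flow) / fs,
        "inspiratory_s": insp.sum() / fs,
        "expiratory_s": (~insp).sum() / fs,
        "mean_expiratory_flow": float(flow[~insp].mean()) if (~insp).any() else 0.0,
    }

# Synthetic trace: 2 s expiration then 0.5 s inspiration, repeated twice, fs = 100 Hz
fs = 100
cycle = np.concatenate([np.full(200, 0.2), np.full(50, -0.5)])
flow = np.tile(cycle, 2)
print(aerodynamic_summary(flow, fs))
```

    On this synthetic trace the function reports 2 breaths, 1 s of inspiration, and 4 s of expiration over a 5 s sample.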

  1. Propositional speech in unselected stroke: The effect of genre and external support.

    Science.gov (United States)

    Law, Bonnie; Young, Breanne; Pinsker, Donna; Robinson, Gail A

    2015-01-01

    Distinguished from nominal language, propositional language generation refers to the spontaneous and voluntary aspect of language that introduces novel concepts to a specific context. Propositional language can be impaired in a range of neurological disorders, including stroke, despite well-preserved nominal language. Although external support can increase speech rate in patients with reduced propositional speech, no specific investigation of propositional speech has been carried out in unselected stroke patients. The current study investigated propositional language in an unselected post-acute stroke group (N = 18) with mild cognitive impairment and prominent executive dysfunction, but without significant aphasia. Specifically, we investigated whether genre or external support affected the number of words, sentences, and novel ideas produced, compared to healthy controls (N = 27). Results showed that discourse genre was not associated with differential performances. By contrast, speech quantity increased without external support although, for stroke patients, speech novelty decreased. Overall, the novelty deficit in unselected stroke patients highlights the importance of assessing cognition and propositional speech. Our findings suggest that for stroke patients with mild cognitive deficits, including executive dysfunction, introducing external support improved speech quality but not quantity. Implications for both assessment and rehabilitation of social communication are discussed.

  2. Speech-Based Human and Service Robot Interaction: An Application for Mexican Dysarthric People

    Directory of Open Access Journals (Sweden)

    Santiago Omar Caballero Morales

    2013-01-01

    Full Text Available Dysarthria is a motor speech disorder due to weakness or poor coordination of the speech muscles. This condition can be caused by a stroke, traumatic brain injury, or by a degenerative neurological disease. Commonly, people with this disorder also have muscular dystrophy, which restricts their use of switches or keyboards for communication or control of assistive devices (i.e., an electric wheelchair or a service robot). In this case, speech recognition is an attractive alternative for interaction and control of service robots, despite the difficulty of achieving robust recognition performance. In this paper we present a speech recognition system for human and service robot interaction for Mexican Spanish dysarthric speakers. The core of the system consisted of a Speaker Adaptive (SA) recognition system trained with normal speech. Features such as on-line control of the language model perplexity and the addition of vocabulary contribute to high recognition performance. Others, such as assessment and text-to-speech (TTS) synthesis, contribute to a more complete interaction with a service robot. Live tests were performed with two mild dysarthric speakers, achieving recognition accuracies of 90–95% for spontaneous speech and accomplishing 95–100% of simulated service robot tasks.
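    Recognition accuracy figures such as the 90–95% reported above are typically computed from a word-level edit distance between the reference transcript and the recognizer output. The following is a minimal sketch of that computation (1 − word error rate), with made-up example phrases; it is not the authors' evaluation code.

```python
def word_accuracy(reference, hypothesis):
    """Word-level recognition accuracy = 1 - WER, via edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 1.0 - d[-1][-1] / len(ref)

# Hypothetical command phrases, not from the paper's test set.
print(word_accuracy("turn the wheelchair left", "turn the wheelchair left"))  # 1.0
print(word_accuracy("turn the wheelchair left", "turn a wheelchair"))         # 0.5
```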

  3. AutoMOS: Learning a non-intrusive assessor of naturalness-of-speech

    OpenAIRE

    Patton, Brian; Agiomyrgiannakis, Yannis; Terry, Michael; Wilson, Kevin; Saurous, Rif A.; Sculley, D.

    2016-01-01

    Developers of text-to-speech synthesizers (TTS) often make use of human raters to assess the quality of synthesized speech. We demonstrate that we can model human raters' mean opinion scores (MOS) of synthesized speech using a deep recurrent neural network whose inputs consist solely of a raw waveform. Our best models provide utterance-level estimates of MOS only moderately inferior to sampled human ratings, as shown by Pearson and Spearman correlations. When multiple utterances are scored an...
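    The Pearson and Spearman correlations used above to compare model estimates against human MOS can be computed as follows. This is a small illustrative sketch with made-up scores (and no tie handling in the rank step); it is not the AutoMOS implementation.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman correlation = Pearson on ranks (ties not handled here)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

human_mos = [3.2, 4.1, 2.5, 4.8, 3.9]   # hypothetical mean rater scores
model_mos = [3.0, 4.0, 2.8, 4.6, 4.1]   # hypothetical model estimates
print(round(pearson(human_mos, model_mos), 3))   # 0.969
print(round(spearman(human_mos, model_mos), 3))  # 0.9
```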

  4. An optimal speech processor for efficient human speech ...

    Indian Academy of Sciences (India)

    above, the speech signal is recorded at 21739 Hz for English subjects and 20000 Hz for Cantonese and Georgian subjects. We downsampled the speech signals to 16 kHz for our analysis. Using these parallel acoustic and articulatory data from Cantonese and Georgian, we will be able to examine our communication ...

  5. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Directory of Open Access Journals (Sweden)

    Sid-Ahmed Selouani

    2009-01-01

    Full Text Available Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  6. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement of the Perceptual Evaluation of the Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  7. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we present the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities with a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  8. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Measurement of speech parameters in casual speech of dementia patients Roelant Adriaan Ossewaarde1,2, Roel Jonkers1, Fedor Jalvingh1,3, Roelien Bastiaanse1 1CLCG, University of Groningen (NL); 2HU University of Applied Sciences Utrecht (NL); 3St. Marienhospital - Vechta, Geriatric Clinic Vechta

  9. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
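    The share of acoustic energy above roughly 5 kHz discussed above can be quantified from a recording's spectrum. Below is a minimal sketch, assuming a single-channel signal and an FFT-based energy split at a 5 kHz cutoff; the function name and cutoff default are illustrative, not from the study.

```python
import numpy as np

def hf_energy_fraction(signal, fs, cutoff=5000.0):
    """Fraction of spectral energy above `cutoff` Hz (rfft-based sketch)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(spec[freqs > cutoff].sum() / spec.sum())

fs = 44100
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone: energy below the cutoff
high = np.sin(2 * np.pi * 8000 * t)   # 8 kHz tone: energy above the cutoff
print(round(hf_energy_fraction(low, fs), 3))   # ≈ 0.0
print(round(hf_energy_fraction(high, fs), 3))  # ≈ 1.0
```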

  10. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ...) has used its existing technology in phonetic speech recognition, audio signal processing, and multilingual language translation to design and demonstrate an advanced audio interface for speech...

  11. Spontaneous intraorbital hematoma: case report

    Directory of Open Access Journals (Sweden)

    Vinodan Paramanathan

    2010-12-01

    Full Text Available Vinodan Paramanathan, Ardalan Zolnourian; Queen's Hospital NHS Foundation Trust, Burton on Trent, Staffordshire DE13 0RB, UK. Abstract: Spontaneous intraorbital hematoma is an uncommon clinical entity seen in ophthalmology practice. It is poorly represented in the literature. Current evidence attributes it to orbital trauma, neoplasm, vascular malformations, acute sinusitis, and systemic abnormalities. A 65-year-old female presented with spontaneous intraorbital hematoma manifesting as severe ocular pains, eyelid edema, proptosis, and diplopia, without a history of trauma. Computed tomography demonstrated a fairly well defined extraconal lesion with opacification of the paranasal sinuses. The principal differential based on all findings was that of a spreading sinus infection and an extraconal tumor. An unprecedented finding of a spontaneous orbital hematoma was discovered when the patient was taken to theater. We discuss the rarity of this condition and its management. Keywords: hemorrhage, ophthalmology, spontaneous, intra-orbital, hematoma

  12. Teaching Speech Acts

    Directory of Open Access Journals (Sweden)

    Teaching Speech Acts

    2007-01-01

    Full Text Available In this paper I argue that pragmatic ability must become part of what we teach in the classroom if we are to realize the goals of communicative competence for our students. I review the research on pragmatics, especially those articles that point to the effectiveness of teaching pragmatics in an explicit manner, and those that posit methods for teaching. I also note two areas of scholarship that address classroom needs—the use of authentic data and appropriate assessment tools. The essay concludes with a summary of my own experience teaching speech acts in an advanced-level Portuguese class.

  13. Speech recognition employing biologically plausible receptive fields

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Bothe, Hans-Heinrich

    2011-01-01

    The main idea of the project is to build a widely speaker-independent, biologically motivated automatic speech recognition (ASR) system. The two main differences between our approach and current state-of-the-art ASRs are that i) the features used here are based on the responses of neuronlike...... Model-based adaptation procedures. Two databases are used, TI46 for discrete speech a subset of the TIMIT database collected from speakers belonging to the New York dialect region. Each of the selection of 10 sentences is uttered once by each of 35 speakers. The major differences between the two data...... sets initiate the development and comparison of two distinct ASRs within the project, which will be presented in the following. Employing a reduced sampling frequency and bandwidth of the signals, the ASR algorithm reaches and goes beyond recognition results that are known from humans....

  14. Glossolalic speech from a psycholinguistic perspective.

    Science.gov (United States)

    Osser, H A; Ostwald, P F; Macwhinney, B; Casey, R L

    1973-03-01

    This is a psycholinguistic study of glossolalia produced by four speakers in an experimental setting. Acoustical patterns (signal waveform, fundamental frequency, and amplitude changes) were compared. The frequency of occurrence of vowels and consonants was computed for the glossolalic samples and compared with General American English. The results showed that three of the four speakers had substantially higher vowel-to-consonant ratios than are found in English speech. Phonology, morphology, and syntax of the four glossolalic productions were analyzed. This revealed two distinct forms of glossolalia. One form, which we called "formulaic," tends towards stereotypy and repetitiousness. The second form, which we called "innovative," shows more novelty and unpredictability in the chaining of speech-like elements. These contrastive forms of glossolalia may relate to dimensions of linguistic creativity. Precise correlates with personality patterns, educational backgrounds, psychopathology, and other sociolinguistic variables remain to be explored.
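    A vowel-to-consonant ratio like the one compared against General American English above can be computed directly from a phoneme-level transcription. The sketch below uses a deliberately simplified, hypothetical vowel inventory and a made-up glossolalic string; a real analysis would use a full phonetic alphabet.

```python
def vowel_consonant_ratio(phonemes):
    """Vowel-to-consonant ratio of a phoneme sequence.

    `phonemes` is a list of phone labels; the vowel set here is a
    simplified, illustrative inventory, not a full phonetic alphabet.
    """
    vowels = {"a", "e", "i", "o", "u", "aa", "iy", "uw", "eh", "ah"}
    v = sum(1 for p in phonemes if p.lower() in vowels)
    c = len(phonemes) - v
    return v / c if c else float("inf")

# Made-up glossolalic syllable string: sh-a-n-d-a-l-a
print(vowel_consonant_ratio(["sh", "a", "n", "d", "a", "l", "a"]))  # 0.75
```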

  15. Spontaneous ischaemic stroke in dogs

    DEFF Research Database (Denmark)

    Gredal, Hanne Birgit; Skerritt, G. C.; Gideon, P.

    2013-01-01

    Translation of experimental stroke research into the clinical setting is often unsuccessful. Novel approaches are therefore desirable. Like humans, pet dogs suffer from spontaneous ischaemic stroke and may hence offer new ways of studying genuine stroke injury mechanisms.

  16. Spontaneity and international marketing performance

    OpenAIRE

    Souchon, Anne L.; Hughes, Paul; Farrell, Andrew M.; Nemkova, Ekaterina; Oliveira, Joao S.

    2016-01-01

    Purpose – The purpose of this paper is to ascertain how today’s international marketers can perform better on the global scene by harnessing spontaneity. Design/methodology/approach – The authors draw on contingency theory to develop a model of the spontaneity – international marketing performance relationship, and identify three potential m...

  17. Investigation of the Relationship between Hand Gestures and Speech in Adults Who Stutter

    Directory of Open Access Journals (Sweden)

    Ali Barikrou

    2008-12-01

    Full Text Available Objective: Gestures of the hands and arms have long been observed to accompany speech in spontaneous conversation. However, the way in which these two modes of expression are related in production is not yet fully understood. The present study therefore investigates the spontaneous gestures that accompany speech in adults who stutter, in comparison to fluent controls. Materials & Methods: In this cross-sectional, comparative study, ten adults who stutter were selected randomly from speech and language pathology clinics and compared with ten healthy persons as a control group, matched to the stutterers for sex, age, and education. A cartoon story-retelling task was used to elicit spontaneous gestures accompanying speech. Participants were asked to watch the animation carefully and then retell the storyline in as much detail as possible to a listener sitting across from them, and the narration was simultaneously video recorded. The recorded utterances and gestures were then analyzed. Statistical methods including the Kolmogorov-Smirnov test and the independent t-test were used for data analysis. Results: The results indicated that stutterers, compared with controls, on average use fewer iconic gestures in their narration (P=0.005). Stutterers also use fewer iconic gestures per utterance and per word (P=0.019). Furthermore, examination of gesture production during moments of dysfluency revealed that more than 70% of the gestures produced with stuttering were frozen or abandoned at the moment of dysfluency. Conclusion: It seems that gesture and speech have such an intricate and deep association that they show similar frequency and timing patterns and move in parallel with each other, such that a deficit in speech results in a deficiency in hand gesture.

  18. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... Some of the history of gradual infusion of the modulation spectrum concept into Automatic recognition of speech (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next ...

  19. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  20. Methods of Teaching Speech Recognition

    Science.gov (United States)

    Rader, Martha H.; Bailey, Glenn A.

    2010-01-01

    Objective: This article introduces the history and development of speech recognition, addresses its role in the business curriculum, outlines related national and state standards, describes instructional strategies, and discusses the assessment of student achievement in speech recognition classes. Methods: Research methods included a synthesis of…

  1. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Some of the history of gradual infusion of the modulation spectrum concept into Automatic recognition of speech (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next, the frequency ...

  2. Indirect speech acts in English

    OpenAIRE

    Василина, В. Н.

    2013-01-01

    The article deals with indirect speech acts in English-speaking discourse. Different approaches to their analysis and the reasons for their use are discussed. It is argued that the choice of the form of speech acts depends on the parameters of communicative partners.

  3. Speech Prosody in Cerebellar Ataxia

    Science.gov (United States)

    Casper, Maureen A.; Raphael, Lawrence J.; Harris, Katherine S.; Geibel, Jennifer M.

    2007-01-01

    Persons with cerebellar ataxia exhibit changes in physical coordination and speech and voice production. Previously, these alterations of speech and voice production were described primarily via perceptual coordinates. In this study, the spatial-temporal properties of syllable production were examined in 12 speakers, six of whom were healthy…

  4. Perceptual Learning of Interrupted Speech

    NARCIS (Netherlands)

    Benard, Michel Ruben; Başkent, Deniz

    2013-01-01

    The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps intelligibility of degraded speech in daily life. Two hypotheses were investigated.

  5. Speech fluency profile on different tasks for individuals with Parkinson's disease.

    Science.gov (United States)

    Juste, Fabiola Staróbole; Andrade, Claudia Regina Furquim de

    2017-07-20

    To characterize the speech fluency profile of patients with Parkinson's disease. Study participants were 40 individuals of both genders aged 40 to 80 years divided into 2 groups: Research Group - RG (20 individuals with diagnosis of Parkinson's disease) and Control Group - CG (20 individuals with no communication or neurological disorders). For all of the participants, three speech samples involving different tasks were collected: monologue, individual reading, and automatic speech. The RG presented a significantly larger number of speech disruptions, both stuttering-like and typical dysfluencies, and a higher percentage of speech discontinuity in the monologue and individual reading tasks compared with the CG. Both groups presented a reduced number of speech disruptions (stuttering-like and typical dysfluencies) in the automatic speech task, in which the groups performed similarly. Regarding speech rate, individuals in the RG produced fewer words and syllables per minute than those in the CG in all speech tasks. Participants in the RG presented altered speech fluency parameters compared with those of the CG; however, this change in fluency cannot be considered a stuttering disorder.
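    Measures such as speech rate (words and syllables per minute) and percentage of speech discontinuity can be derived from simple counts over a timed sample. Below is a minimal sketch with made-up numbers; the discontinuity definition used here (disruptions per 100 words) is an assumption for illustration, not necessarily the study's formula.

```python
def fluency_profile(n_words, n_syllables, n_disruptions, duration_s):
    """Basic fluency measures of a timed speech sample."""
    minutes = duration_s / 60.0
    return {
        "words_per_min": n_words / minutes,
        "syllables_per_min": n_syllables / minutes,
        # Assumed definition: disruptions per 100 words.
        "discontinuity_pct": 100.0 * n_disruptions / n_words,
    }

# Hypothetical 2-minute monologue sample.
profile = fluency_profile(n_words=180, n_syllables=300, n_disruptions=9, duration_s=120)
print(profile)
```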

  6. VOCAL DEVELOPMENT AS A MAIN CONDITION IN EARLY SPEECH AND LANGUAGE ACQUISITION

    Directory of Open Access Journals (Sweden)

    Marianne HOLM

    2005-06-01

    Full Text Available The objective of this research is the evident positive vocal development of prelingually deaf children who underwent cochlear implantation at an early age. The presented research compares the vocal speech expressions of three hearing-impaired children and two children with normal hearing, from 10 months to 5 years of age. Comparisons of the spontaneous vocal expressions were conducted by sonagraphic analyses. Awareness of one's own voice, as well as the voices of others, is essential for the child's continuous vocal development from crying to speech. Supra-segmental factors, such as rhythm, dynamics and melody, play a very important role in this development.

  7. Spontaneous cooperation for prosocials, but not for proselfs: Social value orientation moderates spontaneous cooperation behavior

    Science.gov (United States)

    Mischkowski, Dorothee; Glöckner, Andreas

    2016-01-01

    Cooperation is essential for the success of societies and there is an ongoing debate whether individuals have therefore developed a general spontaneous tendency to cooperate or not. Findings that cooperative behavior is related to shorter decision times provide support for the spontaneous cooperation effect, although contrary results have also been reported. We show that cooperative behavior is better described as person × situation interaction, in that there is a spontaneous cooperation effect for prosocial but not for proself persons. In three studies, one involving population representative samples from the US and Germany, we found that cooperation in a public good game is dependent on an interaction between individuals’ social value orientation and decision time. Increasing deliberation about the dilemma situation does not affect persons that are selfish to begin with, but it is related to decreasing cooperation for prosocial persons that gain positive utility from outcomes of others and score high on the related general personality trait honesty/humility. Our results demonstrate that the spontaneous cooperation hypothesis has to be qualified in that it is limited to persons with a specific personality and social values. Furthermore, they allow reconciling conflicting previous findings by identifying an important moderator for the effect. PMID:26876773

  8. Improved Vocabulary Production after Naming Therapy in Aphasia: Can Gains in Picture Naming Generalise to Connected Speech?

    Science.gov (United States)

    Conroy, Paul; Sage, Karen; Ralph, Matt Lambon

    2009-01-01

    Background: Naming accuracy for nouns and verbs in aphasia can vary across different elicitation contexts, for example, simple picture naming, composite picture description, narratives, and conversation. For some people with aphasia, naming may be more accurate to simple pictures as opposed to naming in spontaneous, connected speech; for others,…

  9. An Investigation of effective factors on nurses\\' speech errors

    Directory of Open Access Journals (Sweden)

    Maryam Tafaroji yeganeh

    2017-03-01

    Full Text Available Background: Speech errors are a topic of psycholinguistics. A speech error, or slip of the tongue, is a natural process that happens to everyone. The importance of this research lies in the sensitivity of nursing, where speech errors may interfere with the treatment of patients; unfortunately, no research had yet been done in this field. This research was conducted to study the factors (personality, stress, fatigue and insomnia) which cause speech errors among nurses of Ilam province. Materials and Methods: The sample of this descriptive-correlational study consists of 50 nurses working in Mustafa Khomeini Hospital of Ilam province, selected randomly. Data were collected using the Minnesota Multiphasic Personality Inventory, the NEO Five-Factor Inventory and the Expanded Nursing Stress Scale, and were analyzed using SPSS version 20 with descriptive, inferential and multivariate linear regression or bivariate statistical methods (significance level: p ≤ 0.05). Results: Of the nurses participating in the study, 30 (60%) were female and 19 (38%) were male. All three factors (personality type, stress and fatigue) had significant effects on the nurses' speech errors. Conclusion: Personality type, stress and fatigue significantly affect the speech errors of nurses.
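    A multivariate linear regression of speech-error counts on predictors such as stress, fatigue and personality scores, as described above, can be sketched with ordinary least squares. The data below are entirely fabricated for illustration (constructed to fit exactly); they are not the study's data, and the predictor names are assumptions.

```python
import numpy as np

# Fabricated scores for 6 nurses: [stress, fatigue, neuroticism].
X = np.array([[3.0, 2.0, 1.0],
              [4.0, 3.0, 2.0],
              [2.0, 1.0, 1.0],
              [5.0, 4.0, 3.0],
              [3.0, 3.0, 2.0],
              [4.0, 2.0, 2.0]])
# Error counts constructed as exactly 2*stress + 1*fatigue, for illustration.
y = np.array([8.0, 11.0, 5.0, 14.0, 9.0, 10.0])

A = np.column_stack([np.ones(len(X)), X])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(coef.round(2))        # [intercept, stress, fatigue, neuroticism] weights
print(round(float(r2), 4))
```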

  10. Perspectives on the role of the speech and language therapist in palliative care: An international survey.

    Science.gov (United States)

    O'Reilly, Aoife C; Walshe, Margaret

    2015-09-01

    Speech and language therapists can improve the quality of life of people receiving palliative care through the management of communication and swallowing difficulties (dysphagia). However, their role in this domain is poorly defined and little is understood about current international professional practice in this field. To examine how speech and language therapists perceive their role in the delivery of palliative care to clients, to discover current international speech and language therapist practices and to explore the similarities and differences in speech and language therapists' practice in palliative care internationally. This will inform professional clinical guidelines and practice in this area. Anonymous, non-experimental, cross-sectional survey design. Speech and language therapists working with adult and paediatric palliative care populations in the Republic of Ireland, the United Kingdom, the United States, Canada, Australia and New Zealand, where the speech and language therapist profession is well established. Purposive and snowball sampling were used to recruit participants internationally using gatekeepers. An online survey was disseminated using Survey Monkey (http://www.surveymonkey.com). A total of 322 speech and language therapists responded to the survey. Speech and language therapist practices in palliative care were similar across continents. Current speech and language therapist practices along with barriers and facilitators to practice were identified. The need for a speech and language therapist professional position paper on this topic was emphasised by respondents. Internationally, speech and language therapists believe they have a role in palliative care. The speech and language therapist respondents highlighted that this area of practice is under-resourced, under-acknowledged and poorly developed. They highlighted the need for additional research as well as specialist training and education for speech and language therapists and other

  11. Indonesian Text-To-Speech System Using Diphone Concatenative Synthesis

    Directory of Open Access Journals (Sweden)

    Sutarman

    2015-02-01

    Full Text Available In this paper, we describe the design and development of an Indonesian diphone database for concatenative synthesis, using segments of recorded speech to convert text to speech and save the result as an audio file such as WAV or MP3. Building the database involves several steps. First, the diphone database is developed: a list of sample words is created so that each target diphone appears, preferably, in the middle of a word, or otherwise at its beginning or end; the sample words are recorded and segmented; and the diphones are extracted with the tool Diphone Studio 1.3. Second, the system is developed in Microsoft Visual Delphi 6.0, including the conversion of input numbers, acronyms, words, and sentences into diphone representations. Two conversion processes are involved in the Indonesian text-to-speech system: one converts the input text into phonemes, and the other converts the phonemes into speech. The method used in this research is diphone concatenative synthesis, in which recorded sound segments are collected; every segment consists of one diphone (two phonemes). This synthesizer can produce speech with a high level of naturalness. The Indonesian text-to-speech system can differentiate special phonemes, as in ‘beda’ and ‘bedak’, but samples of other specific words must still be added to the system. The system can also handle texts with abbreviations, and there is a facility to add such words.
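As a hedged sketch of the concatenation idea described above (not the authors' Delphi implementation; the function name, sampling rate, and crossfade length are illustrative assumptions), joining recorded diphone waveforms with a short crossfade might look like:

```python
import numpy as np

def concatenate_diphones(segments, sr=16000, overlap_ms=10):
    """Join recorded diphone waveforms with a short linear crossfade
    to smooth the joins between consecutive segments."""
    overlap = int(sr * overlap_ms / 1000)
    out = segments[0].astype(np.float64)
    for seg in segments[1:]:
        seg = seg.astype(np.float64)
        fade = np.linspace(1.0, 0.0, overlap)
        # Crossfade: fade out the tail of the output, fade in the new segment.
        out[-overlap:] = out[-overlap:] * fade + seg[:overlap] * fade[::-1]
        out = np.concatenate([out, seg[overlap:]])
    return out
```

In a real system the segments would come from the segmented recordings prepared in the first step; here they are simply NumPy arrays.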

  12. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

    Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary serving as the sparse basis for the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that a sparse linear prediction dictionary can improve the reconstruction of speech signals compared with a discrete cosine transform (DCT) matrix.
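The OMP recovery step can be illustrated with a compact NumPy sketch. This is a toy setting, not the paper's: the frame length, measurement count, and sparsity are invented, and the signal is sparse in the canonical basis rather than in a learned K-SVD sparse linear prediction dictionary.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily select k columns (atoms)
    of Phi most correlated with the residual, refitting by least
    squares on the selected support at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy demo: a k-sparse "speech frame" sampled with a random Gaussian matrix.
rng = np.random.default_rng(0)
n, m, k = 256, 128, 4                            # frame length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                      # compressed measurements
x_hat = omp(Phi, y, k)                           # sparse recovery
```

With this many measurements relative to the sparsity, OMP recovers the support exactly with overwhelming probability; real speech frames are only approximately sparse, which is where the learned dictionary matters.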

  13. School-Based Speech-Language Pathologists' Use of iPads

    Science.gov (United States)

    Romane, Garvin Philippe

    2017-01-01

    This study explored school-based speech-language pathologists' (SLPs') use of iPads and apps for speech and language instruction, specifically for articulation, language, and vocabulary goals. A mostly quantitative-based survey was administered to approximately 2,800 SLPs in a K-12 setting; the final sample consisted of 189 licensed SLPs. Overall,…

  14. Perceptual Measures of Speech from Individuals with Parkinson's Disease and Multiple Sclerosis: Intelligibility and beyond

    Science.gov (United States)

    Sussman, Joan E.; Tjaden, Kris

    2012-01-01

    Purpose: The primary purpose of this study was to compare percent correct word and sentence intelligibility scores for individuals with multiple sclerosis (MS) and Parkinson's disease (PD) with scaled estimates of speech severity obtained for a reading passage. Method: Speech samples for 78 talkers were judged, including 30 speakers with MS, 16…

  15. Speech Sound Disorders in Preschool Children: Correspondence between Clinical Diagnosis and Teacher and Parent Report

    Science.gov (United States)

    Harrison, Linda J.; McLeod, Sharynne; McAllister, Lindy; McCormack, Jane

    2017-01-01

    This study sought to assess the level of correspondence between parent and teacher report of concern about young children's speech and specialist assessment of speech sound disorders (SSD). A sample of 157 children aged 4-5 years was recruited in preschools and long day care centres in Victoria and New South Wales (NSW). SSD was assessed…

  16. Production Variability and Single Word Intelligibility in Aphasia and Apraxia of Speech

    Science.gov (United States)

    Haley, Katarina L.; Martin, Gwenyth

    2011-01-01

    This study was designed to estimate test-retest reliability of orthographic speech intelligibility testing in speakers with aphasia and AOS and to examine its relationship to the consistency of speaker and listener responses. Monosyllabic single word speech samples were recorded from 13 speakers with coexisting aphasia and AOS. These words were…

  17. Effects of Culture and Gender in Comprehension of Speech Acts of Indirect Request

    Science.gov (United States)

    Shams, Rabe'a; Afghari, Akbar

    2011-01-01

    This study investigates the comprehension of indirect request speech act used by Iranian people in daily communication. The study is an attempt to find out whether different cultural backgrounds and the gender of the speakers affect the comprehension of the indirect request of speech act. The sample includes thirty males and females in Gachsaran…

  18. Speech and language intervention in bilinguals

    Directory of Open Access Journals (Sweden)

    Eliane Ramos

    2011-12-01

    Full Text Available Increasingly, speech and language pathologists (SLPs) around the world are faced with the unique set of issues presented by their bilingual clients. Professional associations in several countries have issued recommendations for assessing and treating bilingual populations. In children, most studies have focused on intervention for language and phonology/articulation impairments, and very few focus on stuttering. In general, studies of language intervention tend to agree that intervention in the first language (L1) either increases performance in the second language (L2) or does not hinder it. In bilingual adults, the question of monolingual versus bilingual intervention is especially relevant in cases of aphasia; dysarthria in bilinguals has barely been addressed. Most studies of cross-linguistic effects in bilingual aphasics have focused on lexical retrieval training. Even though a majority of studies have disclosed cross-linguistic generalization from one language to the other, some methodological weaknesses are evident. It is concluded that although speech and language intervention in bilinguals represents a most important clinical area in speech-language pathology, much more research using larger samples and controlling for potentially confounding variables is clearly required.

  19. A rough set approach to speech recognition

    Science.gov (United States)

    Zhao, Zhigang

    1992-09-01

    Speech recognition is a very difficult classification problem due to the variations in loudness, speed, and tone of voice. In the last 40 years, many methodologies have been developed to solve this problem, but most lack learning ability and depend fully on the knowledge of human experts. Systems of this kind are hard to develop and difficult to maintain and upgrade. A study was conducted to investigate the feasibility of using a machine learning approach in solving speech recognition problems. The system is based on rough set theory. It first generates a set of decision rules using a set of reference words called training samples, and then uses the decision rules to recognize new words. The main feature of this system is that, under the supervision of human experts, the machine learns and applies knowledge on its own to the designated tasks. The main advantages of this system over a traditional system are its simplicity and adaptiveness, which suggest that it may have significant potential in practical applications of computer speech recognition. Furthermore, the studies presented demonstrate the potential application of rough-set based learning systems in solving other important pattern classification problems, such as character recognition, system fault detection, and trainable robotic control.
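As an illustrative sketch of the rule-generation idea described above (certain rules come from the lower approximation: condition classes whose members all agree on the decision), under invented toy features rather than real acoustic data:

```python
from collections import defaultdict

def rough_rules(samples):
    """Derive certain decision rules from labelled training samples:
    a condition (feature tuple) yields a rule only if every sample
    with that condition agrees on the label (lower approximation)."""
    by_cond = defaultdict(set)
    for features, label in samples:
        by_cond[features].add(label)
    # Keep only consistent condition classes; inconsistent ones fall
    # into the boundary region and produce no certain rule.
    return {cond: labels.pop() for cond, labels in by_cond.items()
            if len(labels) == 1}

# Toy (energy, pitch) feature vectors -> word label.
train = [(("high", "rising"), "yes"), (("high", "rising"), "yes"),
         (("low", "falling"), "no"), (("low", "rising"), "yes"),
         (("low", "rising"), "no")]  # inconsistent -> no certain rule
rules = rough_rules(train)
```

New words would then be classified by matching their features against the learned rules; the boundary region would need possible (upper-approximation) rules, which this sketch omits.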

  20. A case of spontaneous ventriculocisternostomy

    International Nuclear Information System (INIS)

    Yamane, Kanji; Yoshimoto, Hisanori; Harada, Kiyoshi; Uozumi, Tohru; Kuwabara, Satoshi.

    1983-01-01

    The authors experienced a case of spontaneous ventriculocisternostomy diagnosed by CT scanning with metrizamide and Conray. The patient was a 23-year-old male who had been in good health until one month before admission, when he began to have headache and tinnitus. He noticed that bilateral visual acuity had decreased about one week before admission, and vomiting appeared two days before admission. He was admitted to our hospital because of bilateral papilledema and remarkable hydrocephalus diagnosed by CT scan. On admission, no abnormal neurological signs except bilateral papilledema were noted. Right ventricular drainage was performed immediately. Ventricular pressure was over 300 mmH2O and the CSF was clear. PVG and PEG disclosed another cavity behind the third ventricle, which communicated with the third ventricle, and occlusion of the aqueduct of Sylvius. Metrizamide CT and Conray CT showed a communication between this cavity and the quadrigeminal and supracerebellar cisterns. On these neuroradiological findings, a diagnosis of obstructive hydrocephalus due to benign aqueduct stenosis accompanied by spontaneous ventriculocisternostomy was made. Spontaneous ventriculocisternostomy has been noted to arrest hydrocephalus, but in our case spontaneous regression of symptoms did not appear. With surgical ventriculocisternostomy (the methods of Torkildsen, Dandy, or Scarff), arrest of hydrocephalus is achieved in about 50 to 70 per cent of cases, the same rate as for spontaneous ventriculocisternostomy. It is concluded that a VP or VA shunt is a better treatment for obstructive hydrocephalus than the various kinds of surgical ventriculocisternostomy. (J.P.N.)

  1. Optical antenna enhanced spontaneous emission.

    Science.gov (United States)

    Eggleston, Michael S; Messer, Kevin; Zhang, Liming; Yablonovitch, Eli; Wu, Ming C

    2015-02-10

    Atoms and molecules are too small to act as efficient antennas for their own emission wavelengths. By providing an external optical antenna, the balance can be shifted; spontaneous emission could become faster than stimulated emission, which is handicapped by practically achievable pump intensities. In our experiments, InGaAsP nanorods emitting at ∼ 200 THz optical frequency show a spontaneous emission intensity enhancement of 35 ×, corresponding to a spontaneous emission rate speedup of ∼ 115 ×, for antenna gap spacing d = 40 nm. Classical antenna theory predicts ∼ 2,500 × spontaneous emission speedup at d ∼ 10 nm, proportional to 1/d². Unfortunately, at such small gaps the antenna efficiency drops below 50%, owing to optical spreading resistance, exacerbated by the anomalous skin effect (electron surface collisions). Quantum dipole oscillations in the emitter excited state produce an optical ac equivalent circuit current, I₀ = qω|x₀|/d, feeding the antenna-enhanced spontaneous emission, where q|x₀| is the dipole matrix element. Despite the quantum-mechanical origin of the drive current, antenna theory makes no reference to the Purcell effect nor to local density of states models. Moreover, plasmonic effects are minor at 200 THz, producing only a small shift of antenna resonance frequency.

  2. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  3. Coevolution of Human Speech and Trade

    NARCIS (Netherlands)

    Horan, R.D.; Bulte, E.H.; Shogren, J.F.

    2008-01-01

    We propose a paleoeconomic coevolutionary explanation for the origin of speech in modern humans. The coevolutionary process, in which trade facilitates speech and speech facilitates trade, gives rise to multiple stable trajectories. While a 'trade-speech' equilibrium is not an inevitable outcome for

  4. The "Checkers" Speech and Televised Political Communication.

    Science.gov (United States)

    Flaningam, Carl

    Richard Nixon's 1952 "Checkers" speech was an innovative use of television for political communication. Like television news itself, the campaign fund crisis behind the speech can be thought of in the same terms as other television melodrama, with the speech serving as its climactic episode. The speech adapted well to television because…

  5. Predicting masking release of lateralized speech

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; MacDonald, Ewen; Dau, Torsten

    2016-01-01

    Locsei et al. (2015) [Speech in Noise Workshop, Copenhagen, 46] measured speech reception thresholds (SRTs) in anechoic conditions where the target speech and the maskers were lateralized using interaural time delays. The maskers were speech-shaped noise (SSN) and reversed babble with 2, 4, or 8...

  6. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    A KidsHealth overview for parents of speech-language therapy, covering feeding difficulties such as coughing, gagging, and refusing foods, and the specialists who provide therapy: speech-language pathologists (SLPs).

  7. Neural and Behavioral Mechanisms of Clear Speech

    Science.gov (United States)

    Luque, Jenna Silver

    2017-01-01

    Clear speech is a speaking style that has been shown to improve intelligibility in adverse listening conditions, for various listener and talker populations. Clear-speech phonetic enhancements include a slowed speech rate, expanded vowel space, and expanded pitch range. Although clear-speech phonetic enhancements have been demonstrated across a…

  8. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2000-10-19

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  9. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2004-04-20


  10. Theater, Speech, Light

    Directory of Open Access Journals (Sweden)

    Primož Vitez

    2011-07-01

    Full Text Available This paper considers a medium as a substantial translator: an intermediary between the producers and receivers of a communicational act. A medium is a material support to the spiritual potential of human sources. If the medium is a support to meaning, then the relations between different media can be interpreted as a space for making sense of these meanings, a generator of sense: it means that the interaction of substances creates an intermedial space that conceives of a contextualization of specific meaningful elements in order to combine them into the sense of a communicational intervention. The theater itself is multimedia. A theatrical event is a communicational act based on a combination of several autonomous structures: text, scenography, light design, sound, directing, literary interpretation, speech, and, of course, the one that contains all of these: the actor in a human body. The actor is a physical and symbolic, anatomic, and emblematic figure in the synesthetic theatrical act because he reunites in his body all the essential principles and components of theater itself. The actor is an audio-visual being, made of kinetic energy, speech, and human spirit. The actor’s body, as a source, instrument, and goal of the theater, becomes an intersection of sound and light. However, theater as intermedial art is no intermediate practice; it must be seen as interposing bodies between conceivers and receivers, between authors and auditors. The body is not self-evident; the body in contemporary art forms is being redefined as a privilege. The art needs bodily dimensions to explore the medial qualities of substances: because it is alive, it returns to studying biology. The fact that theater is an archaic art form is also the purest promise of its future.

  11. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  12. Computational neuroanatomy of speech production.

    Science.gov (United States)

    Hickok, Gregory

    2012-01-05

    Speech production has been studied predominantly from within two traditions, psycholinguistics and motor control. These traditions have rarely interacted, and the resulting chasm between these approaches seems to reflect a level of analysis difference: whereas motor control is concerned with lower-level articulatory control, psycholinguistics focuses on higher-level linguistic processing. However, closer examination of both approaches reveals a substantial convergence of ideas. The goal of this article is to integrate psycholinguistic and motor control approaches to speech production. The result of this synthesis is a neuroanatomically grounded, hierarchical state feedback control model of speech production.

  13. Visual speech influences speech perception immediately but not automatically.

    Science.gov (United States)

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  14. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper describes the interface between the machine translation and speech synthesis components in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for integrating speech recognition and machine translation have been proposed, but the speech synthesis component has received little evaluation. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis component, the machine translation component, and their integration. We implement a hybrid machine translation approach (a combination of rule-based and statistical machine translation) and a concatenative, syllable-based speech synthesis technique. To retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
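The three-module chain can be sketched as a function composition. Everything below is a toy stand-in: the component functions and the sample "translation" are invented for illustration and bear no relation to the authors' hybrid MT or AANN prosody model.

```python
def speech_to_speech(audio, asr, mt_rule, mt_stat, tts):
    """Chain the three modules: ASR -> MT -> TTS. The rule-based
    translator is tried first, falling back to the statistical one
    (a toy stand-in for the hybrid MT combination)."""
    text = asr(audio)
    translated = mt_rule(text) or mt_stat(text)
    return tts(translated)

# Toy components (placeholders, not a real English-Tamil system):
asr = lambda audio: "hello world"
mt_rule = lambda text: {"hello world": "vanakkam ulagam"}.get(text)
mt_stat = lambda text: "<untranslated>"
tts = lambda text: f"[waveform for: {text}]"
```

The point of the sketch is the interface: the evaluation in the abstract varies each stage independently, which this decomposition makes straightforward.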

  15. Spontaneous subcapsular and perirenal hemorrhage

    International Nuclear Information System (INIS)

    Fuster, M.J.; Saez, J.; Perez-Paya, F.J.; Fernandez, F.

    1997-01-01

    To assess the role of CT in the etiologic diagnosis of spontaneous subcapsular and perirenal hemorrhage. The CT findings are described in 13 patients presenting with subcapsular and perirenal hemorrhage; patients in whom the bleeding was not spontaneous were excluded. Surgical confirmation was obtained in nine cases. In 11 of the 13 cases (84.6%), involving five adenocarcinomas, five angiomyolipomas, two complicated cysts and one case of panarteritis nodosa, CT disclosed the underlying pathology. In two cases (15.4%), it only revealed the extension of the hematoma but gave no clue to its origin. CT is the technique of choice when spontaneous subcapsular and perirenal hemorrhage is suspected since, in most cases, it reveals the underlying pathology. (Author)

  16. Audio-visual speech perception: a developmental ERP investigation

    Science.gov (United States)

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  17. Discourse Analysis of the Political Speeches of the Ousted Arab Presidents during the Arab Spring Revolution Using Halliday and Hasan's Framework of Cohesion

    Science.gov (United States)

    Al-Majali, Wala'

    2015-01-01

    This study is designed to explore the salient linguistic features of the political speeches of the ousted Arab presidents during the Arab Spring Revolution. The sample of the study is composed of seven political speeches delivered by the ousted Arab presidents during the period from December 2010 to December 2012. Three speeches were delivered by…

  18. The impact of language co-activation on L1 and L2 speech fluency.

    Science.gov (United States)

    Bergmann, Christopher; Sprenger, Simone A; Schmid, Monika S

    2015-10-01

    Fluent speech depends on the availability of well-established linguistic knowledge and routines for speech planning and articulation. A lack of speech fluency in late second-language (L2) learners may point to a deficiency of these representations due to incomplete acquisition. Experiments on bilingual language processing have shown, however, that there are strong reasons to believe that multilingual speakers experience co-activation of the languages they speak. We have studied to what degree language co-activation affects fluency in the speech of bilinguals, comparing a monolingual German control group with two bilingual groups: 1) first-language (L1) attriters, who had fully acquired German before emigrating to an L2 English environment, and 2) immersed L2 learners of German (L1: English). We analysed the temporal fluency and the incidence of disfluency markers (pauses, repetitions and self-corrections) in spontaneous film retellings. Our findings show that learners speak more slowly than controls and attriters. Also, on each count, the speech of at least one of the bilingual groups contains more disfluency markers than the retellings of the control group. Generally speaking, both bilingual groups, learners and attriters, are equally (dis)fluent and significantly more disfluent than the monolingual speakers. Given that the L1 attriters are unaffected by incomplete acquisition, we interpret these findings as evidence for language competition during speech production. Copyright © 2015. Published by Elsevier B.V.
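A minimal sketch of counting the disfluency markers named above; the annotation conventions `(pause)` and `(corr)` are assumptions for illustration, not the authors' coding scheme, and real repetition detection would operate on time-aligned transcripts:

```python
def disfluency_counts(transcript):
    """Count simple disfluency markers in an annotated transcript,
    where pauses are marked (pause) and self-corrections (corr);
    repetitions are detected as immediately repeated words."""
    tokens = transcript.lower().split()
    words = [t for t in tokens if not t.startswith("(")]
    return {
        "pauses": tokens.count("(pause)"),
        "self_corrections": tokens.count("(corr)"),
        "repetitions": sum(a == b for a, b in zip(words, words[1:])),
        "words": len(words),
    }

sample = "the the film (pause) starts with a (corr) with a man"
counts = disfluency_counts(sample)
```

Dividing the word count by the retelling duration would give the temporal fluency (speech rate) measure the study also analyses.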

  19. Silent reading of direct versus indirect speech activates voice-selective areas in the auditory cortex.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2011-10-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, for silent reading, the representational consequences of this distinction are still unclear. Although many of us share the intuition of an "inner voice," particularly during silent reading of direct speech statements in text, there has been little direct empirical confirmation of this experience so far. Combining fMRI with eye tracking in human volunteers, we show that silent reading of direct versus indirect speech engenders differential brain activation in voice-selective areas of the auditory cortex. This suggests that readers are indeed more likely to engage in perceptual simulations (or spontaneous imagery) of the reported speaker's voice when reading direct speech as opposed to meaning-equivalent indirect speech statements as part of a more vivid representation of the former. Our results may be interpreted in line with embodied cognition and form a starting point for more sophisticated interdisciplinary research on the nature of auditory mental simulation during reading.

  20. Spontaneous isolated celiac artery dissection

    Directory of Open Access Journals (Sweden)

    Tuba Cimilli Ozturk

    2011-01-01

    Full Text Available Dyspepsia with mild, stabbing epigastric discomfort and no history of trauma is a very common presentation that emergency physicians see in daily practice. Vascular emergencies, mostly aortic dissection and aneurysm, are always included in the differential diagnosis when symptoms persist. Spontaneously occurring isolated celiac artery dissection is a very rare diagnosis. Branch vessel involvement is generally observed, and patients show various clinical signs and symptoms according to the involved branch vessel. Here we present a case of spontaneous isolated celiac artery dissection, without any branch vessel involvement or visceral damage, detected by computed tomography scans taken on admission.

  1. Spontaneous waves in muscle fibres

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, Stefan; Kruse, Karsten [Department of Theoretical Physics, Saarland University, 66041 Saarbruecken (Germany); Max Planck Institute for the Physics of Complex Systems, Noethnitzer Street 38, 01187 Dresden (Germany)

    2007-11-15

    Mechanical oscillations are important for many cellular processes, e.g. the beating of cilia and flagella or the sensation of sound by hair cells. These dynamic states originate from spontaneous oscillations of molecular motors. A particularly clear example of such oscillations has been observed in muscle fibers under non-physiological conditions. In that case, motor oscillations lead to contraction waves along the fiber. By a macroscopic analysis of muscle fiber dynamics we find that the spontaneous waves involve non-hydrodynamic modes. A simple microscopic model of sarcomere dynamics highlights mechanical aspects of the motor dynamics and fits with the experimental observations.

  2. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A characterization of verb use in Turkish agrammatic narrative speech.

    Science.gov (United States)

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.

  4. Analysis and removing noise from speech using wavelet transform

    Science.gov (United States)

    Tomala, Karel; Voznak, Miroslav; Partila, Pavol; Rezac, Filip; Safarik, Jakub

    2013-05-01

    The paper discusses the use of Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT) wavelet in removing noise from voice samples and evaluation of its impact on speech quality. One significant part of Quality of Service (QoS) in communication technology is the speech quality assessment. However, this part is seriously overlooked as telecommunication providers often focus on increasing network capacity, expansion of services offered and their enforcement in the market. Among the fundamental factors affecting the transmission properties of the communication chain is noise, either at the transmitter or the receiver side. A wavelet transform (WT) is a modern tool for signal processing. One of the most significant areas in which wavelet transforms are used is applications designed to suppress noise in signals. To remove noise from the voice sample in our experiment, we used the reference segment of the voice which was distorted by Gaussian white noise. An evaluation of the impact on speech quality was carried out by an intrusive objective algorithm Perceptual Evaluation of Speech Quality (PESQ). DWT and SWT transformation was applied to voice samples that were devalued by Gaussian white noise. Afterwards, we determined the effectiveness of DWT and SWT by means of objective algorithm PESQ. The decisive criterion for determining the quality of a voice sample once the noise had been removed was Mean Opinion Score (MOS) which we obtained in PESQ. The contribution of this work lies in the evaluation of efficiency of wavelet transformation to suppress noise in voice samples.
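The thresholding idea behind wavelet denoising can be illustrated with a toy computation. The sketch below is a minimal NumPy illustration, not the study's procedure: it uses a hand-rolled single-wavelet (Haar) multi-level decomposition with universal soft thresholding on a synthetic tone, and reports SNR improvement in place of the PESQ/MOS evaluation the paper uses.

```python
import numpy as np

def haar_dwt(x):
    # single-level Haar analysis: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # single-level Haar synthesis (exact inverse of haar_dwt)
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=3):
    # multi-level decomposition; keep the coarse approximation untouched
    coeffs, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    # universal soft threshold; noise sigma estimated from finest details
    sigma = np.median(np.abs(coeffs[0])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(x.size))
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - t, 0.0) for d in coeffs]
    for d in reversed(coeffs):
        a = haar_idwt(a, d)
    return a

def snr_db(clean, est):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))

rng = np.random.default_rng(0)
n = 4096
t_axis = np.arange(n) / 8000.0
clean = np.sin(2 * np.pi * 200 * t_axis) * np.hanning(n)  # toy "voiced" segment
noisy = clean + 0.3 * rng.standard_normal(n)              # additive white Gaussian noise
den = denoise(noisy)
# the denoised SNR should exceed the noisy SNR
print(snr_db(clean, noisy), snr_db(clean, den))
```

The tone sits in the low-frequency approximation band, so thresholding the detail coefficients removes most of the broadband noise while leaving the signal largely intact; real DWT/SWT denoisers follow the same decompose-threshold-reconstruct pattern with better wavelets and quality metrics.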

  5. Spatial localization of speech segments

    DEFF Research Database (Denmark)

    Karlsen, Brian Lykkegaard

    1999-01-01

    Much is known about human localization of simple stimuli like sinusoids, clicks, broadband noise and narrowband noise in quiet. Less is known about human localization in noise. Even less is known about localization of speech and very few previous studies have reported data from localization...... of speech in noise. This study attempts to answer the question: ``Are there certain features of speech which have an impact on the human ability to determine the spatial location of a speaker in the horizontal plane under adverse noise conditions?''. The study consists of an extensive literature survey...... the task of the experiment. The psychoacoustical experiment used naturally-spoken Danish consonant-vowel combinations as targets presented in diffuse speech-shaped noise at a peak SNR of -10 dB. The subjects were normal hearing persons. The experiment took place in an anechoic chamber where eight...

  6. Comparison of speech and language therapy techniques for speech problems in Parkinson's disease

    OpenAIRE

    Herd, CP; Tomlinson, CL; Deane, KHO; Brady, MC; Smith, CH; Sackley, CM; Clarke, CE

    2012-01-01

    Patients with Parkinson's disease commonly suffer from speech and voice difficulties such as impaired articulation and reduced loudness. Speech and language therapy (SLT) aims to improve the intelligibility of speech with behavioural treatment techniques or instrumental aids.

  7. Microchimerism after induced or spontaneous abortion.

    Science.gov (United States)

    Sato, Tomoko; Fujimori, Keiya; Sato, Akira; Ohto, Hitoshi

    2008-09-01

    To investigate fetomaternal microchimerism in women with induced abortion or spontaneous pregnancy loss. Peripheral blood samples were obtained from 76 healthy women who underwent dilation and curettage in the first trimester but had never had an abortion or male delivery before. Samples were collected at three time points: just before, 7 days after, and 30 days after abortion. Y chromosome-specific, nested polymerase chain reaction targeting the sex-determining region of Y (SRY) was used to test DNA extracted from buffy coat cells. DNA was also extracted from the chorion to determine sex. The sensitivity of our assay allowed detection of approximately one male cell in 100,000 female cells. Thirty-six male and 40 female chorions were obtained. Male DNA was found in 52.8% of women who had a male chorion before abortion, decreasing to 5.6% at 7 days after abortion. At 30 days after abortion, no male DNA was detected. Male DNA was never detected at any point from women with a female chorion. Fetal cells in the maternal circulation are undetectable 30 days after induced abortion or spontaneous pregnancy loss. Fetal cells may be harbored in maternal organs.

  8. Human Papillomavirus Infection as a Possible Cause of Spontaneous Abortion and Spontaneous Preterm Delivery

    DEFF Research Database (Denmark)

    Ambühl, Lea Maria Margareta; Baandrup, Ulrik; Dybkær, Karen

    2016-01-01

    , and 10.9% (95% CI; 10.1–11.7) for umbilical cord blood. Summary estimates for HPV prevalence of spontaneous abortions and spontaneous preterm deliveries, in cervix (spontaneous abortions: 24.5%, and preterm deliveries: 47%, resp.) and placenta (spontaneous abortions: 24.9%, and preterm deliveries: 50......%, resp.), were identified to be higher compared to normal full-term pregnancies (P spontaneous abortion, spontaneous preterm...

  9. Analysis of the Roles and the Dynamics of Breathy and Whispery Voice Qualities in Dialogue Speech

    OpenAIRE

    Norihiro Hagita; Carlos Toshinori Ishi; Hiroshi Ishiguro

    2010-01-01

    Breathy and whispery voices are nonmodal phonations produced by an air escape through the glottis and may carry important linguistic or paralinguistic information (intentions, attitudes, and emotions), depending on the language. Analyses on spontaneous dialogue speech utterances of several speakers show that breathy and whispery voices are related with the expression of a variety of emotion- or attitude-related paralinguistic information. Potential acoustic parameters for characterizing breat...

  10. Censored: Whistleblowers and impossible speech

    OpenAIRE

    Kenny, Kate

    2017-01-01

    What happens to a person who speaks out about corruption in their organization, and finds themselves excluded from their profession? In this article, I argue that whistleblowers experience exclusions because they have engaged in ‘impossible speech’, that is, a speech act considered to be unacceptable or illegitimate. Drawing on Butler’s theories of recognition and censorship, I show how norms of acceptable speech working through recruitment practices, alongside the actions of colleagues, can ...

  11. Identifying Deceptive Speech Across Cultures

    Science.gov (United States)

    2016-06-25

    collection of deceptive and non-deceptive speech recorded from interviews between native speakers of Mandarin and of English instructed to answer ... and are currently completing the use of this data to

  12. Semi-spontaneous oral text production: measurements in clinical practice.

    Science.gov (United States)

    Lind, Marianne; Kristoffersen, Kristian Emil; Moen, Inger; Simonsen, Hanne Gram

    2009-12-01

    Functionally relevant assessment of the language production of speakers with aphasia should include assessment of connected speech production. Despite the ecological validity of everyday conversations, more controlled and monological types of texts may be easier to obtain and analyse in clinical practice. This article discusses some simple measurements for the analysis of semi-spontaneous oral text production by speakers with aphasia. Specifically, the measurements are related to the production of verbs and nouns, and the realization of different sentence types. The proposed measurements should be clinically relevant, easily applicable, and linguistically meaningful. The measurements have been applied to oral descriptions of the 'Cookie Theft' picture by eight monolingual Norwegian speakers, four with an anomic type of aphasia and four without any type of language impairment. Despite individual differences in both the clinical and the non-clinical group, most of the measurements seem to distinguish between speakers with and without aphasia.

  13. Look Who’s Talking NOW! Parentese Speech, Social Context, and Language Development Across Time

    Directory of Open Access Journals (Sweden)

    Nairán Ramírez-Esparza

    2017-06-01

    Full Text Available In previous studies, we found that the social interactions infants experience in their everyday lives at 11- and 14-months of age affect language ability at 24 months of age. These studies investigated relationships between the speech style (i.e., parentese speech vs. standard speech) and social context [i.e., one-on-one (1:1) vs. group] of language input in infancy and later speech development (i.e., at 24 months of age), controlling for socioeconomic status (SES). Results showed that the amount of exposure to parentese speech-1:1 in infancy was related to productive vocabulary at 24 months. The general goal of the present study was to investigate changes in (1) the pattern of social interactions between caregivers and their children from infancy to childhood and (2) relationships among speech style, social context, and language learning across time. Our study sample consisted of 30 participants from the previously published infant studies, evaluated at 33 months of age. Social interactions were assessed at home using digital first-person perspective recordings of the auditory environment. We found that caregivers use less parentese speech-1:1, and more standard speech-1:1, as their children get older. Furthermore, we found that the effects of parentese speech-1:1 in infancy on later language development at 24 months persist at 33 months of age. Finally, we found that exposure to standard speech-1:1 in childhood was the only social interaction that related to concurrent word production/use. Mediation analyses showed that standard speech-1:1 in childhood fully mediated the effects of parentese speech-1:1 in infancy on language development in childhood, controlling for SES. This study demonstrates that engaging in one-on-one interactions in infancy and later in life has important implications for language development.

  14. Spontaneous emission by moving atoms

    International Nuclear Information System (INIS)

    Meystre, P.; Wilkens, M.

    1994-01-01

    It is well known that spontaneous emission is not an intrinsic atomic property, but rather results from the coupling of the atom to the vacuum modes of the electromagnetic field. As such, it can be modified by tailoring the electromagnetic environment into which the atom can radiate. This was already realized by Purcell, who noted that the spontaneous emission rate can be enhanced if the atom placed inside a cavity is resonant with one of the cavity modes, and by Kleppner, who discussed the opposite case of inhibited spontaneous emission. It has also been recognized that spontaneous emission need not be an irreversible process. Indeed, a system consisting of a single atom coupled to a single mode of the electromagnetic field undergoes a periodic exchange of excitation between the atom and the field. This periodic exchange remains dominant as long as the strength of the coupling between the atom and a cavity mode is itself dominant. 23 refs., 6 figs

  15. Spontaneous Development of Moral Concepts

    Science.gov (United States)

    Siegal, M.

    1975-01-01

    Moral competence is more difficult to attain than scientific competence. Since language comprehension plays a central role in conceptual development, and moral language is difficult to learn, there is a common deficiency in moral conceptual development. This suggests a theory of non-spontaneous solutions to moral problems. (Author/MS)

  16. Shell theorem for spontaneous emission

    DEFF Research Database (Denmark)

    Kristensen, Philip Trøst; Mortensen, Jakob Egeberg; Lodahl, Peter

    2013-01-01

    and therefore is given exactly by the dipole approximation theory. This surprising result is a spontaneous emission counterpart to the shell theorems of classical mechanics and electrostatics and provides insights into the physics of mesoscopic emitters as well as great simplifications in practical calculations....

  17. Prediction of Spontaneous Preterm Birth

    NARCIS (Netherlands)

    Dijkstra, Karolien

    2002-01-01

    Preterm birth is a leading cause of neonatal morbidity and mortality. It is a major goal in obstetrics to lower the incidence of spontaneous preterm birth (SPB) and related neonatal morbidity and mortality. One of the principal objectives is to discover early markers that would allow us to identify

  18. EAMJ Dec. Spontaneous.indd

    African Journals Online (AJOL)

    2008-12-12

    surgical abortion at one month gestation without any complication. The second pregnancy, which was a year prior, resulted in a spontaneous miscarriage at two months followed by evacuation of retained products of conception with no post abortion complications. Antibiotics were taken following both.

  19. Spontaneous fission of superheavy nuclei

    Indian Academy of Sciences (India)

    the Yukawa-plus-exponential potential. The microscopic shell and pairing corrections are obtained using the Strutinsky and BCS approaches and the cranking formulae yield the inertia tensor. Finally, the WKB method is used to calculate penetrabilities and spontaneous fission half-lives. Calculations are performed for the ...

  20. Externalizing the private experience of pain: a role for co-speech gestures in pain communication?

    Science.gov (United States)

    Rowbotham, Samantha; Lloyd, Donna M; Holler, Judith; Wearden, Alison

    2015-01-01

    Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced "co-speech hand gestures" may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication.

  1. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 
The

  2. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based on this model.

  3. Oral motor deficits in speech-impaired children with autism.

    Science.gov (United States)

    Belmonte, Matthew K; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha

    2013-01-01

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual.

  4. Oral Motor Deficits in Speech-Impaired Children with Autism

    Directory of Open Access Journals (Sweden)

    Matthew K Belmonte

    2013-07-01

    Full Text Available Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive versus expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. 
Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual.

  5. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech describes approaches to improve automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first ranked Arabic dialect in terms of number of speakers, and a high quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...
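The mean-adaptation idea behind MLLR can be illustrated with a toy computation: a single global affine transform W is applied to extended mean vectors [mu; 1], and, under the simplifying assumption of identity covariances, its maximum-likelihood estimate reduces to a least-squares fit. The sketch below uses synthetic data standing in for real MSA acoustic-model means and ECA adaptation statistics; it is an illustration of the principle, not the estimation procedure from the book.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_states = 4, 10
means = rng.standard_normal((n_states, dim))          # toy "MSA" Gaussian means
# assumed dialect shift: scale plus bias (hypothetical, for illustration only)
true_W = np.hstack([np.eye(dim) * 1.1, 0.3 * np.ones((dim, 1))])

# one adaptation-frame average per state, generated from the shifted means
ext = np.hstack([means, np.ones((n_states, 1))])      # extended means [mu; 1]
obs = ext @ true_W.T + 0.01 * rng.standard_normal((n_states, dim))

# with identity covariances, the ML estimate of W reduces to least squares
W, *_ = np.linalg.lstsq(ext, obs, rcond=None)
adapted = ext @ W                                      # adapted means W [mu; 1]
print(np.abs(adapted - obs).max())                     # small residual
```

Full MLLR weights each state's statistics by its occupancy and covariance, and regression-class trees allow multiple transforms; the least-squares core shown here is the identity-covariance special case.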

  6. Neural bases of accented speech perception

    Directory of Open Access Journals (Sweden)

    Patti eAdank

    2015-10-01

    Full Text Available The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Adank, Evans, Stuart-Smith, & Scott, 2009; Floccia, Goslin, Girard, & Konopczynski, 2006). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regards to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioural aspects of accent processing.

  7. Online crowdsourcing for efficient rating of speech: a validation study.

    Science.gov (United States)

    McAllister Byun, Tara; Halpin, Peter F; Szeredi, Daniel

    2015-01-01

    Blinded listener ratings are essential for valid assessment of interventions for speech disorders, but collecting these ratings can be time-intensive and costly. This study evaluated the validity of speech ratings obtained through online crowdsourcing, a potentially more efficient approach. 100 words from children with /r/ misarticulation were electronically presented for binary rating by 35 phonetically trained listeners and 205 naïve listeners recruited through the Amazon Mechanical Turk (AMT) crowdsourcing platform. Bootstrapping was used to compare different-sized samples of AMT listeners against a "gold standard" (mode across all trained listeners) and an "industry standard" (mode across bootstrapped samples of three trained listeners). There was strong overall agreement between trained and AMT listeners. The "industry standard" level of performance was matched by bootstrapped samples with n = 9 AMT listeners. These results support the hypothesis that valid ratings of speech data can be obtained in an efficient manner through AMT. Researchers in communication disorders could benefit from increased awareness of this method. Readers will be able to (a) discuss advantages and disadvantages of data collection through the crowdsourcing platform Amazon Mechanical Turk (AMT), (b) describe the results of a validity study comparing samples of AMT listeners versus phonetically trained listeners in a speech-rating task.
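The bootstrap comparison described in this record can be mimicked with simulated data: draw panels of n crowd listeners, take the mode (for binary ratings, the majority vote) of each panel, and measure agreement with a gold standard. In the sketch below the 85% per-listener accuracy is an assumed value for illustration, not a figure from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_crowd = 100, 205
truth = rng.integers(0, 2, n_items)                  # hypothetical gold-standard labels
# each naive listener agrees with the gold standard ~85% of the time (assumed rate)
crowd = np.where(rng.random((n_crowd, n_items)) < 0.85, truth, 1 - truth)

def bootstrap_agreement(panel_size, n_boot=500):
    # mean agreement of bootstrapped panel majorities with the gold standard
    total = 0.0
    for _ in range(n_boot):
        idx = rng.choice(n_crowd, size=panel_size, replace=False)
        votes = crowd[idx].sum(axis=0)
        majority = (votes * 2 > panel_size).astype(int)  # odd panel size: no ties
        total += np.mean(majority == truth)
    return total / n_boot

# larger panels should agree with the gold standard more often
print(bootstrap_agreement(3), bootstrap_agreement(9))
```

Aggregating independent ratings suppresses individual error, which is why modest panels of naïve listeners can approach the reliability of a small panel of trained listeners in this kind of design.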

  8. Real time PCR mediated determination of the spontaneous ...

    African Journals Online (AJOL)

    The study evaluates the utility of Real Time PCR (RT-PCR) in quantitative and qualitative analysis of alleles in sorghum populations and the spontaneous occurrence of Sorghum bicolor alleles in wild populations of sorghum. Leaf and seed material from wild sorghum accesions were sampled in Homabay, Siaya and Busia ...

  9. Spontaneous haemoperitoneum in pregnancy and endometriosis: a case series

    NARCIS (Netherlands)

    Lier, M. van; Malik, R.F.; Waesberghe, J. van; Maas, J.W.; Rumpt-van de Geest, D.A. van; Coppus, S.F.P.J.; Berger, J.P.; Rijn, B.B. van; Janssen, P.F.; Boer, M.A. de; Vries, J.I.P. de; Jansen, F.W.; Brosens, I.A.; Lambalk, C.B.; Mijatovic, V.

    2017-01-01

    OBJECTIVE: To report pregnancy outcomes of SHiP (spontaneous haemoperitoneum in pregnancy) and the association with endometriosis. DESIGN: Retrospective case note review. SETTING: Dutch referral hospitals for endometriosis. SAMPLE: Eleven women presenting with 15 events of SHiP. METHODS: In

  10. Spontaneous haemoperitoneum in pregnancy and endometriosis : a case series

    NARCIS (Netherlands)

    Lier, McI; Malik, R F; van Waesberghe, Jhtm; Maas, J W; van Rumpt-van de Geest, D A; Coppus, S F; Berger, J P; van Rijn, B B; Janssen, P F; de Boer, M. A; de Vries, Jip; Jansen, F. W.; Brosens, I A; Lambalk, C B; Mijatovic, V

    OBJECTIVE: To report pregnancy outcomes of SHiP (spontaneous haemoperitoneum in pregnancy) and the association with endometriosis. DESIGN: Retrospective case note review. SETTING: Dutch referral hospitals for endometriosis. SAMPLE: Eleven women presenting with 15 events of SHiP. METHODS: In

  11. Gaze aversion to stuttered speech: a pilot study investigating differential visual attention to stuttered and fluent speech.

    Science.gov (United States)

    Bowers, Andrew L; Crawcour, Stephen C; Saltuklaroglu, Tim; Kalinowski, Joseph

    2010-01-01

    People who stutter are often acutely aware that their speech disruptions, halted communication, and aberrant struggle behaviours evoke reactions in communication partners. Considering that eye gaze behaviours have emotional, cognitive, and pragmatic overtones for communicative interactions and that previous studies have indicated increased physiological arousal in listeners in response to stuttering, it was hypothesized that stuttered speech incurs increased gaze aversion relative to fluent speech. The possible importance in uncovering these visible reactions to stuttering is that they may contribute to the social penalty associated with stuttering. To compare the eye gaze responses of college students while observing and listening to fluent and severely stuttered speech samples produced by the same adult male who stutters. Twelve normally fluent adult college students watched and listened to three 20-second audio-video clips of the face of an adult male stuttering and three 20-second clips of the same male producing fluent speech. Their pupillary movements were recorded with an eye-tracking device and mapped to specific regions of interest (that is, the eyes, the nose and the mouth of the speaker). Participants spent 39% more time fixating on the speaker's eyes while witnessing fluent speech compared with stuttered speech. In contrast, participants averted their direct eye gaze more often and spent 45% more time fixating on the speaker's nose when witnessing stuttered speech compared with fluent speech. These relative time differences occurred as a function of the number of fixations in each area of interest. Thus, participants averted their gaze from the eyes of the speaker more frequently during the stuttered stimuli than the fluent stimuli. This laboratory study provides pilot data suggesting that gaze aversion is a salient response to the breakdown in communication that occurs during stuttering. 
This response may occur as a result of emotional, cognitive, and

  12. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
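The shared core of these indices, as described above (an apparent per-band speech-to-noise ratio, clipped, mapped to [0, 1], and summed with band weights), can be sketched as follows. The band weights and SNR values are illustrative placeholders, not the actual STI or SII tables.

```python
import numpy as np

# Octave-band weights (illustrative values, not the standardized STI/SII tables)
weights = np.array([0.13, 0.14, 0.11, 0.12, 0.19, 0.17, 0.14])
# assumed measured apparent SNR per band, in dB (hypothetical example)
snr_db_bands = np.array([12.0, 9.0, 3.0, 0.0, -6.0, -12.0, -18.0])

def band_index(snr_db, weights):
    snr = np.clip(snr_db, -15.0, 15.0)      # limit apparent SNR to +/-15 dB
    ti = (snr + 15.0) / 30.0                # map each band onto [0, 1]
    return float(np.sum(weights * ti) / np.sum(weights))

print(band_index(snr_db_bands, weights))    # prints about 0.429 for these values
```

An index near 1 corresponds to easily intelligible conditions and near 0 to unintelligible ones; STI additionally derives the apparent SNR from modulation transfer measurements rather than measuring it directly.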

  13. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? A review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread and diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) the visual perception of speech relies on visual pathway representations of speech qua speech; (2) a proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS); (3) given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in the TVSA.

  14. Methods for real-time speech processing on Unix

    Energy Technology Data Exchange (ETDEWEB)

    Romberger, A.

    1982-01-01

    The author discusses computer programming done at the University of California, Berkeley, in support of research work in the area of speech analysis and synthesis. The purpose of this programming is to set up a system for doing real-time speech sampling using the Unix operating system. Two alternative approaches to real time work on Unix are discussed. The first approach is to do the real-time input/output on a secondary (satellite) machine that is not running Unix. The second approach is to do the real-time input/output on the main machine with the aid of special hardware.

  15. Speech characteristics of miners with black lung disease (pneumoconiosis).

    Science.gov (United States)

    Gilbert, H R

    1975-06-01

    Speech samples were obtained from 10 miners with diagnosed black lung disease and 10 nonminers who had never worked in a dusty environment and who had no history of respiratory diseases. Frequency, intensity and durational measures were used as a basis upon which to compare the two groups. Results indicated that four of the six pausal measures, vowel duration, vowel intensity variation and vowel perturbation differentiated the miners from the nonminers. The results indicate that black lung disease may affect not only respiratory physiology associated with speech production but also laryngeal physiology.

  16. Morphosyntax and phonological awareness in children with speech sound disorders.

    Science.gov (United States)

    Mortimer, Jennifer; Rvachew, Susan

    2008-12-01

    The goals of the current study were to examine concurrent and longitudinal relationships of expressive morphosyntax and phonological awareness in a group of children with speech sound disorders. Tests of phonological awareness were administered to 38 children at the end of their prekindergarten and kindergarten years. Speech samples were elicited and analyzed to obtain a set of expressive morphosyntax variables. Finite verb morphology and inflectional suffix use by prekindergarten children were found to predict significant unique variance in change in phonological awareness a year later. These results are consistent with previous research showing finite verb morphology to be a sensitive indicator of language impairment in English.

  17. An illustration of speech articulation impairment in children with cerebral palsy tested by the Goldman-Fristoe method

    Directory of Open Access Journals (Sweden)

    Ade Pungky Rusmarini

    2009-03-01

    Full Text Available Seventy percent of children with cerebral palsy were found to suffer from speech articulation impairment. The purpose of this research was to obtain an illustration of speech articulation impairment in children with cerebral palsy tested by the Goldman-Fristoe method at the SLB-D School for Disabled Children Bandung in 2007. This was a descriptive research study. Sampling was carried out by purposive sampling. The speech articulation impairment test was based on the Goldman-Fristoe method, that is, an articulation test which places the consonant at the beginning, middle, and end of a word, to test speech articulation impairment in children with cerebral palsy. Research results indicated that speech articulation impairment in the bilabial consonants /p/, /b/, and /m/ is an average of 85.51%. Speech articulation impairment in the labiodental consonants /f/ and /v/ is an average of 89.13%. Speech articulation impairment in the alveolar or dental consonants /t/ and /d/ is an average of 80.43%. Speech articulation impairment in the palatal consonant /c/ is an average of 82.60%. Speech articulation impairment in velar consonants /k/ and glottal consonants /h/ is an average of 86.96%. Research results indicated that more than three-fourths of children with cerebral palsy at the SLB-D School for Disabled Children Bandung in 2007 suffered from speech articulation impairment.

  18. Spontaneous Retropharyngeal Emphysema: A Case Report | Chi ...

    African Journals Online (AJOL)

    ... is a rare clinical condition in pediatric otolaryngology. The predominant symptoms are sore throat, odynophagia, dysphagia, and neck pain. Here, we report a case of spontaneous retropharyngeal emphysema. Keywords: Iatrogenic injury, retropharyngeal emphysema, spontaneous retropharyngeal emphysema, trauma ...

  19. La maladie de Grisel : Spontaneous atlantoaxial subluxation

    NARCIS (Netherlands)

    Meek, MF; Robinson, PH; Hermens, RAEC

    Objective: "La maladie de Grisel" (Grisel's syndrome) is a spontaneously occurring atlantoaxial subluxation with torticollis. We present a case of atlantoaxial subluxation occurring in a 20-year period of pharyngoplasty surgery. The occurrence of a "spontaneous" atlantoaxial subluxation after oral

  20. Automated Intelligibility Assessment of Pathological Speech Using Phonological Features

    Directory of Open Access Journals (Sweden)

    Catherine Middag

    2009-01-01

    Full Text Available It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. People have therefore put a lot of effort into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is a growing interest in the application of objective automatic speech recognition technology to automate the intelligibility assessment. Current research is headed towards the design of automated methods which can be shown to produce ratings that correspond well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology that is built on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models that were trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.
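The evaluation metric quoted above (root mean squared error between perceived and computed intelligibility on a 0-100 scale) is straightforward to compute. A minimal sketch, with invented rating values:

```python
# RMSE between perceived and computed intelligibility ratings (0-100 scale).
# The rating values here are invented for illustration only.
import math

def rmse(perceived, computed):
    """Root mean squared error between paired rating lists."""
    assert len(perceived) == len(computed)
    return math.sqrt(sum((p - c) ** 2
                         for p, c in zip(perceived, computed)) / len(perceived))

perceived = [85.0, 62.0, 40.0, 73.0]  # e.g., listener-panel ratings
computed  = [80.0, 70.0, 35.0, 75.0]  # e.g., model predictions
err = rmse(perceived, computed)  # smaller is better; the paper reports ~8
```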

  1. Psychoacoustic cues to emotion in speech prosody and music.

    Science.gov (United States)

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.

  2. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  3. Contextual variability during speech-in-speech recognition.

    Science.gov (United States)

    Brouwer, Susanne; Bradlow, Ann R

    2014-07-01

    This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task in either "pure" background conditions in which all trials had either English or Dutch background babble or in mixed background conditions in which the background language varied across trials (i.e., a mix of English and Dutch or one of these background languages mixed with quiet trials). This design allowed the authors to compare performance on identical trials across pure and mixed conditions. The data reveal that speech-in-speech recognition is sensitive to contextual variation in terms of the target-background language (mis)match depending on the relative ease/difficulty of the test trials in relation to the surrounding trials.

  4. Spontaneous regression of intracranial malignant lymphoma. Case report

    Energy Technology Data Exchange (ETDEWEB)

    Kojo, Nobuto; Tokutomi, Takashi; Eguchi, Gihachirou; Takagi, Shigeyuki; Matsumoto, Tomie; Sasaguri, Yasuyuki; Shigemori, Minoru.

    1988-05-01

    In a 46-year-old female with a 1-month history of gait and speech disturbances, computed tomography (CT) demonstrated mass lesions of slightly high density in the left basal ganglia and left frontal lobe. The lesions were markedly enhanced by contrast medium. The patient received no specific treatment, but her clinical manifestations gradually abated and the lesions decreased in size. Five months after her initial examination, the lesions were absent on CT scans; only a small area of low density remained. Residual clinical symptoms included mild right hemiparesis and aphasia. After 14 months the patient again deteriorated, and a CT scan revealed mass lesions in the right frontal lobe and the pons. However, no enhancement was observed in the previously affected regions. A biopsy revealed malignant lymphoma. Despite treatment with steroids and radiation, the patient's clinical status progressively worsened and she died 27 months after initial presentation. Seven other cases of spontaneous regression of primary malignant lymphoma have been reported. In this case, the mechanism of the spontaneous regression was not clear, but changes in immunologic status may have been involved.

  5. Systematics of spontaneous positron lines

    International Nuclear Information System (INIS)

    Mueller, U.; Reus, T. de; Reinhardt, J.; Mueller, B.; Greiner, W.

    1985-08-01

    Dynamical and spontaneous positron emission are investigated for heavy-ion collisions with long time delay using a semiclassical description. Numerical results and analytical expressions for the characteristic quantities of the resulting spontaneous positron line, i.e., its position, width, and cross section, are compared. The expected behaviour of the line position and cross section, and its visibility against the spectrum of dynamically created positrons, is discussed as a function of the united charge Z_u of projectile and target nucleus in a range of systems from Z_u = 180 up to Z_u = 188. The results are confronted with presently available experimental data, and possible implications for further experiments are worked out. (orig.)

  6. Spontaneous Rotational Inversion in Phycomyces

    KAUST Repository

    Goriely, Alain

    2011-03-01

    The filamentary fungus Phycomyces blakesleeanus undergoes a series of remarkable transitions during aerial growth. During what is known as the stage IV growth phase, the fungus extends while rotating in a counterclockwise manner when viewed from above (stage IVa) and then, while continuing to grow, spontaneously reverses to a clockwise rotation (stage IVb). This phase lasts for 24-48 h and is sometimes followed by yet another reversal (stage IVc) before the overall growth ends. Here, we propose a continuum mechanical model of this entire process using nonlinear, anisotropic elasticity and show how helical anisotropy associated with the cell wall structure can induce spontaneous rotation and, under appropriate circumstances, the observed reversal of rotational handedness. © 2011 American Physical Society.

  7. Spontaneous regression of colon cancer.

    Science.gov (United States)

    Kihara, Kyoichi; Fujita, Shin; Ohshiro, Taihei; Yamamoto, Seiichiro; Sekine, Shigeki

    2015-01-01

    A case of spontaneous regression of transverse colon cancer is reported. A 64-year-old man was diagnosed as having cancer of the transverse colon at a local hospital. Initial and second colonoscopy examinations revealed a typical cancer of the transverse colon, which was diagnosed as moderately differentiated adenocarcinoma. The patient underwent right hemicolectomy 6 weeks after the initial colonoscopy. The resected specimen showed only a scar at the tumor site, and no cancerous tissue was proven histologically. The patient is alive with no evidence of recurrence 1 year after surgery. Although an antitumor immune response is the most likely explanation, the exact nature of the phenomenon was unclear. We describe this rare case and review the literature pertaining to spontaneous regression of colorectal cancer. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Management of intractable spontaneous epistaxis

    Science.gov (United States)

    Rudmik, Luke

    2012-01-01

    Background: Epistaxis is a common otolaryngology emergency and is often controlled with first-line interventions such as cautery, hemostatic agents, or anterior nasal packing. A subset of patients will continue to bleed and require more aggressive therapy. Methods: Intractable spontaneous epistaxis was traditionally managed with posterior nasal packing and prolonged hospital admission. In an effort to reduce patient morbidity and shorten hospital stay, surgical and endovascular techniques have gained popularity. A literature review was conducted. Results: Transnasal endoscopic sphenopalatine artery ligation and arterial embolization provide excellent control rates but the decision to choose one over the other can be challenging. The role of transnasal endoscopic anterior ethmoid artery ligation is unclear but may be considered in certain cases when bleeding localizes to the ethmoid region. Conclusion: This article will focus on the management of intractable spontaneous epistaxis and discuss the role of endoscopic arterial ligation and embolization as it pertains to this challenging clinical scenario. PMID:22391084

  9. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has shown an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement, and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...

  10. Spontaneous baryogenesis in warm inflation

    OpenAIRE

    Brandenberger, Robert H.; Yamaguchi, Masahide

    2003-01-01

    We discuss spontaneous baryogenesis in the warm inflation scenario. In contrast with standard inflation models, radiation always exists in the warm inflation scenario, and the inflaton must be directly coupled to it. Also, the transition to the post-inflationary radiation dominated phase is smooth and the entropy is not significantly increased at the end of the period of inflation. In addition, after the period of warm inflation ends, the inflaton does not oscillate coherently but slowly roll...

  11. Spontaneous Splenic Rupture in Melanoma

    Directory of Open Access Journals (Sweden)

    Hadi Mirfazaelian

    2014-01-01

    Full Text Available Spontaneous rupture of the spleen due to malignant melanoma is a rare situation, with only a few case reports in the literature. This study reports a previously healthy, 30-year-old man who came to the emergency room with a chief complaint of acute abdominal pain. On physical examination, abdominal tenderness and guarding were detected, coincident with hypotension. Ultrasonography revealed mild splenomegaly with moderate free fluid in the abdominopelvic cavity. Considering the acute abdominal pain and hemodynamic instability, he underwent splenectomy, with splenic rupture as the source of bleeding. Histologic examination showed diffuse infiltration by tumor. Immunohistochemical study (positive for S100, HMB45, and vimentin and negative for CK, CD10, CK20, CK7, CD30, LCA, EMA, and chromogranin) confirmed metastatic malignant melanoma. On further questioning, there was a past history of a dark nasal skin lesion which had been removed two years earlier with no pathologic examination. Spontaneous (nontraumatic) rupture of the spleen is an uncommon situation and happens very rarely due to neoplastic metastasis. Metastasis of malignant melanoma is one of the rare causes of spontaneous rupture of the spleen.

  12. Speech and Voice Response to a Levodopa Challenge in Late-Stage Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Margherita Fabbri

    2017-08-01

    Full Text Available Background: Parkinson’s disease (PD) patients are affected by hypokinetic dysarthria, characterized by hypophonia and dysprosody, which worsens with disease progression. Levodopa’s (l-dopa) effect on quality of speech is inconclusive; no data are currently available for late-stage PD (LSPD). Objective: To assess the modifications of speech and voice in LSPD following an acute l-dopa challenge. Method: LSPD patients [Schwab and England score <50/Hoehn and Yahr stage >3 (MED ON)] performed several vocal tasks before and after an acute l-dopa challenge. The following was assessed: respiratory support for speech, voice quality, stability and variability, speech rate, and motor performance (MDS-UPDRS-III). All voice samples were recorded and analyzed by a speech and language therapist blinded to patients’ therapeutic condition using Praat 5.1 software. Results: 24/27 (14 men) LSPD patients succeeded in performing voice tasks. Median age and disease duration of patients were 79 [IQR: 71.5–81.7] and 14.5 [IQR: 11–15.7] years, respectively. In MED OFF, respiratory breath support and pitch break time of LSPD patients were worse than the normative values of non-parkinsonian speakers. A correlation was found between disease duration and voice quality (R = 0.51; p = 0.013) and speech rate (R = −0.55; p = 0.008). l-Dopa significantly improved the MDS-UPDRS-III score (20%), with no effect on speech as assessed by clinical rating scales and automated analysis. Conclusion: Speech is severely affected in LSPD. Although l-dopa had some effect on motor performance, including axial signs, speech and voice did not improve. The applicability and efficacy of non-pharmacological treatment for speech impairment should be considered for speech disorder management in PD.

  13. Speech and Voice Response to a Levodopa Challenge in Late-Stage Parkinson's Disease.

    Science.gov (United States)

    Fabbri, Margherita; Guimarães, Isabel; Cardoso, Rita; Coelho, Miguel; Guedes, Leonor Correia; Rosa, Mario M; Godinho, Catarina; Abreu, Daisy; Gonçalves, Nilza; Antonini, Angelo; Ferreira, Joaquim J

    2017-01-01

    Parkinson's disease (PD) patients are affected by hypokinetic dysarthria, characterized by hypophonia and dysprosody, which worsens with disease progression. Levodopa's (l-dopa) effect on quality of speech is inconclusive; no data are currently available for late-stage PD (LSPD). To assess the modifications of speech and voice in LSPD following an acute l-dopa challenge, LSPD patients [Schwab and England score <50/Hoehn and Yahr stage >3 (MED ON)] performed several vocal tasks before and after an acute l-dopa challenge. The following was assessed: respiratory support for speech, voice quality, stability and variability, speech rate, and motor performance (MDS-UPDRS-III). All voice samples were recorded and analyzed by a speech and language therapist blinded to patients' therapeutic condition using Praat 5.1 software. 24/27 (14 men) LSPD patients succeeded in performing voice tasks. Median age and disease duration of patients were 79 [IQR: 71.5-81.7] and 14.5 [IQR: 11-15.7] years, respectively. In MED OFF, respiratory breath support and pitch break time of LSPD patients were worse than the normative values of non-parkinsonian speakers. A correlation was found between disease duration and voice quality (R = 0.51; p = 0.013) and speech rate (R = -0.55; p = 0.008). l-Dopa significantly improved the MDS-UPDRS-III score (20%), with no effect on speech as assessed by clinical rating scales and automated analysis. Speech is severely affected in LSPD. Although l-dopa had some effect on motor performance, including axial signs, speech and voice did not improve. The applicability and efficacy of non-pharmacological treatment for speech impairment should be considered for speech disorder management in PD.

  14. The Nationwide Speech Project: A new corpus of American English dialects

    Science.gov (United States)

    Clopper, Cynthia G.; Pisoni, David B.

    2011-01-01

    Perceptual and acoustic research on dialect variation in the United States requires an appropriate corpus of spoken language materials. Existing speech corpora that include dialect variation are limited by poor recording quality, small numbers of talkers, and/or small samples of speech from each talker. The Nationwide Speech Project corpus was designed to contain a large amount of speech produced by male and female talkers representing the primary regional varieties of American English. Five male and five female talkers from each of six dialect regions in the United States were recorded reading words, sentences, passages, and in interviews with an experimenter, using high quality digital recording equipment in a sound-attenuated booth. The resulting corpus contains nearly an hour of speech from each of the 60 talkers that can be used in future research on the perception and production of dialect variation. PMID:21423815

  15. Behavioral and neurobiological correlates of childhood apraxia of speech in Italian children.

    Science.gov (United States)

    Chilosi, Anna Maria; Lorenzini, Irene; Fiori, Simona; Graziosi, Valentina; Rossi, Giuseppe; Pasquariello, Rosa; Cipriani, Paola; Cioni, Giovanni

    2015-11-01

    Childhood apraxia of speech (CAS) is a neurogenic speech sound disorder whose etiology and neurobiological correlates are still unclear. In the present study, 32 Italian children with idiopathic CAS underwent a comprehensive speech and language, genetic, and neuroradiological investigation aimed at gathering information on the possible behavioral and neurobiological markers of the disorder. The results revealed four main aggregations of behavioral symptoms that indicate a multi-deficit disorder involving both motor-speech and language competence. Six children presented with chromosomal alterations. The familial aggregation rate for speech and language difficulties and the male-to-female ratio were both very high in the whole sample, supporting the hypothesis that genetic factors make a substantial contribution to the risk of CAS. As expected in accordance with the diagnosis of idiopathic CAS, conventional MRI did not reveal macrostructural pathogenic neuroanatomical abnormalities, suggesting that CAS may be due to brain microstructural alterations. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  17. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  18. Speech Recognition: Its Place in Business Education.

    Science.gov (United States)

    Szul, Linda F.; Bouder, Michele

    2003-01-01

    Suggests uses of speech recognition devices in the classroom for students with disabilities. Compares speech recognition software packages and provides guidelines for selection and teaching. (Contains 14 references.) (SK)

  19. Speech input interfaces for anaesthesia records

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Andersen, Henning Boje

    2009-01-01

    Speech recognition as a medical transcript tool is now common in hospitals and is steadily increasing...

  20. Modeling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Dau, Torsten

    2012-01-01

    by the normal as well as impaired auditory system. Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech. Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting the intelligibility of reverberant speech as well as noisy speech processed by spectral subtraction. However, the sEPSM cannot account for speech subjected to phase jitter, a condition in which the spectral structure of speech is destroyed, while the broadband temporal envelope is kept largely intact. In contrast...
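The SNRenv idea described here can be sketched in a much-simplified form: compare the fluctuating (AC) power of the noisy-speech envelope with that of the noise-alone envelope, in dB. This is an illustrative toy only; the actual sEPSM applies an auditory front end and a modulation filterbank, whereas here the "envelope" is just the rectified signal smoothed with a short moving average, and all signals and function names are invented for this sketch.

```python
# Toy SNRenv sketch (NOT the sEPSM): envelope-domain SNR on synthetic signals.
import math

def envelope(x, win=16):
    """Crude amplitude envelope: rectify, then moving-average smooth."""
    rect = [abs(v) for v in x]
    return [sum(rect[max(0, i - win):i + 1]) / (i - max(0, i - win) + 1)
            for i in range(len(rect))]

def ac_power(env):
    """Power of the envelope fluctuations around their mean."""
    mean = sum(env) / len(env)
    return sum((e - mean) ** 2 for e in env) / len(env)

def snr_env_db(noisy_speech, noise):
    """Envelope-domain SNR: speech-borne envelope power over noise envelope power."""
    p_noise = ac_power(envelope(noise))
    p_total = ac_power(envelope(noisy_speech))
    p_speech = max(p_total - p_noise, 1e-12)  # floor keeps the log defined
    return 10.0 * math.log10(p_speech / p_noise)

# Synthetic demo: a 4-Hz amplitude-modulated 100-Hz carrier ("speech-like")
# versus a nearly unmodulated carrier ("noise"); deeper modulation should
# yield a higher envelope-domain SNR.
def am_tone(depth, n=2000, fs=1000.0):
    return [(1.0 + depth * math.sin(2.0 * math.pi * 4.0 * t / fs))
            * math.sin(2.0 * math.pi * 100.0 * t / fs) for t in range(n)]

noise = am_tone(0.02)
deep = snr_env_db(am_tone(0.8), noise)
shallow = snr_env_db(am_tone(0.2), noise)
```

The phase-jitter limitation mentioned in the abstract follows directly from this construction: a metric built only on the broadband envelope cannot register distortions that scramble spectral structure while leaving the envelope intact.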

  1. The Role of Visual Speech Information in Supporting Perceptual Learning of Degraded Speech

    Science.gov (United States)

    Wayne, Rachel V.; Johnsrude, Ingrid S.

    2012-01-01

    Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a…

  2. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  3. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  4. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  5. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  6. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  7. Inner Speech's Relationship with Overt Speech in Poststroke Aphasia

    Science.gov (United States)

    Stark, Brielle C.; Geva, Sharon; Warburton, Elizabeth A.

    2017-01-01

    Purpose: Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech…

  8. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  9. THE ONTOGENESIS OF SPEECH DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. E. Braudo

    2017-01-01

    The purpose of this article is to acquaint specialists working with children who have developmental disorders with age-related norms for speech development. Many well-known linguists and psychologists have studied speech ontogenesis (logogenesis). Speech is a higher mental function which integrates many functional systems. Speech development in infants during the first months after birth is ensured by innate hearing and the emerging ability to fix the gaze on the face of an adult. Innate emotional reactions also develop during this period, turning into nonverbal forms of communication. At about 6 months a baby starts to pronounce some syllables; at 7–9 months the baby repeats various sound combinations pronounced by adults. At 10–11 months a baby begins to react to words addressed to him or her. The first words usually appear at the age of 1 year; this is the start of the stage of active speech development. At this time it is acceptable if a child confuses, rearranges, distorts, or omits sounds. By the age of 1.5 years a child begins to understand abstract explanations by adults. Significant vocabulary enlargement occurs between 2 and 3 years; the grammatical structures of the language are formed during this period (a child starts to use phrases and sentences). Preschool age (3–7 y. o.) is characterized by incorrect but steadily improving pronunciation of sounds and phonemic perception. The vocabulary increases; abstract speech and retelling are formed. Children over 7 y. o. continue to improve grammar, writing, and reading skills. The described stages may not have strict age boundaries, since they depend not only on the environment but also on the child's mental constitution, heredity, and character.

  10. Hate speech, report 1. Research on the nature and extent of hate speech

    OpenAIRE

    Nadim, Marjan; Fladmoe, Audun

    2016-01-01

    The purpose of this report is to gather research-based knowledge concerning: • the extent of online hate speech • which groups in society are particularly subjected to online hate speech • who produces hate speech, and what motivates them Hate speech is commonly understood as any speech that is persecutory, degrading or discriminatory on grounds of the recipient’s minority group identity. To be defined as hate speech, the speech must be conveyed publicly or in the presence of others and be di...

  11. A Chimpanzee Recognizes Synthetic Speech With Significantly Reduced Acoustic Cues to Phonetic Content

    Science.gov (United States)

    Heimbauer, Lisa A.; Beran, Michael J.; Owren, Michael J.

    2011-01-01

    Summary A long-standing debate concerns whether humans are specialized for speech perception [1–7], which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content [2–4,7]. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words [8,9], asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuo-graphic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users [10]. Experiment 2 tested “impossibly unspeechlike” [3] sine-wave (SW) synthesis, which reduces speech to just three moving tones [11]. Although receiving only intermittent and non-contingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate, but improved in Experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human [12–14]. PMID:21723125

  12. A chimpanzee recognizes synthetic speech with significantly reduced acoustic cues to phonetic content.

    Science.gov (United States)

    Heimbauer, Lisa A; Beran, Michael J; Owren, Michael J

    2011-07-26

    A long-standing debate concerns whether humans are specialized for speech perception, which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuographic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users. Experiment 2 tested "impossibly unspeechlike" sine-wave (SW) synthesis, which reduces speech to just three moving tones. Although receiving only intermittent and noncontingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate but improved in experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human. Copyright © 2011 Elsevier Ltd. All rights reserved.
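
    The noise-vocoded (NV) synthesis used in these experiments divides speech into frequency bands, extracts each band's amplitude envelope, and uses the envelopes to modulate band-limited noise, removing spectral fine structure. The sketch below is a generic NumPy illustration of that idea (the function name, band count, and the crude rectify-and-smooth envelope follower are illustrative choices, not the procedure used in the study):

    ```python
    import numpy as np

    def noise_vocode(signal, fs, n_bands=4, fmin=100.0, fmax=4000.0, seed=0):
        """Replace the fine structure in each band with noise, keeping envelopes."""
        rng = np.random.default_rng(seed)
        spec = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        # Log-spaced band edges, as is common in noise-vocoding studies
        edges = np.geomspace(fmin, fmax, n_bands + 1)
        noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            band_mask = (freqs >= lo) & (freqs < hi)
            band = np.fft.irfft(spec * band_mask, n=len(signal))
            env = np.abs(band)                      # crude envelope: rectification
            win = max(1, int(0.01 * fs))            # ~10 ms moving-average smoothing
            env = np.convolve(env, np.ones(win) / win, mode="same")
            nband = np.fft.irfft(noise_spec * band_mask, n=len(signal))
            out += env * nband                      # envelope-modulated band noise
        return out
    ```

    Applied to any mono signal array sampled at `fs`, this yields a signal that keeps the original band envelopes but has noise fine structure; intelligibility for human listeners typically rises as `n_bands` increases.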

  13. Speech therapy for children with dysarthria acquired before three years of age.

    Science.gov (United States)

    Pennington, Lindsay; Parker, Naomi K; Kelly, Helen; Miller, Nick

    2016-07-18

    …included a reliability check in which a second review author independently checked a random sample comprising 15% of all identified reports. We planned that two review authors would independently assess the quality and extract data from eligible studies. No randomised controlled trials or group studies were identified. This review found no evidence from randomised trials of the effectiveness of speech and language therapy interventions to improve the speech of children with early acquired dysarthria. Rigorous, fully powered randomised controlled trials are needed to investigate if the positive changes in children's speech observed in phase I and phase II studies are generalisable to the population of children with early acquired dysarthria served by speech and language therapy services. Research should examine change in children's speech production and intelligibility. It must also investigate children's participation in social and educational activities, and their quality of life, as well as the cost and acceptability of interventions.

  14. Discriminative learning for speech recognition

    CERN Document Server

    He, Xiaodong

    2008-01-01

    In this book, we introduce the background and mainstream methods of probabilistic modeling and discriminative parameter optimization for speech recognition. The specific models treated in depth include the widely used exponential-family distributions and the hidden Markov model. A detailed study is presented on unifying the common objective functions for discriminative learning in speech recognition, namely maximum mutual information (MMI), minimum classification error, and minimum phone/word error. The unification is presented, with rigorous mathematical analysis, in a common rational-function form…
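
    The MMI criterion mentioned above maximizes the posterior probability of the correct transcription given the acoustics. In one common notation (the symbols here are generic, with an acoustic scaling factor included, and are not taken from the book's own formulation) it can be written as:

    ```latex
    F_{\mathrm{MMI}}(\lambda)
      = \sum_{r} \log
        \frac{p_{\lambda}(O_r \mid M_{w_r})^{\kappa}\, P(w_r)}
             {\sum_{w} p_{\lambda}(O_r \mid M_{w})^{\kappa}\, P(w)}
    ```

    Maximizing this raises the likelihood of each reference word sequence $w_r$ for its utterance $O_r$ relative to all competing sequences $w$, which is what distinguishes discriminative training from plain maximum-likelihood estimation.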

  15. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Dereverberation is required in various speech processing applications, such as hands-free telephony and voice-controlled systems, especially when the signals were recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  16. Speech-To-Text Conversion STT System Using Hidden Markov Model HMM

    Directory of Open Access Journals (Sweden)

    Su Myat Mon

    2015-06-01

    Speech is one of the easiest ways to communicate with each other. Speech processing is widely used in many applications such as security devices, household appliances, cellular phones, ATM machines, and computers. Human-computer interfaces have been developed so that people with disabilities can communicate and interact conveniently. Speech-to-Text Conversion (STT) systems have many benefits for deaf or mute people and find applications in our daily lives. Accordingly, the aim of this system is to convert input speech signals into text output for deaf or mute students in educational settings. This paper presents an approach to extract features from the speech signals of isolated spoken words using Mel Frequency Cepstral Coefficients (MFCC), and the Hidden Markov Model (HMM) method is applied to train and test the audio files and obtain the recognized spoken word. The speech database is created using MATLAB. The original speech signals are preprocessed, and feature vectors are extracted from the speech samples to serve as the observation sequences of the HMM recognizer. The feature vectors are analyzed in the HMM depending on the number of states.
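
    The MFCC front end described above reduces to a few steps: framing and windowing, a power spectrum, triangular mel-scale filters, a log, and a DCT. The NumPy sketch below is a generic illustration of those steps (frame length, hop, and filter counts are common defaults, not the paper's settings):

    ```python
    import numpy as np

    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def mfcc(signal, fs, n_fft=512, hop=160, n_filters=26, n_ceps=13):
        """Frame the signal, take log mel filter-bank energies, then a DCT."""
        # 1. Framing with a Hamming window
        frames = np.array([signal[s:s + n_fft] * np.hamming(n_fft)
                           for s in range(0, len(signal) - n_fft + 1, hop)])
        # 2. Power spectrum of each frame
        power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
        # 3. Triangular filters spaced evenly on the mel scale
        mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
        fbank = np.zeros((n_filters, n_fft // 2 + 1))
        for i in range(1, n_filters + 1):
            l, c, r = bins[i - 1], bins[i], bins[i + 1]
            for k in range(l, c):
                fbank[i - 1, k] = (k - l) / max(c - l, 1)
            for k in range(c, r):
                fbank[i - 1, k] = (r - k) / max(r - c, 1)
        feats = np.log(power @ fbank.T + 1e-10)
        # 4. DCT-II to decorrelate; keep the first n_ceps coefficients
        n = np.arange(n_filters)
        dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
        return feats @ dct.T
    ```

    Each row of the returned array is one feature vector; a sequence of such vectors per word forms the observation sequence that an HMM recognizer is trained and tested on.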

  17. Voice, speech, and laryngeal features of primary Sjögren's syndrome.

    Science.gov (United States)

    Heller, Amanda; Tanner, Kristine; Roy, Nelson; Nissen, Shawn L; Merrill, Ray M; Miller, Karla L; Houtz, Daniel R; Ellerston, Julia; Kendall, Katherine

    2014-11-01

    This study examined voice, speech, and laryngeal characteristics in primary Sjögren's syndrome (pSS). Eleven patients (10 female, 1 male; mean [SD] age = 57 [14] years) from The University of Utah Division of Rheumatology provided connected speech and sustained vowel samples. Analyses included the Multi-Dimensional Voice Profile, the Analysis of Dysphonia in Speech and Voice, and dysphonia severity, speech clarity, and videolaryngostroboscopy ratings. Shimmer, amplitude perturbation quotient, and average fundamental frequency differed significantly from normative values (P < .05), as did scores for connected speech (mean [SD] = 20.26 [8.36]) and sustained vowels (mean [SD] = 16.91 [11.08]). Ratings of dysphonia severity and speech clarity using 10-cm visual analog scales suggested mild-to-moderate dysphonia in connected speech (mean [SD] = 2.11 [1.72]) and sustained vowels (mean [SD] = 3.13 [2.20]) and mildly reduced speech clarity (mean [SD] = 1.46 [1.36]). Videolaryngostroboscopic ratings indicated mild-to-moderate dryness and mild reductions in overall laryngeal function. Voice Handicap Index scores indicated mild-to-moderate voice symptoms (mean [SD] = 43 [23]). Individuals with pSS may experience dysphonia and articulatory imprecision, typically in the mild-to-moderate range. These findings have implications for diagnostic and referral practices in pSS. © The Author(s) 2014.

  18. Data-driven subclassification of speech sound disorders in preschool children.

    Science.gov (United States)

    Vick, Jennell C; Campbell, Thomas F; Shriberg, Lawrence D; Green, Jordan R; Truemper, Klaus; Rusiewicz, Heather Leavy; Moore, Christopher A

    2014-12-01

    The purpose of the study was to determine whether distinct subgroups of preschool children with speech sound disorders (SSD) could be identified using a subgroup discovery algorithm (SUBgroup discovery via Alternate Random Processes, or SUBARP). Of specific interest was finding evidence of a subgroup of SSD exhibiting performance consistent with atypical speech motor control. Ninety-seven preschool children with SSD completed speech and nonspeech tasks. Fifty-three kinematic, acoustic, and behavioral measures from these tasks were input to SUBARP. Two distinct subgroups were identified from the larger sample. The 1st subgroup (76%; population prevalence estimate = 67.8%-84.8%) did not have characteristics that would suggest atypical speech motor control. The 2nd subgroup (10.3%; population prevalence estimate = 4.3%-16.5%) exhibited significantly higher variability in measures of articulatory kinematics and poor ability to imitate iambic lexical stress, suggesting atypical speech motor control. Both subgroups were consistent with classes of SSD in the Speech Disorders Classification System (SDCS; Shriberg et al., 2010a). Characteristics of children in the larger subgroup were consistent with the proportionally large SDCS class termed speech delay; characteristics of children in the smaller subgroup were consistent with the SDCS subtype termed motor speech disorder-not otherwise specified. The authors identified candidate measures to identify children in each of these groups.

  19. The Relationship Between Apraxia of Speech and Oral Apraxia: Association or Dissociation?

    Science.gov (United States)

    Whiteside, Sandra P; Dyson, Lucy; Cowell, Patricia E; Varley, Rosemary A

    2015-11-01

    Acquired apraxia of speech (AOS) is a motor speech disorder that affects the implementation of articulatory gestures and the fluency and intelligibility of speech. Oral apraxia (OA) is an impairment of nonspeech volitional movement. Although many speakers with AOS also display difficulties with volitional nonspeech oral movements, the relationship between the 2 conditions is unclear. This study explored the relationship between speech and volitional nonspeech oral movement impairment in a sample of 50 participants with AOS. We examined levels of association and dissociation between speech and OA using a battery of nonspeech oromotor, speech, and auditory/aphasia tasks. There was evidence of a moderate positive association between the 2 impairments across participants. However, individual profiles revealed patterns of dissociation between the 2 in a few cases, with evidence of double dissociation of speech and oral apraxic impairment. We discuss the implications of these relationships for models of oral motor and speech control. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Individual Differences in Frequency of Inner Speech: Differential Relations with Cognitive and Non-cognitive Factors

    Directory of Open Access Journals (Sweden)

    Xuezhu Ren

    2016-11-01

    Inner speech plays a crucial role in behavioral regulation, and the use of inner speech is very common among adults. However, less is known about individual differences in the frequency of inner speech use and about the underlying processes that may explain why people differ in how often they use it. This study was conducted to investigate how individual differences in the frequency of inner speech use are related to cognitive and non-cognitive factors. Four functions of inner speech, including self-criticism, self-reinforcement, self-management, and social assessment, measured by an adapted version of Brinthaupt's Self-Talk Scale, were examined. The cognitive factors considered included executive functioning and complex reasoning, and the non-cognitive factors consisted of trait anxiety and impulsivity. Data were collected from a large Chinese sample. Results revealed that anxiety and impulsivity were mainly related to the frequency of the affective functions of inner speech (self-criticism and self-reinforcement), whereas executive functions and complex reasoning were mainly related to the frequency of the cognitive, self-regulatory function of inner speech (self-management).

  1. Comparison of speech performance in labial and lingual orthodontic patients: A prospective study

    Science.gov (United States)

    Rai, Ambesh Kumar; Rozario, Joe E.; Ganeshkar, Sanjay V.

    2014-01-01

    Background: The intensity and duration of speech difficulty inherently associated with lingual therapy is a significant issue of concern in orthodontics. This study was designed to evaluate and compare the duration of changes in speech between labial and lingual orthodontics. Materials and Methods: A prospective longitudinal clinical study was designed to assess the speech of 24 patients undergoing labial or lingual orthodontic treatment. An objective spectrographic evaluation of the /s/ sound was done using the software PRAAT version 5.0.47, a semiobjective auditive evaluation of articulation was done by four speech pathologists, and a subjective assessment of speech was done by four laypersons. The tests were performed before (T1), within 24 h (T2), after 1 week (T3), and after 1 month (T4) of the start of therapy. The Mann-Whitney U-test for independent samples was used to assess the significance of differences between the labial and lingual appliances, with P < 0.05 considered significant. Both appliance systems caused a comparable speech difficulty immediately after bonding (T2). Although speech recovered within a week in the labial group (T3), the lingual group continued to experience discomfort even after a month (T4). PMID:25540661

  2. Speech in 10-Year-Olds Born With Cleft Lip and Palate: What Do Peers Say?

    Science.gov (United States)

    Nyberg, Jill; Havstam, Christina

    2016-09-01

    The aim of this study was to explore how 10-year-olds describe speech and communicative participation in children born with unilateral cleft lip and palate in their own words, whether they perceive signs of velopharyngeal insufficiency (VPI) and articulation errors of different degrees, and if so, which terminology they use. Methods/Participants: Nineteen 10-year-olds participated in three focus group interviews where they listened to 10 to 12 speech samples with different types of cleft speech characteristics assessed by speech and language pathologists (SLPs) and described what they heard. The interviews were transcribed and analyzed with qualitative content analysis. The analysis resulted in three interlinked categories encompassing different aspects of speech, personality, and social implications: descriptions of speech, thoughts on causes and consequences, and emotional reactions and associations. Each category contains four subcategories exemplified with quotes from the children's statements. More pronounced signs of VPI were perceived but referred to in terms relevant to 10-year-olds. Articulatory difficulties, even minor ones, were noted. Peers reflected on the risk of teasing and bullying and on how children with impaired speech might experience their situation. The SLPs and peers did not agree on minor signs of VPI, but they were unanimous in their analysis of clinically normal and more severely impaired speech. Articulatory impairments may be more important to treat than minor signs of VPI based on what peers say.

  3. Role of auditory feedback in speech produced by cochlear implanted adults and children

    Science.gov (United States)

    Bharadwaj, Sneha V.; Tobey, Emily A.; Assmann, Peter F.; Katz, William F.

    2002-05-01

    A prominent theory of speech production proposes that speech segments are largely controlled by reference to an internal model, with minimal reliance on auditory feedback. This theory also maintains that suprasegmental aspects of speech are directly regulated by auditory feedback. Accordingly, if a talker is briefly deprived of auditory feedback, speech segments should not be affected, but suprasegmental properties should show significant change. To test this prediction, comparisons were made between speech samples obtained from cochlear implant users who repeated words under two conditions (1) implant device turned ON, and (2) implant switched OFF immediately before the repetition of each word. To determine whether producing unfamiliar speech requires greater reliance on auditory feedback than producing familiar speech, English and French words were elicited from English-speaking subjects. Subjects were congenitally deaf children (n=4) and adventitiously deafened adults (n=4). Vowel fundamental frequency and formant frequencies, vowel and syllable durations, and fricative spectral moments were analyzed. Preliminary data only partially confirm the predictions, in that both segmental and suprasegmental aspects of speech were significantly modified in the absence of auditory feedback. Modifications were greater for French compared to English words, suggesting greater reliance on auditory feedback for unfamiliar words. [Work supported by NIDCD.]
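
    Fricative spectral moments of the kind analyzed above are computed by treating a frame's power spectrum as a probability distribution over frequency and taking its first four moments. The function below is a generic NumPy sketch of that computation, not the authors' analysis script:

    ```python
    import numpy as np

    def spectral_moments(frame, fs):
        """First four spectral moments of one windowed frame."""
        spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
        freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
        p = spec / spec.sum()                      # power spectrum as a distribution
        mean = (freqs * p).sum()                   # spectral mean (center of gravity)
        var = (((freqs - mean) ** 2) * p).sum()
        sd = np.sqrt(var)                          # spectral standard deviation
        skew = (((freqs - mean) ** 3) * p).sum() / sd ** 3
        kurt = (((freqs - mean) ** 4) * p).sum() / sd ** 4 - 3.0   # excess kurtosis
        return mean, sd, skew, kurt
    ```

    In fricative studies the spectral mean (center of gravity) helps separate sibilants such as /s/ from /ʃ/, while skewness and kurtosis describe the tilt and peakedness of the noise spectrum.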

  4. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  5. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    International Nuclear Information System (INIS)

    Holzrichter, J.F.; Ng, L.C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs

  6. Glioblastoma Multiforme Presenting as Spontaneous Intracerebral Hemorrhage

    Directory of Open Access Journals (Sweden)

    Cagatay Ozdol

    2014-06-01

    Brain tumors with concomitant intracerebral hemorrhage are rarely encountered. Hemorrhage as the initial presentation of a brain tumour may pose diagnostic problems, especially if the tumour is small or the hemorrhage is abundant. We present a 47-year-old man who was admitted to the emergency department with sudden-onset headache, blurred vision in the right eye, and gait disturbance. A non-contrast cranial computerized tomography scan performed immediately after his admission revealed a well-circumscribed right occipitoparietal haematoma with intense peripheral edema causing compression of the ipsilateral ventricles. At the sixth hour after admission the patient's neurological status deteriorated, and he subsequently underwent emergent craniotomy and microsurgical evacuation of the haematoma. The histopathological examination of the mass was consistent with a glioblastoma multiforme. A neoplasm may be hidden behind any case of spontaneous intracerebral hemorrhage. Histological sampling and investigation is mandatory in the presence of preoperative radiological features suggesting a neoplasm.

  7. Development of a speech-based dialogue system for report dictation and machine control in the endoscopic laboratory.

    Science.gov (United States)

    Molnar, B; Gergely, J; Toth, G; Pronai, L; Zagoni, T; Papik, K; Tulassay, Z

    2000-01-01

    Reporting and machine control based on speech technology can enhance work efficiency in the gastrointestinal endoscopy laboratory. The status and activation of endoscopy laboratory equipment were described as a multivariate parameter and function system. Speech recognition, text evaluation and action definition engines were installed. Special programs were developed for the grammatical analysis of command sentences, and a rule-based expert system for the definition of machine answers. A speech backup engine provides feedback to the user. Techniques were applied based on the "Hidden Markov" model of discrete word, user-independent speech recognition and on phoneme-based speech synthesis. Speech samples were collected from three male low-tone investigators. The dictation module and machine control modules were incorporated in a personal computer (PC) simulation program. Altogether 100 unidentified patient records were analyzed. The sentences were grouped according to keywords, which indicate the main topics of a gastrointestinal endoscopy report. They were: "endoscope", "esophagus", "cardia", "fundus", "corpus", "antrum", "pylorus", "bulbus", and "postbulbar section", in addition to the major pathological findings: "erosion", "ulceration", and "malignancy". "Biopsy" and "diagnosis" were also included. We implemented wireless speech communication control commands for equipment including an endoscopy unit, video, monitor, printer, and PC. The recognition rate was 95%. Speech technology may soon become an integrated part of our daily routine in the endoscopy laboratory. A central speech and laboratory computer could be the most efficient alternative to having separate speech recognition units in all items of equipment.

  8. Speech Algorithm Optimization at 16 KBPS.

    Science.gov (United States)

    1980-09-30

    9. M. D. Paez and T. H. Glisson, "Minimum Mean Squared-Error Quantization in Speech PCM and DPCM Systems," IEEE Trans. Communications, Vol. COM-20… IEEE Trans. Acoustics, Speech and Signal Processing, Vol. ASSP-27, June 1979. 13. N. S. Jayant, "Digital Coding of Speech Waveforms: PCM, DPCM, and DM…

  9. Speech Segmentation Using Bayesian Autoregressive Changepoint Detector

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1998-12-01

    This paper is devoted to the study of the Bayesian autoregressive changepoint detector (BCD) and its use for speech segmentation. Results of applying the detector to autoregressive signals as well as to real speech are given. The basic properties of BCD are described and discussed. A novel two-step algorithm consisting of cepstral analysis and BCD for automatic speech segmentation is suggested.
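
    The idea behind an autoregressive changepoint detector can be illustrated with a simplified, non-Bayesian stand-in: fit an AR model to the whole segment and to the two halves of every candidate split, and score each split by its Gaussian log-likelihood gain. The Bayesian detector in the paper integrates over model parameters instead of using point fits; the sketch below (names and the AR order are illustrative) only conveys the shape of the computation:

    ```python
    import numpy as np

    def ar_rss(x, p=2):
        """Residual sum of squares of a least-squares AR(p) fit."""
        if len(x) <= 2 * p + 1:
            return np.inf, 0
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ coef
        return float(r @ r), len(y)

    def changepoint(x, p=2, margin=20):
        """Pick the split that maximizes the gain in AR(p) log-likelihood."""
        rss0, n0 = ar_rss(x, p)
        best_t, best_gain = None, 0.0
        for t in range(margin, len(x) - margin):
            rss1, n1 = ar_rss(x[:t], p)
            rss2, n2 = ar_rss(x[t:], p)
            # Gaussian log-likelihood gain of the split over the single model
            gain = 0.5 * (n0 * np.log(rss0 / n0)
                          - n1 * np.log(rss1 / n1) - n2 * np.log(rss2 / n2))
            if gain > best_gain:
                best_t, best_gain = t, gain
        return best_t, best_gain
    ```

    On a signal that switches spectral character (e.g. a noisy sinusoid followed by white noise), the gain peaks near the true boundary; for speech segmentation the same scoring would be run over short analysis windows.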

  10. Development of binaural speech transmission index

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Drullman, R.

    2006-01-01

    Although the speech transmission index (STI) is a well-accepted and standardized method for objective prediction of speech intelligibility in a wide range of-environments and applications, it is essentially a monaural model. Advantages of binaural hearing to the intelligibility of speech are

  11. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  12. Regulation of speech in multicultural societies: introduction

    NARCIS (Netherlands)

    Maussen, M.; Grillo, R.

    2014-01-01

    What to do about speech which vilifies or defames members of minorities on the grounds of their ethnic or religious identity or their sexuality? How to respond to such speech, which may directly or indirectly cause harm, while taking into account the principle of free speech, has been much debated.

  13. Speech Synthesis Applied to Language Teaching.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…

  14. Cognitive functions in Childhood Apraxia of Speech

    NARCIS (Netherlands)

    Nijland, L.; Terband, H.; Maassen, B.

    2015-01-01

    Purpose: Childhood Apraxia of Speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional

  15. Cognitive Functions in Childhood Apraxia of Speech

    Science.gov (United States)

    Nijland, Lian; Terband, Hayo; Maassen, Ben

    2015-01-01

    Purpose: Childhood apraxia of speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional problems. Method: Cognitive functions were investigated…

  16. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    Epoch sequence is useful to manipulate prosody in speech synthesis applications. Accurate estimation of epochs helps in characterizing voice quality features. Epoch extraction also helps in speech enhancement and multispeaker separation. In this tutorial article, the importance of epochs for speech analysis is discussed, ...

  17. Speech and Debate as Civic Education

    Science.gov (United States)

    Hogan, J. Michael; Kurr, Jeffrey A.; Johnson, Jeremy D.; Bergmaier, Michael J.

    2016-01-01

    In light of the U.S. Senate's designation of March 15, 2016 as "National Speech and Debate Education Day" (S. Res. 398, 2016), it only seems fitting that "Communication Education" devote a special section to the role of speech and debate in civic education. Speech and debate have been at the heart of the communication…

  18. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread employment of wavelet analysis in different applications of speech processing. The author examines development and research in different applications of speech processing. The book also summarizes the state-of-the-art research on wavelets in speech processing.

  19. Development and Disorders of Speech in Childhood.

    Science.gov (United States)

    Karlin, Isaac W.; And Others

    The growth, development, and abnormalities of speech in childhood are described in this text designed for pediatricians, psychologists, educators, medical students, therapists, pathologists, and parents. The normal development of speech and language is discussed, including theories on the origin of speech in man and factors influencing the normal…

  20. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    The second driving force is the impetus being provided by both government and industry for technologies to help break down domestic and international language barriers, these also being barriers to the expansion of policy and commerce. Speech-to-speech and speech-to-text translation are thus emerging as key ...

  1. Speech-in-Speech Recognition: A Training Study

    Science.gov (United States)

    Van Engen, Kristin J.

    2012-01-01

    This study aims to identify aspects of speech-in-noise recognition that are susceptible to training, focusing on whether listeners can learn to adapt to target talkers ("tune in") and learn to better cope with various maskers ("tune out") after short-term training. Listeners received training on English sentence recognition in…

  2. SPEECH ACT ANALYSIS: HOSNI MUBARAK'S SPEECHES IN PRE ...

    African Journals Online (AJOL)

    enerco

    Agbedo, C. U. Speech Act Analysis of Political Discourse in the Nigerian Print Media. In Awka Journal of Languages & Linguistics Vol. 3, 2008a. Bloomfield, L. Language. London: Allen & Unwin, 1933. Kay, M.W. Merriam-Webster's Collegiate Thesaurus. Massachusetts: Merriam-Webster Inc., 1988. Mbagwu, D.U. ...

  3. Relationship between Speech Intelligibility and Speech Comprehension in Babble Noise

    Science.gov (United States)

    Fontan, Lionel; Tardieu, Julien; Gaillard, Pascal; Woisard, Virginie; Ruiz, Robert

    2015-01-01

    Purpose: The authors investigated the relationship between the intelligibility and comprehension of speech presented in babble noise. Method: Forty participants listened to French imperative sentences (commands for moving objects) in a multitalker babble background for which intensity was experimentally controlled. Participants were instructed to…

  4. Distinct processing of ambiguous speech in people with non-clinical auditory verbal hallucinations.

    Science.gov (United States)

    Alderson-Day, Ben; Lima, César F; Evans, Samuel; Krishnan, Saloni; Shanmugalingam, Pradheep; Fernyhough, Charles; Scott, Sophie K

    2017-09-01

    Auditory verbal hallucinations (hearing voices) are typically associated with psychosis, but a minority of the general population also experience them frequently and without distress. Such 'non-clinical' experiences offer a rare and unique opportunity to study hallucinations apart from confounding clinical factors, thus allowing for the identification of symptom-specific mechanisms. Recent theories propose that hallucinations result from an imbalance of prior expectation and sensory information, but whether such an imbalance also influences auditory-perceptual processes remains unknown. We examine for the first time the cortical processing of ambiguous speech in people without psychosis who regularly hear voices. Twelve non-clinical voice-hearers and 17 matched controls completed a functional magnetic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech), that was either potentially intelligible or unintelligible. Voice-hearers reported recognizing the presence of speech in the stimuli before controls, and before being explicitly informed of its intelligibility. Across both groups, intelligible sine-wave speech engaged a typical left-lateralized speech processing network. Notably, however, voice-hearers showed stronger intelligibility responses than controls in the dorsal anterior cingulate cortex and in the superior frontal gyrus. This suggests an enhanced involvement of attention and sensorimotor processes, selectively when speech was potentially intelligible. Altogether, these behavioural and neural findings indicate that people with hallucinatory experiences show distinct responses to meaningful auditory stimuli. A greater weighting towards prior knowledge and expectation might cause non-veridical auditory sensations in these individuals, but it might also spontaneously facilitate perceptual processing where such knowledge is required. This has implications for the understanding of hallucinations in clinical and non

  5. Relative Contributions of the Dorsal vs. Ventral Speech Streams to Speech Perception are Context Dependent: a lesion study

    Directory of Open Access Journals (Sweden)

    Corianne Rogalsky

    2014-04-01

    Full Text Available The neural basis of speech perception has been debated for over a century. While it is generally agreed that the superior temporal lobes are critical for the perceptual analysis of speech, a major current topic is whether the motor system contributes to speech perception, with several conflicting findings attested. In a dorsal-ventral speech stream framework (Hickok & Poeppel, 2007), this debate is essentially about the roles of the dorsal versus ventral speech processing streams. A major roadblock in characterizing the neuroanatomy of speech perception is task-specific effects. For example, much of the evidence for dorsal stream involvement comes from syllable discrimination type tasks, which have been found to behaviorally doubly dissociate from auditory comprehension tasks (Baker et al., 1981). Discrimination task deficits could be a result of difficulty perceiving the sounds themselves, which is the typical assumption, or it could be a result of failures in temporary maintenance of the sensory traces, or the comparison and/or the decision process. Similar complications arise in perceiving sentences: the extent of inferior frontal (i.e., dorsal stream) activation during listening to sentences increases as a function of increased task demands (Love et al., 2006). Another complication is the stimulus: much evidence for dorsal stream involvement uses speech samples lacking semantic context (CVs, non-words). The present study addresses these issues in a large-scale lesion-symptom mapping study. 158 patients with focal cerebral lesions from the Multi-site Aphasia Research Consortium underwent a structural MRI or CT scan, as well as an extensive psycholinguistic battery. Voxel-based lesion symptom mapping was used to compare the neuroanatomy involved in the following speech perception tasks with varying phonological, semantic, and task loads: (i) two discrimination tasks of syllables (non-words and words, respectively), (ii) two auditory comprehension tasks

  6. Longitudinal decline in speech production in Parkinson's disease spectrum disorders.

    Science.gov (United States)

    Ash, Sharon; Jester, Charles; York, Collin; Kofman, Olga L; Langey, Rachel; Halpin, Amy; Firn, Kim; Dominguez Perez, Sophia; Chahine, Lama; Spindler, Meredith; Dahodwala, Nabila; Irwin, David J; McMillan, Corey; Weintraub, Daniel; Grossman, Murray

    2017-08-01

    We examined narrative speech production longitudinally in non-demented (n=15) and mildly demented (n=8) patients with Parkinson's disease spectrum disorder (PDSD), and we related increasing impairment to structural brain changes in specific language and motor regions. Patients provided semi-structured speech samples, describing a standardized picture at two time points (mean±SD interval=38±24 months). The recorded speech samples were analyzed for fluency, grammar, and informativeness. PDSD patients with dementia exhibited significant decline in their speech, unrelated to changes in overall cognitive or motor functioning. Regression analysis in a subset of patients with MRI scans (n=11) revealed that impaired language performance at Time 2 was associated with reduced gray matter (GM) volume at Time 1 in regions of interest important for language functioning, but not with reduced GM volume in motor brain areas. These results dissociate language and motor systems and highlight the importance of non-motor brain regions for declining language in PDSD. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Sensorimotor speech disorders in Parkinson's disease: Programming and execution deficits

    Directory of Open Access Journals (Sweden)

    Karin Zazo Ortiz

    Full Text Available ABSTRACT Introduction: Dysfunction in the basal ganglia circuits is a determining factor in the physiopathology of the classic signs of Parkinson's disease (PD), and hypokinetic dysarthria is commonly related to PD. Regarding speech disorders associated with PD, the latest four-level framework of speech complicates the traditional view of dysarthria as a motor execution disorder. Based on findings that dysfunctions in the basal ganglia can cause speech disorders, and on the premise that the speech deficits seen in PD are related not to an execution motor disorder alone but also to a disorder at the motor programming level, the main objective of this study was to investigate the presence of sensorimotor disorders of programming (besides the execution disorders previously described) in PD patients. Methods: A cross-sectional study was conducted in a sample of 60 adults matched for gender, age and education: 30 adult patients diagnosed with idiopathic PD (PDG) and 30 healthy adults (CG). All types of articulation errors were reanalyzed to investigate the nature of these errors. Interjections, hesitations and repetitions of words or sentences (during discourse) were considered typical disfluencies; blocking and episodes of palilalia (words or syllables) were analyzed as atypical disfluencies. We analysed features including successive self-initiated trials, phoneme distortions, self-correction, repetition of sounds and syllables, prolonged movement transitions, and additions or omissions of sounds and syllables, in order to identify programming and/or execution failures. Orofacial agility was also investigated. Results: The PDG had worse performance on all sensorimotor speech tasks. All PD patients had hypokinetic dysarthria. Conclusion: The clinical characteristics found suggest both execution and programming sensorimotor speech disorders in PD patients.

  8. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean

    2018-03-14

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Effect of subthalamic stimulation on voice and speech in Parkinson's disease: for the better or worse?

    Directory of Open Access Journals (Sweden)

    Sabine Skodda

    2014-01-01

    Full Text Available Background: Deep brain stimulation of the subthalamic nucleus, although highly effective for the treatment of motor impairment in Parkinson's disease, can induce speech deterioration in a subgroup of patients. The aim of the current study was to survey (1) whether there are distinctive stimulation effects on the different parameters of voice and speech and (2) whether there is a special pattern of preexisting speech abnormalities indicating a risk for further worsening under stimulation. Methods: N = 38 patients with Parkinson's disease had to perform a speech test without medication with stimulation ON and OFF. Speech samples were analysed (1) according to a four-dimensional perceptual speech score and (2) by acoustic analysis to obtain quantifiable measures of distinctive speech parameters. Results: Quality of voice was ameliorated with stimulation ON, and there were trends towards increased loudness and better pitch variability. N = 8 patients featured a deterioration of speech with stimulation ON, caused by worsening of articulation and/or fluency. These patients had more severe overall speech impairment, with characteristic features of articulatory slurring and articulatory acceleration already under the StimOFF condition. Conclusion: The influence of subthalamic stimulation on Parkinsonian speech differs considerably between individual patients; however, there is a trend towards amelioration of voice quality and prosody. Patients with stimulation-associated speech deterioration featured higher overall speech impairment and showed a distinctive pattern of articulatory abnormalities at baseline. Further investigations to confirm these preliminary findings are necessary to allow neurologists to pre-surgically estimate the individual risk of deterioration of speech under stimulation.

  10. Atendimento fonoaudiológico intensivo em pacientes operados de fissura labiopalatina: relato de casos Intensive speech therapy in patients operated for cleft lip and palate: case report

    Directory of Open Access Journals (Sweden)

    Maria do Rosário Ferreira Lima

    2007-09-01

    articulatory disorders, were engaged in an intensive summer training program. For each patient, intervention was carried out daily for three hours, during 10 days, divided into individual and group therapy. At the beginning and at the end of that period, patients were assessed by a speech therapist who did not participate in the sessions. A sample of spontaneous speech, counting from 1 to 20, and repetition of a list of words and sentences with oral occlusive and fricative phonemes were recorded on videotape. All patients showed satisfactory progress with the intensive therapy program, producing the trained phonemes in directed speech, but still requiring follow-up therapy to automatize their production. Intensive speech therapy was shown to be an efficient and feasible alternative in these cases, and could also serve as a strategy at the beginning of conventional speech intervention.

  11. Spontaneous oscillations in microfluidic networks

    Science.gov (United States)

    Case, Daniel; Angilella, Jean-Regis; Motter, Adilson

    2017-11-01

    Precisely controlling flows within microfluidic systems is often difficult, which typically results in systems being heavily reliant on numerous external pumps and computers. Here, I present a simple microfluidic network that exhibits flow rate switching, bistability, and spontaneous oscillations controlled by a single pressure. That is, by solely changing the driving pressure, it is possible to switch between an oscillating and a steady flow state. Such functionality does not rely on external hardware and may even serve as an on-chip memory or timing mechanism. I use an analytic model and rigorous fluid dynamics simulations to demonstrate these results.

  12. General features of spontaneous baryogenesis

    Science.gov (United States)

    Arbuzova, Elena

    2017-04-01

    The classical version of spontaneous baryogenesis is studied in detail. It is shown that the relation between the time derivative of the (pseudo)goldstone field and the baryonic chemical potential essentially depends upon the representation chosen for the fermionic fields with non-zero baryonic number (quarks). The kinetic equation, used for the calculations of the cosmological baryon asymmetry, is generalized to the case of a non-stationary background. The effects of a finite interval of integration over time are also taken into consideration.

  13. Spontaneous osteonecrosis of the knee

    Energy Technology Data Exchange (ETDEWEB)

    Kattapuram, Taj M. [Department of Radiology, Massachusetts General Hospital (United States); Kattapuram, Susan V. [Department of Radiology, Massachusetts General Hospital (United States)], E-mail: skattapuram@partners.org

    2008-07-15

    Spontaneous osteonecrosis of the knee presents with acute onset of severe pain in elderly patients, usually female and usually without a history of trauma. Originally described as idiopathic osteonecrosis, the exact etiology is still debated. Evidence suggests that an acute fracture occurs as a result of chronic stress or minor trauma to a weakened subchondral bone plate. The imaging characteristics on MR reflect the age of the lesion and the symptoms. More appropriate terminology may be 'subchondral insufficiency fracture of the knee' or 'focal subchondral osteonecrosis'.

  14. Speech Communication and Liberal Education.

    Science.gov (United States)

    Bradley, Bert E.

    1979-01-01

    Argues for the continuation of liberal education over career-oriented programs. Defines liberal education as one that develops abilities that transcend occupational concerns, and that enables individuals to cope with shifts in values, vocations, careers, and the environment. Argues that speech communication makes a significant contribution to…

  15. "Free Speech" and "Political Correctness"

    Science.gov (United States)

    Scott, Peter

    2016-01-01

    "Free speech" and "political correctness" are best seen not as opposing principles, but as part of a spectrum. Rather than attempting to establish some absolute principles, this essay identifies four trends that impact on this debate: (1) there are, and always have been, legitimate debates about the--absolute--beneficence of…

  16. Neuronal basis of speech comprehension.

    Science.gov (United States)

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review discusses recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning but also for the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemispheres can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Paraconsistent semantics of speech acts

    NARCIS (Netherlands)

    Dunin-Kȩplicz, Barbara; Strachocka, Alina; Szałas, Andrzej; Verbrugge, Rineke

    2015-01-01

    This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace

  18. Speech recognition implementation in radiology

    International Nuclear Information System (INIS)

    White, Keith S.

    2005-01-01

    Continuous speech recognition (SR) is an emerging technology that allows direct digital transcription of dictated radiology reports. SR systems are being widely deployed in the radiology community. This is a review of technical and practical issues that should be considered when implementing an SR system. (orig.)

  19. Fast Monaural Separation of Speech

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Dyrholm, Mads

    2003-01-01

    a Factorial Hidden Markov Model, with non-stationary assumptions on the source autocorrelations modelled through the Factorial Hidden Markov Model, leads to separation in the monaural case. By extending Hansen's work we find that Roweis' assumptions are necessary for monaural speech separation. Furthermore we...

  20. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    automatic recognition of speech (ASR). Instead, likely for historical reasons, envelopes of power spectrum were adopted as main carrier of linguistic information in ASR. However, the relationships between phonetic values of sounds and their short-term spectral envelopes are not straightforward. Consequently, this asks for ...

  1. Gaucho Gazette: Speech and Sensationalism

    OpenAIRE

    Roberto José Ramos

    2013-01-01

    The Gaucho Gazette presents itself as a “popular newspaper”. It attempts to deny its tabloid aesthetic, claiming merely to disclose what happens, as if the media were simply a reflection of society. This paper seeks to understand and explain its sensationalism through its discourses, drawing on the semiology of Roland Barthes and its transdisciplinary possibilities.

  2. Gaucho Gazette: Speech and Sensationalism

    Directory of Open Access Journals (Sweden)

    Roberto José Ramos

    2013-07-01

    Full Text Available The Gaucho Gazette presents itself as a “popular newspaper”. It attempts to deny its tabloid aesthetic, claiming merely to disclose what happens, as if the media were simply a reflection of society. This paper seeks to understand and explain its sensationalism through its discourses, drawing on the semiology of Roland Barthes and its transdisciplinary possibilities.

  3. Acoustic Analysis of PD Speech

    Directory of Open Access Journals (Sweden)

    Karen Chenausky

    2011-01-01

    Full Text Available According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication.
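    The normalization described in this record — expressing each acoustic measure as a distance from normal in units of the control group's standard deviation, then combining the measures into one composite — can be sketched as follows. This is a minimal illustration under my own assumptions (averaging absolute z-scores); the function name, toy data, and measure labels are hypothetical, not from the study:

    ```python
    import numpy as np

    def composite_distance(patient_row, control_matrix):
        """Z-score each speech measure against the control group and
        average the absolute z-scores into a single composite
        'distance from normal' (illustrative sketch only)."""
        mu = control_matrix.mean(axis=0)
        sd = control_matrix.std(axis=0, ddof=1)
        z = np.abs(patient_row - mu) / sd  # distance in control SDs, per measure
        return z.mean()                    # composite across measures

    # Toy data: rows = control speakers, columns = two speech measures
    # (e.g. syllable rate, VOT variability); values are made up.
    controls = np.array([[0.0, 10.0],
                         [2.0, 14.0]])
    patient = np.array([1.0 + np.sqrt(2), 12.0 + 2 * np.sqrt(2)])
    score = composite_distance(patient, controls)  # 1 SD away on each measure
    ```

    A composite of 3.0 under this convention would correspond to the "three standard deviations closer to normal" scale of improvement the abstract mentions.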

  4. Speech perception of noise with binary gains

    DEFF Research Database (Denmark)

    Wang, DeLiang; Kjems, Ulrik; Pedersen, Michael Syskind

    2008-01-01

    For a given mixture of speech and noise, an ideal binary time-frequency mask is constructed by comparing speech energy and noise energy within local time-frequency units. It is observed that listeners achieve nearly perfect speech recognition from gated noise with binary gains prescribed by the ideal binary mask. Only 16 filter channels and a frame rate of 100 Hz are sufficient for high intelligibility. The results show that, despite a dramatic reduction of speech information, a pattern of binary gains provides an adequate basis for speech perception.
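    The mask construction this record describes can be sketched in a few lines: within each time-frequency unit, the gain is 1 when speech energy exceeds noise energy (local SNR above a 0 dB criterion) and 0 otherwise. A minimal sketch with hypothetical names, not the authors' implementation:

    ```python
    import numpy as np

    def ideal_binary_mask(speech_tf, noise_tf, lc_db=0.0):
        """Binary gain per time-frequency unit: 1 where the local SNR
        (speech energy vs. noise energy) exceeds the criterion lc_db."""
        eps = 1e-12  # guard against log of zero
        snr_db = 10.0 * np.log10((np.abs(speech_tf) ** 2 + eps)
                                 / (np.abs(noise_tf) ** 2 + eps))
        return (snr_db > lc_db).astype(float)

    # Toy magnitudes: 16 channels x 4 frames (the study found 16 channels
    # at a 100 Hz frame rate sufficient for high intelligibility).
    rng = np.random.default_rng(0)
    speech = rng.random((16, 4))
    noise = rng.random((16, 4))
    mask = ideal_binary_mask(speech, noise)
    gated_noise = mask * noise  # noise gated by the binary gains
    ```

    The "gated noise" stimulus in the study corresponds to applying exactly these binary gains to the noise itself, rather than to the mixture.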

  5. Phonological Memory, Attention Control, and Musical Ability: Effects of Individual Differences on Rater Judgments of Second Language Speech

    Science.gov (United States)

    Isaacs, Talia; Trofimovich, Pavel

    2011-01-01

    This study examines how listener judgments of second language speech relate to individual differences in listeners' phonological memory, attention control, and musical ability. Sixty native English listeners (30 music majors, 30 nonmusic majors) rated 40 nonnative speech samples for accentedness, comprehensibility, and fluency. The listeners were…

  6. Differential effects of speech situations on mothers' and fathers' infant-directed and dog-directed speech: An acoustic analysis.

    Science.gov (United States)

    Gergely, Anna; Faragó, Tamás; Galambos, Ágoston; Topál, József

    2017-10-23

    There is growing evidence that dog-directed and infant-directed speech have similar acoustic characteristics, like high overall pitch, wide pitch range, and attention-getting devices. However, it is still unclear whether dog- and infant-directed speech have gender- or context-dependent acoustic features. In the present study, we collected comparable infant-, dog-, and adult-directed speech samples (IDS, DDS, and ADS) in four different speech situations (Storytelling, Task solving, Teaching, and Fixed sentences situations); we obtained the samples from parents whose infants were younger than 30 months of age and who also had a pet dog at home. We found that ADS was different from IDS and DDS, independently of the speakers' gender and the given situation. Higher overall pitch in DDS than in IDS during free situations was also found. Our results show that both parents hyperarticulate their vowels when talking to children but not when addressing dogs: this result is consistent with the goal of hyperspeech in language tutoring. Mothers, however, exaggerate their vowels for their infants under 18 months more than fathers do. Our findings suggest that IDS and DDS have context-dependent features and support the notion that people adapt their prosodic features to the acoustic preferences and emotional needs of their audience.

  7. Does brain injury impair speech and gesture differently?

    Directory of Open Access Journals (Sweden)

    Tilbe Göksun

    2016-09-01

    Full Text Available People often use spontaneous gestures when talking about space, such as when giving directions. In a recent study from our lab, we examined whether focal brain-injured individuals' naming of motion event components of manner and path (represented in English by verbs and prepositions, respectively) is impaired selectively, and whether gestures compensate for impairment in speech. Left or right hemisphere damaged patients and elderly control participants were asked to describe motion events (e.g., walking around) depicted in brief videos. Results suggest that producing verbs and prepositions can be separately impaired in the left hemisphere and that gesture production compensates for naming impairments when damage involves specific areas in the left temporal cortex.

  8. Quantifiers Undone: Reversing Predictable Speech Errors in Comprehension.

    Science.gov (United States)

    Frazier, Lyn; Clifton, Charles

    2011-03-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as "Many students often turn in their assignments late." Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always, or few-seldom). This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don't explain why listeners fail to apprehend the 'extra meaning' added by the potentially redundant material only in limited circumstances.

  9. Presence of Bacteria in Spontaneous Achilles Tendon Ruptures.

    Science.gov (United States)

    Rolf, Christer G; Fu, Sai-Chuen; Hopkins, Chelsea; Luan, Ju; Ip, Margaret; Yung, Shu-Hang; Friman, Göran; Qin, Ling; Chan, Kai-Ming

    2017-07-01

    The structural pathology of Achilles tendon (AT) ruptures resembles tendinopathy, but the causes remain unknown. Recently, a number of diseases were found to be attributed to bacterial infections, resulting in low-grade inflammation and progressive matrix disturbance. The authors speculate that spontaneous AT ruptures may also be influenced by the presence of bacteria. Hypothesis: Bacteria are present in ruptured ATs but not in healthy tendons. Study Design: Cross-sectional study; Level of evidence, 3. Patients with spontaneous AT ruptures and patients undergoing anterior cruciate ligament (ACL) reconstruction were recruited for this study. During AT surgical repair, excised tendinopathic tissue was collected, and healthy tendon samples were obtained as controls from hamstring tendon grafts used in ACL reconstruction. Half of every sample was reserved for DNA extraction and the other half for histology. Polymerase chain reaction (PCR) was conducted using 16S rRNA gene universal primers, and the PCR products were sequenced for the identification of bacterial species. A histological examination was performed to compare tendinopathic changes in the case and control samples. Five of 20 AT rupture samples were positive for the presence of bacterial DNA, while none of the 23 hamstring tendon samples were positive. Sterile operating and experimental conditions and tests on samples, controlling for harvesting and processing procedures, ruled out the chance of postoperative bacterial contamination. The species identified predominantly belonged to the Staphylococcus genus. AT rupture samples exhibited histopathological features characteristic of tendinopathy, and most healthy hamstring tendon samples displayed normal tendon features. There were no apparent differences in histopathology between the bacterial DNA-positive and bacterial DNA-negative AT rupture samples. The authors have demonstrated the presence of bacterial DNA in ruptured AT samples, which may suggest the potential involvement of bacteria

  10. Optimal Wavelets for Speech Signal Representations

    Directory of Open Access Journals (Sweden)

    Shonda L. Walker

    2003-08-01

    Full Text Available It is well known that in many speech processing applications, speech signals are characterized by their voiced and unvoiced components. Voiced speech components contain a dense frequency spectrum with many harmonics, and the periodic or semi-periodic nature of voiced signals lends itself to Fourier processing. Unvoiced speech contains many high-frequency components and thus resembles random noise. Several methods for voiced and unvoiced speech representation that utilize wavelet processing have been developed. These methods seek to improve the accuracy of wavelet-based speech signal representations using adaptive wavelet techniques, superwavelets (linear combinations of adaptive wavelets), Gaussian methods, and a multi-resolution sinusoidal transform approach, to mention a few. This paper addresses the relative performance of these wavelet methods and evaluates the usefulness of wavelet processing in speech signal representations. In addition, this paper addresses some of the hardware considerations for the wavelet methods presented.
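The low-pass/high-pass split underlying all the wavelet representations surveyed above can be illustrated with the simplest possible baseline, a one-level Haar transform. The adaptive wavelets and superwavelets discussed in the paper are far more elaborate; this sketch only shows the basic approximation/detail decomposition that those methods refine.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: splits a signal into a
    low-pass (approximation) and high-pass (detail) band at half length."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: smooth trend
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: local change
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level; perfect reconstruction up to rounding."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x
```

For voiced frames most energy lands in the approximation band; noise-like unvoiced frames spread energy into the detail band, which is one motivation for treating the two components differently.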

  11. Compressed Sensing Adaptive Speech Characteristics Research

    Directory of Open Access Journals (Sweden)

    Long Tao

    2014-09-01

    Full Text Available The sparsity of speech signals in the DCT domain is exploited. Since speech can be separated into unvoiced and voiced components, an adaptive-measurement speech recovery method based on compressed sensing is proposed in this paper. First, the measurement points are distributed according to the ratio of voiced energy to the energy of the entire speech segment. The speech segment is then divided into frames: if a frame is unvoiced, its number of measurements is allocated according to its zero-crossing rate and energy; if the frame is voiced, its number of measurements is allocated according to its energy. The experimental results show that the performance of this method is superior to applying compressed sensing directly.
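The per-frame measurement allocation described above might be sketched as follows. The ZCR threshold for the voiced/unvoiced split and the weighting of zero-crossing rate against energy are illustrative assumptions, not values from the paper.

```python
import numpy as np

def zero_crossing_rate(frame):
    # Fraction of consecutive-sample sign changes in the frame.
    return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

def allocate_measurements(frames, total_m, zcr_voiced_max=0.25):
    """Distribute a total compressed-sensing measurement budget over frames.

    Voiced frames (low ZCR) receive measurements in proportion to energy;
    unvoiced frames (high ZCR) in proportion to a combined ZCR/energy score.
    Threshold and weights are hypothetical, for illustration only.
    """
    energies = np.array([float(np.sum(f ** 2)) for f in frames])
    zcrs = np.array([zero_crossing_rate(f) for f in frames])
    voiced = zcrs < zcr_voiced_max
    score = np.where(voiced, energies,
                     0.5 * energies + 0.5 * zcrs * energies.mean())
    weights = score / score.sum()
    counts = np.maximum(1, np.round(weights * total_m).astype(int))
    return counts, voiced
```

High-energy voiced frames thus receive most of the budget, while every frame keeps at least one measurement so it can still be reconstructed.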

  12. Radiological evaluation of spontaneous pneumoperitoneum

    International Nuclear Information System (INIS)

    Kim, H. S.; Kim, J. D.; Rhee, H. S.

    1982-01-01

    112 cases of spontaneous pneumoperitoneum, the causes of which were confirmed by clinical and surgical procedures at Presbyterian Medical Center from January 1977 to July 1981, were reviewed radiologically. The results were as follows: 1. Perforation of duodenal ulcer (46/112: 41.1%), stomach ulcer (22/112: 19.6%), and stomach cancer (11/112: 9.8%) were the three most common causes of spontaneous pneumoperitoneum, together accounting for 70.5% of all cases. 2. The most common site of free gas was both subdiaphragmatic areas (46: 41.1%). Others were Rt. subdiaphragmatic only (31: 27.7%), both subdiaphragmatic with subhepatic (16: 14.3%), Rt. subdiaphragmatic with subhepatic (7: 6.2%), Lt. subdiaphragmatic only (5: 4.4%), diffuse in abdomen (4: 3.6%), and subhepatic only (3: 2.7%); thus 92.0% (103/112) were located in the RUQ. 3. The radiological shape of the free gas was classified as crescent (52: 46.4%) for a small amount, half-moon (21: 18.8%) for a moderate amount, and large or diffuse (39: 34.8%) for a large amount. 4. Patients aged 31 to 60 accounted for 69.1% (77/112), and males predominated (5.2:1). 5. The patient position showing free air most frequently was erect.

  13. Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.

    Science.gov (United States)

    Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy

    2017-08-22

    To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days of the week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits with a further two-thirds rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.
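One of the acoustic measures reported above, variability of silence length, can be approximated with a simple energy gate over fixed-length frames. The study's actual acoustic pipeline is not specified in the abstract, so the frame length and silence threshold below are assumptions.

```python
import numpy as np

def silence_interval_stats(samples, sr, frame_ms=20, silence_db=-35.0):
    """Return (mean, std) of silent-interval durations in seconds.

    A frame is 'silent' if its RMS level, relative to the loudest frame,
    falls below `silence_db`. Both parameters are illustrative, not the
    study's own settings.
    """
    n = int(sr * frame_ms / 1000)
    frames = [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]
    ref = max(np.sqrt(np.mean(f ** 2)) for f in frames) + 1e-12
    silent = [20 * np.log10(np.sqrt(np.mean(f ** 2)) / ref + 1e-12) < silence_db
              for f in frames]
    # Run-length encode consecutive silent frames into interval durations.
    runs, count = [], 0
    for s in silent:
        if s:
            count += 1
        elif count:
            runs.append(count * frame_ms / 1000.0)
            count = 0
    if count:
        runs.append(count * frame_ms / 1000.0)
    if not runs:
        return 0.0, 0.0
    return float(np.mean(runs)), float(np.std(runs))
```

The standard deviation of the run lengths is the "variability of silence length" style of measure that the study relates to inferior frontal and insular cortical thickness.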

  14. Speech Enhancement with Natural Sounding Residual Noise Based on Connected Time-Frequency Speech Presence Regions

    Directory of Open Access Journals (Sweden)

    Sørensen Karsten Vandborg

    2005-01-01

    Full Text Available We propose time-frequency domain methods for noise estimation and speech enhancement. A speech presence detection method is used to find connected time-frequency regions of speech presence. These regions are used by a noise estimation method, and both the speech presence decisions and the noise estimate are used in the speech enhancement method. Different attenuation rules are applied to regions with and without speech presence to achieve enhanced speech with natural sounding attenuated background noise. The proposed speech enhancement method has a computational complexity low enough to make it feasible for application in hearing aids. An informal listening test shows that the proposed method achieves significantly higher mean opinion scores than minimum mean-square error log-spectral amplitude (MMSE-LSA) and decision-directed MMSE-LSA.
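Applying different attenuation rules to speech-presence and speech-absence time-frequency bins might look like the following Wiener-style sketch. The paper's actual gain rules, noise estimator, and gain floors are not reproduced here; both floor values are illustrative assumptions.

```python
import numpy as np

def enhance(stft_mag, noise_psd, presence_mask,
            floor_speech=0.1, floor_noise=0.03):
    """Attenuate an STFT magnitude spectrogram with a Wiener-like gain,
    using a higher gain floor in speech-presence bins (to protect speech)
    and a lower one elsewhere (to leave natural-sounding residual noise).
    """
    # A posteriori SNR estimate per time-frequency bin.
    snr = np.maximum(stft_mag ** 2 / noise_psd - 1.0, 0.0)
    gain = snr / (snr + 1.0)                 # Wiener-like suppression gain
    floor = np.where(presence_mask, floor_speech, floor_noise)
    return np.maximum(gain, floor) * stft_mag
```

Keeping a nonzero floor in speech-absence regions, rather than zeroing them, is what yields attenuated but natural-sounding background noise instead of musical-noise artifacts.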

  15. Perception of speech sounds in school-age children with speech sound disorders

    Science.gov (United States)

    Preston, Jonathan L.; Irwin, Julia R.; Turcios, Jacqueline

    2015-01-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System (Rvachew, 1994), which has been effectively used to assess preschoolers’ ability to perform goodness judgments, is explored for school-age children with residual speech errors (RSE). However, data suggest that this particular task may not be sensitive to perceptual differences in school-age children. The need for the development of clinical tools for assessment of speech perception in school-age children with RSE is highlighted, and clinical suggestions are provided. PMID:26458198

  16. Perception of Speech Sounds in School-Aged Children with Speech Sound Disorders.

    Science.gov (United States)

    Preston, Jonathan L; Irwin, Julia R; Turcios, Jacqueline

    2015-11-01

    Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSE is highlighted, and clinical suggestions are provided.

  17. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... this issue, Tuomainen et al. (2005) used sine-wave speech stimuli created from three time-varying sine waves tracking the formants of a natural speech signal. Naïve observers tend not to recognize sine wave speech as speech but become able to decode its phonetic content when informed of the speech......-like nature of the signal. The sine-wave speech was dubbed onto congruent and incongruent video of a talking face. Tuomainen et al. found that the McGurk effect did not occur for naïve observers, but did occur when observers were informed. This indicates that the McGurk illusion is due to a mechanism...

  18. A Case of Multiple Spontaneous Keloid Scars

    Directory of Open Access Journals (Sweden)

    Abdulhadi Jfri

    2015-07-01

    Full Text Available Keloid scars result from an abnormal healing response to cutaneous injury or inflammation that extends beyond the borders of the original wound. Spontaneous keloid scars forming in the absence of any previous trauma or surgical procedure are rare. Certain syndromes have been associated with this phenomenon, and the few reports describing a single spontaneous keloid scar raise the question of whether such scars are really spontaneous. Here, we present a 27-year-old mentally retarded single female with orbital hypertelorism, broad nasal bridge, repaired cleft lip, and high-arched palate, who presented with progressive multiple spontaneous keloid scars on different parts of her body, confirmed histologically by the presence of typical keloidal collagen. This report supports the fact that keloid scars can appear spontaneously and are possibly linked to a genetic factor. Furthermore, it describes a new presentation of spontaneous keloid scars in the form of multiple large lesions at different sites of the body.

  19. Spontaneity of communication in individuals with autism.

    Science.gov (United States)

    Chiang, Hsu-Min; Carter, Mark

    2008-04-01

    This article provides an examination of issues related to spontaneity of communication in children with autism. Deficits relating to spontaneity or initiation are frequently reported in individuals with autism, particularly in relation to communication and social behavior. Nevertheless, spontaneity is not necessarily clearly conceptualized or measured. Several approaches to conceptualization of communicative spontaneity are examined with a particular focus on the continuum model and how it might be practically applied. A range of possible explanations for deficits in spontaneity of communication in children with autism is subsequently explored, including external factors (highly structured teaching programs, failure to systematically instruct for spontaneity) and intrinsic characteristics (intellectual disability, stimulus overselectivity, weak central coherence). Possible implications for future research are presented.

  20. Adaptive redundant speech transmission over wireless multimedia sensor networks based on estimation of perceived speech quality.

    Science.gov (United States)

    Kang, Jin Ah; Kim, Hong Kook

    2011-01-01

    An adaptive redundant speech transmission (ARST) approach to improve the perceived speech quality (PSQ) of speech streaming applications over wireless multimedia sensor networks (WMSNs) is proposed in this paper. The proposed approach estimates the PSQ as well as the packet loss rate (PLR) from the received speech data. Subsequently, it decides whether the transmission of redundant speech data (RSD) is required in order to assist a speech decoder to reconstruct lost speech signals for high PLRs. According to the decision, the proposed ARST approach controls the RSD transmission, then it optimizes the bitrate of speech coding to encode the current speech data (CSD) and RSD bitstream in order to maintain the speech quality under packet loss conditions. The effectiveness of the proposed ARST approach is then demonstrated using the adaptive multirate-narrowband (AMR-NB) speech codec and ITU-T Recommendation P.563 as a scalable speech codec and the PSQ estimation, respectively. It is shown from the experiments that a speech streaming application employing the proposed ARST approach significantly improves speech quality under packet loss conditions in WMSNs.
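The RSD decision logic described above can be sketched as follows. The PSQ/PLR thresholds and the way the bitrate budget is split between primary and redundant streams are illustrative assumptions; only the AMR-NB mode rates themselves are standard codec values.

```python
def decide_redundancy(psq_mos, plr, psq_threshold=3.0, plr_threshold=0.05):
    """Decide whether to transmit redundant speech data (RSD) and pick
    AMR-NB bitrates for the current and redundant streams.

    `psq_mos` is an estimated perceived-quality score (e.g. from P.563)
    and `plr` the measured packet loss rate; both thresholds are
    hypothetical tuning points, not values from the paper.
    """
    amr_nb_modes = [4.75, 5.90, 7.40, 12.2]  # a subset of AMR-NB rates, kbit/s
    send_rsd = psq_mos < psq_threshold or plr > plr_threshold
    if send_rsd:
        # Lower the primary bitrate to make room for the redundant stream
        # within the same transmission budget.
        primary, redundant = amr_nb_modes[1], amr_nb_modes[0]
    else:
        primary, redundant = amr_nb_modes[-1], 0.0
    return send_rsd, primary, redundant
```

Under clean network conditions all of the budget goes to the primary stream at the highest rate; once quality degrades or losses rise, the sender trades primary bitrate for redundancy so the decoder can reconstruct lost frames.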