WorldWideScience

Sample records for non-speech mouth movements

  1. Tolerance for audiovisual asynchrony is enhanced by the spectrotemporal fidelity of the speaker's mouth movements and speech.

    Science.gov (United States)

    Shahin, Antoine J; Shen, Stanley; Kerlin, Jess R

    2017-01-01

    We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA) and the spectrotemporal fidelity of the spoken words and the speaker's mouth movements. In two experiments that varied only in the temporal order of sensory modality, visual speech leading (exp1) or lagging (exp2) acoustic speech, participants watched intact and blurred videos of a speaker uttering trisyllabic words and nonwords that were noise vocoded with 4, 8, 16, or 32 channels. They judged whether the speaker's mouth movements and the speech sounds were in-sync or out-of-sync. Individuals perceived synchrony (tolerated AVOA) on more trials when the acoustic speech was more speech-like (8 channels and higher vs. 4 channels), and when visual speech was intact rather than blurred (exp1 only). These findings suggest that enhanced spectrotemporal fidelity of the audiovisual (AV) signal prompts the brain to widen the window of integration, promoting the fusion of temporally distant AV percepts.
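
    The noise-vocoding manipulation named here has a standard construction: split the waveform into N frequency bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise. A minimal sketch of that construction follows; the filter choices (Butterworth, log-spaced band edges, 30 Hz envelope cutoff) are common defaults assumed for illustration, not the study's exact parameters.

```python
# Minimal noise vocoder: N analysis bands -> envelopes -> envelope-modulated noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
    env_sos = butter(2, 30.0, fs=fs, output="sos")      # 30 Hz envelope smoother
    noise = np.random.randn(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, np.abs(sosfiltfilt(band, x)))    # band envelope
        out += np.clip(env, 0.0, None) * sosfiltfilt(band, noise)   # modulate noise
    return out / (np.max(np.abs(out)) + 1e-12)
```

    With 4 channels the envelope cues are coarse; from 8 channels upward the output becomes markedly more speech-like, which is the contrast the study exploits.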

  2. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    Science.gov (United States)

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  3. Mouth reversal extinguishes mismatch negativity induced by the McGurk illusion

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    2013-01-01

    The sight of articulatory mouth movements (visual speech) influences auditory speech perception. This is demonstrated by the McGurk illusion in which incongruent visual speech alters the auditory phonetic percept. In behavioral studies, reversal of the vertical mouth direction has been reported...... by visual speech with either upright (unaltered) or vertically reversed mouth area. In a preliminary analysis, we found a Mismatch Negativity component induced by the McGurk illusion for 6 of 17 participants at electrode Cz when the mouth area was upright. In comparison, these participants produced...

  4. Development of prenatal lateralization: evidence from fetal mouth movements.

    Science.gov (United States)

    Reissland, N; Francis, B; Aydin, E; Mason, J; Exley, K

    2014-05-28

    Human lateralized behaviors relate to the asymmetric development of the brain. Research on the prenatal origins of laterality is equivocal, with some studies suggesting that fetuses exhibit lateralized behavior and others not finding such laterality. Given that by around 22 weeks of gestation the left cerebral hemisphere is significantly larger than the right in both male and female fetuses, we expected that the right side of the fetal face would show more movement with increased gestation. This longitudinal study investigated whether fetuses from 24 to 36 weeks of gestation showed increasingly lateralized behaviors during mouth opening and whether lateralized mouth movements are related to fetal age, gender, and maternal self-reported prenatal stress. Following ethical approval, fifteen healthy fetuses (8 girls) of primigravid mothers were scanned four times between 24 and 36 weeks of gestation. Two types of mouth opening movements - upper lip raiser and mouth stretch - were coded in 60 scans for 10 min each. We modeled the proportion of right mouth openings for each fetal scan using a generalized linear mixed model, which takes account of the repeated measures design. There was a significant increase in the proportion of lateralized mouth openings over the period, increasing by 11% for each week of gestational age (LRT change in deviance = 10.92, 1 df; p < 0.001). No gender differences were found, nor was there any effect of maternally reported stress on fetal lateralized mouth movements. There was also evidence of a left lateralization preference in mouth movement, although no evidence of changes in lateralization bias over time. This longitudinal study provides important new insights into the development of lateralized mouth movements from 24 to 36 weeks of gestation. Copyright © 2014 Elsevier Inc. All rights reserved.
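
    As a sketch of the analysis described, a binomial GLMM relating per-event right-sidedness to gestational age, with a random intercept per fetus, could look like the following; the synthetic data, column names, and variational-Bayes fit are illustrative assumptions, not the authors' code (they report a likelihood-ratio test on the model deviance).

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Synthetic stand-in data: one row per coded mouth opening
rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "fetus": rng.integers(0, 15, n),            # 15 fetuses, repeated scans
    "gest_week": rng.choice([24, 28, 32, 36], n),
})
# Right-sided openings become more likely with gestational age
p = 1 / (1 + np.exp(-0.1 * (df.gest_week - 30)))
df["right"] = rng.binomial(1, p)

model = BinomialBayesMixedGLM.from_formula(
    "right ~ gest_week",        # fixed effect of gestational age
    {"fetus": "0 + C(fetus)"},  # random intercept per fetus
    df,
)
result = model.fit_vb()         # variational Bayes fit
print(result.summary())
```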

  5. Speech profile of the mouth breather (Perfil da fala do respirador oral)

    Directory of Open Access Journals (Sweden)

    Cintia Megumi Nishimura

    2010-06-01

    BACKGROUND: speech alterations in mouth breathers. PURPOSE: this study surveyed the literature of the last ten years on the speech profile of mouth breathers. CONCLUSION: more thorough studies on this subject are needed to identify the speech characteristics of mouth breathers. Such information is very useful for the speech therapist, both for making a good assessment and for providing the best care for these individuals.

  6. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS.

  7. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/, a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  8. Lip Movement Exaggerations during Infant-Directed Speech

    Science.gov (United States)

    Green, Jordan R.; Nip, Ignatius S. B.; Wilson, Erin M.; Mefferd, Antje S.; Yunusova, Yana

    2010-01-01

    Purpose: Although a growing body of literature has identified the positive effects of visual speech on speech and language learning, oral movements of infant-directed speech (IDS) have rarely been studied. This investigation used 3-dimensional motion capture technology to describe how mothers modify their lip movements when talking to their…

  9. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  10. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  11. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers...... that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech specific mode of audiovisual integration underlying the McGurk illusion....

  12. Head movements encode emotions during speech and song.

    Science.gov (United States)

    Livingstone, Steven R; Palmer, Caroline

    2016-04-01

    When speaking or singing, vocalists often move their heads in an expressive fashion, yet the influence of emotion on vocalists' head motion is unknown. Using a comparative speech/song task, we examined whether vocalists' intended emotions influence head movements and whether those movements influence the perceived emotion. In Experiment 1, vocalists were recorded with motion capture while speaking and singing each statement with different emotional intentions (very happy, happy, neutral, sad, very sad). Functional data analyses showed that head movements differed in translational and rotational displacement across emotional intentions, yet were similar across speech and song, transcending differences in F0 (varied freely in speech, fixed in song) and lexical variability. Head motion specific to emotional state occurred before and after vocalizations, as well as during sound production, confirming that some aspects of movement were not simply a by-product of sound production. In Experiment 2, observers accurately identified vocalists' intended emotion on the basis of silent, face-occluded videos of head movements during speech and song. These results provide the first evidence that head movements encode a vocalist's emotional intent and that observers decode emotional information from these movements. We discuss implications for models of head motion during vocalizations and applied outcomes in social robotics and automated emotion recognition. (c) 2016 APA, all rights reserved.

  13. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems) the intention to interact directly with a conspecific was transferred from gestures to words, inducing modification in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was a precursor of a more general system reciprocally relating gesture to word.

  14. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  15. Model-Based Synthesis of Visual Speech Movements from 3D Video

    Directory of Open Access Journals (Sweden)

    Edge, James D.

    2009-01-01

    We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation a model of how lips move is built and is used in the animation of visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the most similar stored phonetic units to the target utterance during synthesis. By combining properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection we improve the quality of our speech synthesis.

  16. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training … Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  17. Coordination of head movements and speech in first encounter dialogues

    DEFF Research Database (Denmark)

    Paggio, Patrizia

    2015-01-01

    This paper presents an analysis of the temporal alignment between head movements and associated speech segments in the NOMCO corpus of first encounter dialogues [1]. Our results show that head movements tend to start slightly before the onset of the corresponding speech sequence and to end slightly after, but also that there are delays in both directions in the range of ±1 s. Various factors that may influence delay duration are investigated. Correlations are found between delay length and the duration of the speech sequences associated with the head movements. Effects due to the different...

  18. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, in collecting multi-modal speech information, we need a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large. This is one reason preventing the use of multi-modal speech processing. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, which makes it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement by the light reflected from the mouth region. In experiments, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the sensor can be used as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.
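
    To make the "lip movement features" idea concrete, here is a minimal sketch of recognizing words from the sensor's 1-D reflectance signal; the sampling rate, level-plus-velocity features, and DTW template matching are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def features(sig, fs=100):
    # Amplitude-normalize the reflectance signal, then stack level + velocity
    sig = (sig - sig.mean()) / (sig.std() + 1e-9)
    return np.stack([sig, np.gradient(sig) * fs]).T   # shape (n_samples, 2)

def dtw_dist(a, b):
    # Plain dynamic time warping between two feature sequences
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def recognize(sig, templates):
    # templates: dict mapping word -> reference reflectance signal
    f = features(sig)
    return min(templates, key=lambda w: dtw_dist(f, features(templates[w])))
```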

  19. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  20. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
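
    Two of the reported statistics are easy to state concretely: the correlation between mouth-opening area and the acoustic envelope, and the 2-7 Hz modulation of both signals. A minimal sketch with synthetic stand-in signals and an assumed common 60 Hz frame rate:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 60.0  # frame rate after resampling both signals to a common grid (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-ins sharing a ~4 Hz syllabic modulation, as real AV speech does
mouth_area = 1 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.randn(len(t))
audio_env = 1 + 0.4 * np.sin(2 * np.pi * 4 * t - 0.5) + 0.1 * np.random.randn(len(t))

# Correlation between mouth-opening area and acoustic envelope
r = np.corrcoef(mouth_area, audio_env)[0, 1]

# Band-pass to the 2-7 Hz range reported for audiovisual speech
b, a = butter(4, [2 / (fs / 2), 7 / (fs / 2)], btype="band")
frac = (np.sum(filtfilt(b, a, mouth_area) ** 2)
        / np.sum((mouth_area - mouth_area.mean()) ** 2))
print(f"r = {r:.2f}; fraction of mouth-area variance in 2-7 Hz band = {frac:.2f}")
```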

  1. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial overlap in activation between speech and non-speech across these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings suggest a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere: supporting the production of vocal tract gestures that are not limited to speech processing.

  2. Real-time speech-driven animation of expressive talking faces

    Science.gov (United States)

    Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli

    2011-05-01

    In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visually synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.

  3. Functional magnetic resonance imaging exploration of combined hand and speech movements in Parkinson's disease.

    Science.gov (United States)

    Pinto, Serge; Mancini, Laura; Jahanshahi, Marjan; Thornton, John S; Tripoliti, Elina; Yousry, Tarek A; Limousin, Patricia

    2011-10-01

    Among the repertoire of motor functions, although hand movement and speech production tasks have been investigated widely by functional neuroimaging, paradigms combining both movements have been studied less. Such paradigms are of particular interest in Parkinson's disease, in which patients have specific difficulties performing two movements simultaneously. In 9 unmedicated patients with Parkinson's disease and 15 healthy control subjects, externally cued tasks (i.e., hand movement, speech production, and combined hand movement and speech production) were performed twice in a random order, and functional magnetic resonance imaging detected cerebral activations compared to rest. F-statistics tested within-group effects (significant activations at P values < …, clusters > 10 voxels). For control subjects, the combined-task activations comprised the sum of those obtained during hand movement and speech production performed separately, reflecting the neural correlates of performing movements sharing similar programming modalities. In patients with Parkinson's disease, only activations underlying hand movement were observed during the combined task. We interpreted this phenomenon as patients' potential inability to recruit facilitatory activations while performing two movements simultaneously. This lost capacity could be related to a functional prioritization of one movement (i.e., hand movement) over the other (i.e., speech production). Our observation could also reflect the inability of patients with Parkinson's disease to intrinsically engage the motor coordination necessary to perform a combined task. Copyright © 2011 Movement Disorder Society.

  4. Stability and composition of functional synergies for speech movements in children with developmental speech disorders

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.; van Lieshout, P.; Nijland, L.

    2011-01-01

    The aim of this study was to investigate the consistency and composition of functional synergies for speech movements in children with developmental speech disorders. Kinematic data were collected on the reiterated productions of syllables spa (/spaː/) and paas (/paːs/) by 10 6- to 9-year-olds with developmental speech disorders…

  5. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech sylla...

  6. Stability and Composition of Functional Synergies for Speech Movements in Children with Developmental Speech Disorders

    Science.gov (United States)

    Terband, H.; Maassen, B.; van Lieshout, P.; Nijland, L.

    2011-01-01

    The aim of this study was to investigate the consistency and composition of functional synergies for speech movements in children with developmental speech disorders. Kinematic data were collected on the reiterated productions of syllables spa (/spaː/) and paas (/paːs/) by 10 6- to 9-year-olds with developmental speech…

  7. Temporal predictive mechanisms modulate motor reaction time during initiation and inhibition of speech and hand movement.

    Science.gov (United States)

    Johari, Karim; Behroozmand, Roozbeh

    2017-08-01

    Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally-predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture temporal dynamics of sensory cues in order to produce faster movements in responses to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Sentence-Level Movements in Parkinson's Disease: Loud, Clear, and Slow Speech

    Science.gov (United States)

    Kearney, Elaine; Giles, Renuka; Haworth, Brandon; Faloutsos, Petros; Baljko, Melanie; Yunusova, Yana

    2017-01-01

    Purpose: To further understand the effect of Parkinson's disease (PD) on articulatory movements in speech and to expand our knowledge of therapeutic treatment strategies, this study examined movements of the jaw, tongue blade, and tongue dorsum during sentence production with respect to speech intelligibility and compared the effect of varying…

  9. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  10. Measurements on the movement of the lower jaw in speech

    NARCIS (Netherlands)

    Nooteboom, S.G.; Slis, I.H.

    1970-01-01

    This report concerns some preliminary measurements of the movement of the lower jaw in speech. Such measurements may be interesting for several reasons. One of these is that, more easily than measurements of the movements of other articulators, they may give some insight into the effect of stress,

  11. Why movement is captured by music, but less by speech: role of temporal regularity.

    Science.gov (United States)

    Dalla Bella, Simone; Białuńska, Anita; Sowiński, Jakub

    2013-01-01

    Music has a pervasive tendency to rhythmically engage our body. In contrast, synchronization with speech is rare. Music's superiority over speech in driving movement probably results from the isochrony of musical beats, as opposed to irregular speech stresses. Moreover, the presence of regular patterns of embedded periodicities (i.e., meter) may be critical in making music particularly conducive to movement. We investigated these possibilities by asking participants to synchronize with isochronous auditory stimuli (target), while music and speech distractors were presented at one of various phase relationships with respect to the target. In Exp. 1, familiar musical excerpts and fragments of children's poetry were used as distractors. The stimuli were manipulated in terms of beat/stress isochrony and average pitch to achieve maximum comparability. In Exp. 2, the distractors were well-known songs performed with lyrics, on a reiterated syllable, and spoken lyrics, all having the same meter. Music perturbed synchronization with the target stimuli more than speech fragments did. However, music's superiority over speech disappeared when distractors shared isochrony and the same meter. Music's peculiar and regular temporal structure is likely to be the main factor fostering tight coupling between sound and movement.

  12. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  13. Speech production gains following constraint-induced movement therapy in children with hemiparesis.

    Science.gov (United States)

    Allison, Kristen M; Reidy, Teressa Garcia; Boyle, Mary; Naber, Erin; Carney, Joan; Pidcock, Frank S

    2017-01-01

    The purpose of this study was to investigate changes in the speech skills of children with hemiparesis and speech impairment after participation in a constraint-induced movement therapy (CIMT) program. While case studies have reported collateral speech gains following CIMT, the effect of CIMT on speech production has not, to the knowledge of these investigators, previously been directly investigated. Eighteen children with hemiparesis and co-occurring speech impairment participated in a 21-day clinical CIMT program. The Goldman-Fristoe Test of Articulation-2 (GFTA-2) was used to assess children's articulation of speech sounds before and after the intervention. Changes in percent of consonants correct (PCC) on the GFTA-2 were used as a measure of change in speech production. Children made significant gains in PCC following CIMT. Gains were similar in children with left- and right-sided hemiparesis, and across age groups. This study reports significant collateral gains in speech production following CIMT and suggests that the benefits of CIMT may also extend to speech motor domains.
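
    For reference, PCC is conventionally computed as the proportion of target consonants produced correctly (the standard formulation of the metric, stated here as an assumption rather than quoted from the study):

```latex
\mathrm{PCC} = \frac{\text{consonants produced correctly}}{\text{total target consonants}} \times 100
```

    For example, 68 correct productions out of 80 target consonants gives PCC = 85.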

  14. Bilingualism modulates infants' selective attention to the mouth of a talking face.

    Science.gov (United States)

    Pons, Ferran; Bosch, Laura; Lewkowicz, David J

    2015-04-01

    Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues. We measured selective attention to talking faces in 4-, 8-, and 12-month-old Catalan and Spanish monolingual and bilingual infants. Monolinguals looked more at the eyes than the mouth at 4 months and more at the mouth than the eyes at 8 months in response to both native and nonnative speech, but they looked more at the mouth than the eyes at 12 months only in response to nonnative speech. In contrast, bilinguals looked equally at the eyes and mouth at 4 months, more at the mouth than the eyes at 8 months, and more at the mouth than the eyes at 12 months, and these patterns of responses were found for both native and nonnative speech at all ages. Thus, to support their dual-language acquisition processes, bilingual infants exploit the greater perceptual salience of redundant audiovisual speech cues at an earlier age and for a longer time than monolingual infants. © The Author(s) 2015.

  15. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2017-02-01

    Audiovisual speech integration combines information from auditory speech (the talker's voice) and visual speech (the talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba)? We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
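
    The causal-inference step can be made concrete. Below is the generic Gaussian formulation of the posterior probability of a common cause (in the style of the causal-inference models this work builds on); the 1-D feature space and all parameter values are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def posterior_common_cause(xa, xv, sa=1.0, sv=1.0, sp=2.0, p_common=0.5):
    """P(common cause | auditory cue xa, visual cue xv).

    Cues live in a 1-D phonetic feature space; sa, sv are sensory noise SDs,
    sp is the SD of a zero-mean Gaussian prior over sources."""
    # Likelihood of both cues under one shared source (source integrated out)
    var1 = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    like1 = (np.exp(-0.5 * ((xa - xv)**2 * sp**2 + xa**2 * sv**2 + xv**2 * sa**2)
                    / var1) / (2 * np.pi * np.sqrt(var1)))
    # Likelihood under two independent sources (product of the two marginals)
    like2 = (np.exp(-0.5 * xa**2 / (sa**2 + sp**2))
             / np.sqrt(2 * np.pi * (sa**2 + sp**2))
             * np.exp(-0.5 * xv**2 / (sv**2 + sp**2))
             / np.sqrt(2 * np.pi * (sv**2 + sp**2)))
    return like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

# Similar cues -> high posterior (integrate, McGurk-like fusion);
# discrepant cues -> low posterior (segregate, no fusion).
print(posterior_common_cause(0.5, 0.7), posterior_common_cause(0.5, 4.0))
```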

  16. Speech misperception: speaking and seeing interfere differently with hearing.

    Directory of Open Access Journals (Sweden)

    Takemi Mochida

    Speech perception is thought to be linked to speech motor production. This linkage is considered to mediate multimodal aspects of speech perception, such as audio-visual and audio-tactile integration. However, direct coupling between articulatory movement and auditory perception has been little studied. The present study reveals a clear dissociation between the effects of a listener's own speech action and the effects of viewing another's speech movements on the perception of auditory phonemes. We assessed the intelligibility of the syllables [pa], [ta], and [ka] when listeners silently and simultaneously articulated syllables that were congruent/incongruent with the syllables they heard. The intelligibility was compared with a condition where the listeners simultaneously watched another's mouth producing congruent/incongruent syllables, but did not articulate. The intelligibility of [ta] and [ka] was degraded by articulating [ka] and [ta] respectively, which are associated with the same primary articulator (the tongue) as the heard syllables. But it was not affected by articulating [pa], which is associated with a different primary articulator (the lips) from the heard syllables. In contrast, the intelligibility of [ta] and [ka] was degraded by watching the production of [pa]. These results indicate that the articulatory-induced distortion of speech perception occurs in an articulator-specific manner while visually induced distortion does not. The articulator-specific nature of the auditory-motor interaction in speech perception suggests that speech motor processing directly contributes to our ability to hear speech.

  17. Speech-like orofacial oscillations in stump-tailed macaque (Macaca arctoides) facial and vocal signals.

    Science.gov (United States)

    Toyoda, Aru; Maruhashi, Tamaki; Malaivijitnond, Suchinda; Koda, Hiroki

    2017-10-01

    Speech is unique to humans and characterized by facial actions of ∼5 Hz oscillations of lip, mouth or jaw movements. Lip-smacking, a facial display of primates characterized by oscillatory actions involving the vertical opening and closing of the jaw and lips, exhibits stable 5-Hz oscillation patterns, matching that of speech, suggesting that lip-smacking is a precursor of speech. We tested whether facial or vocal actions exhibiting the same oscillation rate are found across a wide range of facial and vocal displays in various social contexts, which would indicate diversity among species. We observed facial and vocal actions of wild stump-tailed macaques (Macaca arctoides), and selected video clips including facial displays (teeth chattering; TC), panting calls, and feeding. Ten open-to-open mouth durations during TC and feeding and five amplitude peak-to-peak durations in panting were analyzed. The facial display (TC) and vocalization (panting) oscillated at 5.74 ± 1.19 and 6.71 ± 2.91 Hz, respectively, similar to the reported lip-smacking of long-tailed macaques and the speech of humans. These results indicate a common mechanism for the central pattern generator underlying orofacial movements, which would later evolve into speech. Similar oscillations in panting, which evolved from different muscular control than the orofacial actions, suggest a sensory foundation for the perceptual saliency particular to 5-Hz rhythms in macaques. This supports the pre-adaptation hypothesis of speech evolution, which states that a central pattern generator for 5-Hz facial oscillation and a perceptual background tuned to 5-Hz actions existed in common ancestors of macaques and humans before the emergence of speech. © 2017 Wiley Periodicals, Inc.

  18. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and add to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation: Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures.

  19. Speech-Like and Non-Speech Lip Kinematics and Coordination in Aphasia

    Science.gov (United States)

    Bose, Arpita; van Lieshout, Pascal

    2012-01-01

    Background: In addition to the well-known linguistic processing impairments in aphasia, oro-motor skills and articulatory implementation of speech segments are reported to be compromised to some degree in most types of aphasia. Aims: This study aimed to identify differences in the characteristics and coordination of lip movements in the production…

  20. Co-speech hand movements during narrations: What is the impact of right vs. left hemisphere brain damage?

    Science.gov (United States)

    Hogrefe, Katharina; Rein, Robert; Skomroch, Harald; Lausberg, Hedda

    2016-12-01

    Persons with brain damage show deviant patterns of co-speech hand movement behaviour in comparison to healthy speakers. It has been claimed by several authors that gesture and speech rely on a single production mechanism that depends on the same neurological substrate while others claim that both modalities are closely related but separate production channels. Thus, findings so far are contradictory and there is a lack of studies that systematically analyse the full range of hand movements that accompany speech in the condition of brain damage. In the present study, we aimed to fill this gap by comparing hand movement behaviour in persons with unilateral brain damage to the left and the right hemisphere and a matched control group of healthy persons. For hand movement coding, we applied Module I of NEUROGES, an objective and reliable analysis system that enables to analyse the full repertoire of hand movements independent of speech, which makes it specifically suited for the examination of persons with aphasia. The main results of our study show a decreased use of communicative conceptual gestures in persons with damage to the right hemisphere and an increased use of these gestures in persons with left brain damage and aphasia. These results not only suggest that the production of gesture and speech do not rely on the same neurological substrate but also underline the important role of right hemisphere functioning for gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. [Clinical characteristics and speech therapy of lingua-apical articulation disorder].

    Science.gov (United States)

    Zhang, Feng-hua; Jin, Xing-ming; Zhang, Yi-wen; Wu, Hong; Jiang, Fan; Shen, Xiao-ming

    2006-03-01

    To explore the clinical characteristics and speech therapy of 62 children with lingua-apical articulation disorder. The Peabody Picture Vocabulary Test (PPVT), Gesell Developmental Scales (Gesell), Wechsler Intelligence Scale for Preschool Children (WPPSI), and a speech test were administered to 62 children aged 3 to 8 years with lingua-apical articulation disorder. PPVT was used to measure receptive vocabulary skills. Gesell and WPPSI were used to assess cognitive and non-verbal ability. The speech test was adopted to assess speech development. The children received speech therapy and auxiliary oral-motor functional training once or twice a week. First, the target sound was identified according to speech development milestones; then the method of speech localization was used to establish the correct articulation placement and manner. For children with oral motor dysfunction, it was also necessary to modify food texture and administer oral-motor functional training. The 62 cases of apical articulation disorder were classified into four groups. The combined pattern of articulation disorder was the most common (40 cases, 64.5%), followed by apico-dental disorder (15 cases, 24.2%), palatal disorder (4 cases, 6.5%), and linguo-alveolar disorder (3 cases, 4.8%). Substitution errors of velars were the most common (95.2%), followed by omission errors (30.6%) and absence of aspiration (12.9%). Oral motor dysfunction was found in some children, with problems such as disordered coordination of tongue and head movements, unstable jaw, weak tongue strength, and poor coordination of tongue movement. Some children had feeding problems, such as a preference for soft food, holding food in the mouth, eating slowly, and poor chewing. After 5 to 18 sessions of therapy, the effective rate of speech therapy reached 82.3%. Lingua-apical articulation disorders can be classified into four groups. The combined pattern of the

  2. Quantitative assessment of motor speech abnormalities in idiopathic rapid eye movement sleep behaviour disorder.

    Science.gov (United States)

    Rusz, Jan; Hlavnička, Jan; Tykalová, Tereza; Bušková, Jitka; Ulmanová, Olga; Růžička, Evžen; Šonka, Karel

    2016-03-01

    Patients with idiopathic rapid eye movement sleep behaviour disorder (RBD) are at substantial risk of developing Parkinson's disease (PD) or related neurodegenerative disorders. Speech is an important indicator of motor function and movement coordination, and therefore may be an extremely sensitive early marker of changes due to prodromal neurodegeneration. Speech data were acquired from 16 RBD subjects and 16 age- and sex-matched healthy control subjects. Objective acoustic assessment of 15 speech dimensions representing various phonatory, articulatory, and prosodic deviations was performed. Statistical models were applied to characterise speech disorders in RBD and to estimate sensitivity and specificity in differentiating between RBD and control subjects. Some form of speech impairment was revealed in 88% of RBD subjects. Articulatory deficits were the most prominent findings in RBD. In comparison to controls, the RBD group showed significant alterations in irregular alternating motion rates (p = 0.009) and articulatory decay (p = 0.01). The combination of four distinctive speech dimensions, including aperiodicity, irregular alternating motion rates, articulatory decay, and dysfluency, led to 96% sensitivity and 79% specificity in discriminating between RBD and control subjects. Speech impairment was significantly more pronounced in RBD subjects with a motor score on the Unified Parkinson's Disease Rating Scale greater than 4 points when compared to other RBD individuals. Simple quantitative speech motor measures may be suitable for the reliable detection of prodromal neurodegeneration in subjects with RBD, and therefore may provide important outcomes for future therapy trials. Copyright © 2015 Elsevier B.V. All rights reserved.
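
    The paper's exact statistical models are not reproduced here; as a generic stand-in, a leave-one-out logistic regression over the four named speech dimensions shows how such sensitivity/specificity figures are obtained (synthetic data and feature scaling are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
# Hypothetical feature matrix: 16 RBD + 16 controls x 4 speech dimensions
# (aperiodicity, irregular AMR, articulatory decay, dysfluency), standardized
X = np.vstack([rng.normal(0.6, 1.0, (16, 4)), rng.normal(-0.6, 1.0, (16, 4))])
y = np.array([1] * 16 + [0] * 16)  # 1 = RBD, 0 = control

pred = cross_val_predict(LogisticRegression(), X, y, cv=LeaveOneOut())
print("sensitivity:", (pred[y == 1] == 1).mean())  # true-positive rate
print("specificity:", (pred[y == 0] == 0).mean())  # true-negative rate
```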

  3. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
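
    A minimal sketch of the HMM-based LSS/NLSS decision, assuming frame-level acoustic features (MFCC-like vectors) and using hmmlearn as a stand-in for the authors' models: train one HMM per class and compare log-likelihoods.

```python
import numpy as np
from hmmlearn import hmm  # stand-in library; not the authors' implementation

rng = np.random.default_rng(0)
# Placeholder training data: lists of (n_frames, n_features) feature arrays
lss_seqs = [rng.normal(0.0, 1.0, (100, 13)) for _ in range(5)]
nlss_seqs = [rng.normal(1.0, 2.0, (80, 13)) for _ in range(5)]

def train(seqs, n_states=3):
    # One Gaussian-mixture HMM per class, fit on concatenated sequences
    model = hmm.GMMHMM(n_components=n_states, n_mix=4, covariance_type="diag")
    model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
    return model

lss_model, nlss_model = train(lss_seqs), train(nlss_seqs)

def classify(segment):
    # Higher log-likelihood wins; segment: (n_frames, n_features)
    return "LSS" if lss_model.score(segment) > nlss_model.score(segment) else "NLSS"

print(classify(rng.normal(1.0, 2.0, (50, 13))))  # expect "NLSS"
```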

  4. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions

  5. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  6. Levodopa effects on hand and speech movements in patients with Parkinson's disease: a FMRI study.

    Directory of Open Access Journals (Sweden)

    Audrey Maillet

    Levodopa (L-dopa) effects on the cardinal and axial symptoms of Parkinson's disease (PD) differ greatly, leading to therapeutic challenges for managing the disabilities in this patient population. In this context, we studied the cerebral networks associated with the production of a unilateral hand movement, speech production, and a task combining the two in 12 individuals with PD, both off and on levodopa (L-dopa). Unilateral hand movements in the off-medication state elicited brain activations in motor regions (primary motor cortex, supplementary motor area, premotor cortex, cerebellum), as well as additional areas (anterior cingulate, putamen, associative parietal areas); following L-dopa administration, the brain activation profile was globally reduced, highlighting activations in the parietal and posterior cingulate cortices. For the speech production task, brain activation patterns were similar with and without medication, including the orofacial primary motor cortex (M1), the primary somatosensory cortex and the cerebellar hemispheres bilaterally, as well as the left premotor, anterior cingulate and supramarginal cortices. For the combined task off L-dopa, the cerebral activation profile was restricted to the right cerebellum (hand movement), reflecting the difficulty in performing two movements simultaneously in PD. Under L-dopa, the brain activation profile of the combined task involved a larger pattern, including additional fronto-parietal activations, without reaching the sum of the areas activated during the simple hand and speech tasks separately. Our results question both the role of the basal ganglia system in speech production and the modulation of task-dependent cerebral networks by dopaminergic treatment.

  7. Enhancement of Non-Stationary Speech using Harmonic Chirp Filters

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2015-01-01

    In this paper, the issue of single-channel speech enhancement of non-stationary voiced speech is addressed. The non-stationarity of speech is well known, but state-of-the-art speech enhancement methods assume stationarity within frames of 20–30 ms. We derive optimal distortionless filters that take the non-stationary nature of voiced speech into account via linear constraints. This is facilitated by imposing a harmonic chirp model on the speech signal. As an implicit part of the filter design, the noise statistics are also estimated based on the observed signal and parameters of the harmonic chirp model. Simulations on real speech show that the chirp-based filters perform better than their harmonic counterparts. Further, it is seen that the gain of using the chirp model increases when the estimated chirp parameter is large, corresponding to periods in the signal where the instantaneous fundamental…
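
    To make the harmonic chirp model concrete, the sketch below builds the chirp basis and recovers harmonic amplitudes by least squares; the fundamental frequency and chirp rate are assumed known here, whereas the paper estimates such parameters from the observed signal.

        # Harmonic chirp model: harmonic l has instantaneous phase
        # l*(w0*n + 0.5*alpha*n^2); complex amplitudes found by least squares.
        import numpy as np

        fs, N, L = 8000, 160, 5                  # 20 ms frame, 5 harmonics
        n = np.arange(N)
        w0, alpha = 2 * np.pi * 150 / fs, 1e-6   # hypothetical fundamental and chirp rate

        phase = w0 * n + 0.5 * alpha * n**2
        Z = np.exp(1j * np.outer(phase, np.arange(1, L + 1)))  # N x L chirp basis
        x = (Z @ np.array([1, .5, .3, .2, .1])).real + 0.05 * np.random.randn(N)

        a, *_ = np.linalg.lstsq(Z, x, rcond=None)  # complex harmonic amplitudes
        print(np.abs(a))                           # recovers ~[1, .5, .3, .2, .1]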

  8. The improvement of movement and speech during rapid eye movement sleep behaviour disorder in multiple system atrophy.

    Science.gov (United States)

    De Cock, Valérie Cochen; Debs, Rachel; Oudiette, Delphine; Leu, Smaranda; Radji, Fatai; Tiberge, Michel; Yu, Huan; Bayard, Sophie; Roze, Emmanuel; Vidailhet, Marie; Dauvilliers, Yves; Rascol, Olivier; Arnulf, Isabelle

    2011-03-01

    Multiple system atrophy is an atypical parkinsonism characterized by severe motor disabilities that are poorly levodopa responsive. Most patients develop rapid eye movement sleep behaviour disorder. Because parkinsonism is absent during rapid eye movement sleep behaviour disorder in patients with Parkinson's disease, we studied the movements of patients with multiple system atrophy during rapid eye movement sleep. Forty-nine non-demented patients with multiple system atrophy and 49 patients with idiopathic Parkinson's disease were interviewed along with their 98 bed partners using a structured questionnaire. They rated the quality of movements, vocal and facial expressions during rapid eye movement sleep behaviour disorder as better than, equal to or worse than the same activities in an awake state. Sleep and movements were monitored using video-polysomnography in 22/49 patients with multiple system atrophy and in 19/49 patients with Parkinson's disease. These recordings were analysed for the presence of parkinsonism and cerebellar syndrome during rapid eye movement sleep movements. Clinical rapid eye movement sleep behaviour disorder was observed in 43/49 (88%) patients with multiple system atrophy. Reports from the 31/43 bed partners who were able to evaluate movements during sleep indicate that 81% of the patients showed some form of improvement during rapid eye movement sleep behaviour disorder. These included improved movement (73% of patients: faster, 67%; stronger, 52%; and smoother, 26%), improved speech (59% of patients: louder, 55%; more intelligible, 17%; and better articulated, 36%) and normalized facial expression (50% of patients). The rate of improvement was higher in Parkinson's disease than in multiple system atrophy, but no further difference was observed between the two forms of multiple system atrophy (predominant parkinsonism versus cerebellar syndrome). Video-monitored movements during rapid eye movement sleep in patients with multiple system

  9. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  10. Movement goals and feedback and feedforward control mechanisms in speech production.

    Science.gov (United States)

    Perkell, Joseph S

    2012-09-01

    Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences.

  11. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  12. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  13. Non-fluent speech following stroke is caused by impaired efference copy.

    Science.gov (United States)

    Feenaughty, Lynda; Basilakos, Alexandra; Bonilha, Leonardo; den Ouden, Dirk-Bart; Rorden, Chris; Stark, Brielle; Fridriksson, Julius

    2017-09-01

    Efference copy is a cognitive mechanism argued to be critical for initiating and monitoring speech; however, the extent to which breakdown of efference copy mechanisms impacts speech production is unclear. This study examined the best mechanistic predictors of non-fluent speech among 88 stroke survivors. Objective speech fluency measures were subjected to a principal component analysis (PCA). The primary PCA factor was then entered into a multiple stepwise linear regression analysis as the dependent variable, with a set of independent mechanistic variables. Participants' ability to mimic audio-visual speech ("speech entrainment response") was the best independent predictor of non-fluent speech. We suggest that this "speech entrainment" factor reflects integrity of internal monitoring (i.e., efference copy) of speech production, which affects speech initiation and maintenance. Results support models of normal speech production and suggest that therapy focused on speech initiation and maintenance may improve speech fluency for individuals with chronic non-fluent aphasia post stroke.
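
    An illustrative version of this analysis pipeline, a PCA factor of fluency measures regressed on mechanistic predictors, is sketched below; the variable contents are hypothetical and the stepwise selection is reduced to a single OLS fit.

        # Reduce several objective fluency measures to one PCA factor, then
        # regress it on candidate mechanistic predictors (e.g. an entrainment
        # score). Data are random placeholders, not the study's measurements.
        import numpy as np
        import statsmodels.api as sm
        from sklearn.decomposition import PCA

        fluency_measures = np.random.randn(88, 5)   # 88 stroke survivors
        predictors = np.random.randn(88, 3)         # entrainment, comprehension, ...

        fluency_factor = PCA(n_components=1).fit_transform(fluency_measures).ravel()
        model = sm.OLS(fluency_factor, sm.add_constant(predictors)).fit()
        print(model.summary())                      # inspect which predictors survive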

  14. A Clinician Survey of Speech and Non-Speech Characteristics of Neurogenic Stuttering

    Science.gov (United States)

    Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F.

    2008-01-01

    This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for…

  15. Non-right handed primary progressive apraxia of speech.

    Science.gov (United States)

    Botha, Hugo; Duffy, Joseph R; Whitwell, Jennifer L; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Tosakulwong, Nirubol; Senjem, Matthew L; Knopman, David S; Petersen, Ronald C; Jack, Clifford R; Lowe, Val J; Josephs, Keith A

    2018-07-15

    In recent years a large and growing body of research has greatly advanced our understanding of primary progressive apraxia of speech. Handedness has emerged as one potential marker of selective vulnerability in degenerative diseases. This study evaluated the clinical and imaging findings in non-right handed compared to right handed participants in a prospective cohort diagnosed with primary progressive apraxia of speech. A total of 30 participants were included. Compared to the expected rate in the population, there was a higher prevalence of non-right handedness among those with primary progressive apraxia of speech (6/30, 20%). Small group numbers meant that these results did not reach statistical significance, although the effect sizes were moderate-to-large. There were no clinical differences between right handed and non-right handed participants. Bilateral hypometabolism was seen in primary progressive apraxia of speech compared to controls, with non-right handed participants showing more right hemispheric involvement. This is the first report of a higher rate of non-right handedness in participants with isolated apraxia of speech, which may point to an increased vulnerability for developing this disorder among non-right handed participants. This challenges prior hypotheses about a relative protective effect of non-right handedness for tau-related neurodegeneration. We discuss potential avenues for future research to investigate the relationship between handedness and motor disorders more generally. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Oral breathing and speech disorders in children

    Directory of Open Access Journals (Sweden)

    Silvia F. Hitos

    2013-07-01

    Conclusion: Mouth breathing can affect speech development, socialization, and school performance. Early detection of mouth breathing is essential to prevent and minimize its negative effects on the overall development of individuals.

  17. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.

  18. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more "same" as opposed to "different" responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more "Baz" as opposed to "az" responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same

  19. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results

  20. Speech recovery and language plasticity can be facilitated by Sensori-Motor Fusion training in chronic non-fluent aphasia. A case report study.

    Science.gov (United States)

    Haldin, Célise; Acher, Audrey; Kauffmann, Louise; Hueber, Thomas; Cousin, Emilie; Badin, Pierre; Perrier, Pascal; Fabre, Diandra; Perennou, Dominic; Detante, Olivier; Jaillard, Assia; Lœvenbruck, Hélène; Baciu, Monica

    2017-11-17

    The rehabilitation of speech disorders benefits from providing visual information which may improve speech motor plans in patients. We tested the proof of concept of a rehabilitation method (Sensori-Motor Fusion, SMF; Ultraspeech player) in one post-stroke patient presenting chronic non-fluent aphasia. SMF allows visualisation by the patient of target tongue and lips movements using high-speed ultrasound and video imaging. This can improve the patient's awareness of his/her own lingual and labial movements, which can, in turn, improve the representation of articulatory movements and increase the ability to coordinate and combine articulatory gestures. The auditory and oro-sensory feedback received by the patient as a result of his/her own pronunciation can be integrated with the target articulatory movements they watch. Thus, this method is founded on sensorimotor integration during speech. The SMF effect on this patient was assessed through qualitative comparison of language scores and quantitative analysis of acoustic parameters measured in a speech production task, before and after rehabilitation. We also investigated cerebral patterns of language reorganisation for rhyme detection and syllable repetition, to evaluate the influence of SMF on phonological-phonetic processes. Our results showed that SMF had a beneficial effect on this patient who qualitatively improved in naming, reading, word repetition and rhyme judgment tasks. Quantitative measurements of acoustic parameters indicate that the patient's production of vowels and syllables also improved. Compared with pre-SMF, the fMRI data in the post-SMF session revealed the activation of cerebral regions related to articulatory, auditory and somatosensory processes, which were expected to be recruited by SMF. We discuss neurocognitive and linguistic mechanisms which may explain speech improvement after SMF, as well as the advantages of using this speech rehabilitation method.

  1. The use of acoustic stimulation to inspect the fetal mouth

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Keun Young; Jun, Hyun Ah; Jang, Pong Rheem; Lee, Keung Hee [Hallym University College of Medicine, Seoul (Korea, Republic of); Nagey, David A. [The Johns Hopkins University, Baltimore (United States)

    2000-12-15

    The normal neonatal response to sound stimulus consists of a generalized paroxysmal startle reflex. We recently noted an increase in fetal movements, head turning, mouth opening, tongue protrusion, cheek motion, hand to head movement and fetal eye blinking subsequent to fetal vibroacoustic stimulation. These movements are thought to represent portions of a startle response. Evaluation of the fetal face is an essential part of routine sonographic examination and of a level II examination. The complexity of the face in combination with suboptimal positioning may make it difficult to obtain adequate images of the fetal mouth. The fetal mouth is especially difficult to examine if it remains closed. It appeared to us that approximately 50% of the time, fetuses may be seen touching their face and head with their hands. This action may make evaluation of the face more difficult because of the shadowing caused by the overlying bones of the hands. We hypothesized that if vibroacoustic stimulation brings about fetal mouth movement and opening and/or withdrawal of the fetal hand from the mouth, it may facilitate anatomic evaluation for cleft lip and palate. Sonographic examination of the fetal mouth is facilitated if the mouth is open or moving. This study was designed to determine whether acoustic stimulation of the fetus would cause it to move its mouth. 109 women with uncomplicated pregnancies between 20 and 39 weeks gestation consented.

  2. The use of acoustic stimulation to inspect the fetal mouth

    International Nuclear Information System (INIS)

    Lee, Keun Young; Jun, Hyun Ah; Jang, Pong Rheem; Lee, Keung Hee; Nagey, David A.

    2000-01-01

    The normal neonatal response to sound stimulus consists of a generalized paroxysmal startle reflex. We recently noted an increase in fetal movements, head turning, mouth opening, tongue protrusion, cheek motion, hand to head movement and fetal eye blinking subsequent to fetal vibroacoustic stimulation. These movements are thought to represent portions of a startle response. Evaluation of the fetal face is an essential part of routine sonographic examination and of a level II examination. The complexity of the face in combination with suboptimal positioning may make it difficult to obtain adequate images of the fetal mouth. The fetal mouth is especially difficult to examine if it remains closed. It appeared to us that approximately 50% of the time, fetuses may be seen touching their face and head with their hands. This action may make evaluation of the face more difficult because of the shadowing caused by the overlying bones of the hands. We hypothesized that if vibroacoustic stimulation brings about fetal mouth movement and opening and/or withdrawal of the fetal hand from the mouth, it may facilitate anatomic evaluation for cleft lip and palate. Sonographic examination of the fetal mouth is facilitated if the mouth is open or moving. This study was designed to determine whether acoustic stimulation of the fetus would cause it to move its mouth. 109 women with uncomplicated pregnancies between 20 and 39 weeks gestation consented.

  3. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  4. Speech and language adverse effects after thalamotomy and deep brain stimulation in patients with movement disorders: A meta-analysis.

    Science.gov (United States)

    Alomar, Soha; King, Nicolas K K; Tam, Joseph; Bari, Ausaf A; Hamani, Clement; Lozano, Andres M

    2017-01-01

    The thalamus has been a surgical target for the treatment of various movement disorders. Commonly used therapeutic modalities include ablative and nonablative procedures. A major clinical side effect of thalamic surgery is the appearance of speech problems. This review summarizes the data on the development of speech problems after thalamic surgery. A systematic review and meta-analysis was performed using nine databases, including Medline, Web of Science, and Cochrane Library. We also checked for articles by searching citing and cited articles. We retrieved studies between 1960 and September 2014. Of a total of 2,320 patients, 19.8% (confidence interval: 14.8-25.9) had speech difficulty after thalamotomy. Speech difficulty occurred in 15% (confidence interval: 9.8-22.2) of those treated unilaterally and 40.6% (confidence interval: 29.5-52.8) of those treated bilaterally. Speech impairment was noticed 2- to 3-fold more commonly after left-sided procedures (40.7% vs. 15.2%). Of the 572 patients that underwent DBS, 19.4% (confidence interval: 13.1-27.8) experienced speech difficulty. Subgroup analysis revealed that this complication occurs in 10.2% (confidence interval: 7.4-13.9) of patients treated unilaterally and 34.6% (confidence interval: 21.6-50.4) treated bilaterally. After thalamotomy, the risk was higher in Parkinson's patients compared to patients with essential tremor: 19.8% versus 4.5% in the unilateral group and 42.5% versus 13.9% in the bilateral group. After DBS, this rate was higher in essential tremor patients. Both lesioning and stimulation thalamic surgery produce adverse effects on speech. Left-sided and bilateral procedures are approximately 3-fold more likely to cause speech difficulty. This effect was higher after thalamotomy compared to DBS. In the thalamotomy group, the risk was higher in Parkinson's patients, whereas in the DBS group it was higher in patients with essential tremor. Understanding the pathophysiology of speech
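
    For orientation, a rate such as the 19.8% overall figure can be sanity-checked with a simple proportion confidence interval; note that the intervals reported above come from a random-effects meta-analysis and are accordingly wider than this naive pooled calculation.

        # Wilson score interval for a proportion; the event count below is
        # back-calculated from the reported 19.8% of 2,320 patients.
        from math import sqrt

        def wilson_ci(events, n, z=1.96):
            p = events / n
            centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
            return centre - half, centre + half

        print(wilson_ci(459, 2320))   # ~19.8% speech difficulty after thalamotomy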

  5. No hate speech movement: Evolving genres and discourses in the European online campaign to fight discrimination and racism

    NARCIS (Netherlands)

    Zollo, S.A.; Loos, E.

    2017-01-01

    In March 2013, the Council of Europe (COE) launched the No Hate Speech Movement, a media youth campaign against hate speech in cyberspace. In this paper, we analyze a corpus collected from the COE’s website. The corpus includes web site pages designed by the COE’s campaigners, as well as materials

  6. No Hate Speech Movement : evolving genres and discourses in the European online campaign to fight discrimination and racism

    NARCIS (Netherlands)

    Zollo, S.A.; Loos, E.F.|info:eu-repo/dai/nl/078758475

    2017-01-01

    In March 2013, the Council of Europe (COE) launched the No Hate Speech Movement, a media youth campaign against hate speech in cyberspace. In this paper, we analyze a corpus collected from the COE’s website. The corpus includes web site pages designed by the COE’s campaigners, as well as materials

  7. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely… This study applies early maximum likelihood estimation (MLE) integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures…
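
    The core MLE intuition, reliability-weighted fusion of the auditory and visual estimates, can be written in a few lines; this generic cue-combination sketch is not the fitted early MLE model itself, and the numbers are invented.

        # Optimal (MLE) integration weights each modality's internal estimate
        # by its reliability (inverse variance).
        aud_est, aud_var = 0.8, 0.20   # auditory estimate on an internal phonetic axis
        vis_est, vis_var = 0.2, 0.05   # visual (mouth movement) estimate, more reliable

        w_aud = (1 / aud_var) / (1 / aud_var + 1 / vis_var)
        fused = w_aud * aud_est + (1 - w_aud) * vis_est
        fused_var = 1 / (1 / aud_var + 1 / vis_var)
        print(fused, fused_var)        # fused estimate is pulled toward the visual cue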

  8. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  9. Speech motor coordination in Dutch-speaking children with DAS studied with EMMA

    NARCIS (Netherlands)

    Nijland, L.; Maassen, B.A.M.; Hulstijn, W.; Peters, H.F.M.

    2004-01-01

    Developmental apraxia of speech (DAS) is generally classified as a 'speech motor' disorder. Direct measurement of articulatory movement is, however, virtually non-existent. In the present study we investigated the coordination between articulators in children with DAS using kinematic measurements.

  10. Acceptable noise level with Danish, Swedish, and non-semantic speech materials

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme

    2012-01-01

    Objective: Acceptable noise level (ANL) has been established as a method to quantify the acceptance of background noise while listening to speech presented at the most comfortable level. The aim of the present study was to generate Danish, Swedish, and a non-semantic version of the ANL test and investigate normal-hearing Danish and Swedish subjects' performance on these tests. Design: ANL was measured using Danish and Swedish running speech with two different noises: speech-weighted amplitude-modulated noise and multitalker speech babble. ANL was also measured using the non-semantic international speech test signal (ISTS). Results: … reported results from American studies. Generally, significant differences were seen between test conditions using different types of noise within ears in each population. Significant differences were seen for ANL across populations, also when the non-semantic ISTS was used as speech signal. Conclusions: …
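
    The ANL statistic itself is a simple difference of two adaptively measured levels; a toy computation with made-up levels:

        # ANL = most comfortable listening level (MCL) for running speech minus
        # the highest acceptable background noise level (BNL). Values invented.
        mcl_db = 60.0   # most comfortable speech level
        bnl_db = 52.0   # loudest background noise the listener will accept
        anl_db = mcl_db - bnl_db
        print(f"ANL = {anl_db:.1f} dB")   # smaller ANL = more noise accepted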

  11. Interventions for the management of dry mouth: non-pharmacological interventions.

    Science.gov (United States)

    Furness, Susan; Bryan, Gemma; McMillan, Roddy; Worthington, Helen V

    2013-08-30

    Xerostomia is the subjective sensation of dry mouth. Common causes of xerostomia include adverse effects of many commonly prescribed medications, disease (e.g. Sjogren's Syndrome) and radiotherapy treatment for head and neck cancers. Non-pharmacological techniques such as acupuncture or mild electrostimulation may be used to improve symptoms. To assess the effects of non-pharmacological interventions administered to stimulate saliva production for the relief of dry mouth. We searched the Cochrane Oral Health Group's Trials Register (to 16th April 2013), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 3), MEDLINE via OVID (1948 to 16th April 2013), EMBASE via OVID (1980 to 16th April 2013), AMED via OVID (1985 to 16th April 2013), CINAHL via EBSCO (1981 to 16th April 2013), and CANCERLIT via PubMed (1950 to 16th April 2013). The metaRegister of Controlled Clinical Trials (www.controlled-trials.com) and ClinicalTrials.gov (www.clinicaltrials.gov) were also searched to identify ongoing and completed trials. Reference lists of included studies and relevant reviews were also searched. There were no restrictions on the language of publication or publication status. We included parallel group randomised controlled trials of non-pharmacological interventions to treat dry mouth, where participants had dry mouth symptoms at baseline. At least two review authors assessed each of the included studies to confirm eligibility, assess risk of bias and extract data using a piloted data extraction form. We calculated mean difference (MD) and 95% confidence intervals (CI) for continuous outcomes or where different scales were used to assess an outcome, we calculated standardised mean differences (SMD) together with 95% CIs. We attempted to extract data on adverse effects of interventions. Where data were missing or unclear we attempted to contact study authors to obtain further information. There were nine studies (total 366

  12. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    Science.gov (United States)

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  13. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

Kensaku Miki

    2014-07-01

    In this review, we introduce three of our studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting-eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) in static face perception, activity in the right fusiform area was affected more by the inversion of features, while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features; and (2) in dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  14. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

Emily Myers

    2014-08-01

    Over the course of development, speech sounds that are contrastive in one's native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  15. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

    This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data is available, and of the form/meaning pairings that these mouth actions instantiate.

  16. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  17. The effect of mouth leak and humidification during nasal non-invasive ventilation.

    Science.gov (United States)

    Tuggey, Justin M; Delmastro, Monica; Elliott, Mark W

    2007-09-01

    Poor mask fit and mouth leak are associated with nasal symptoms and poor sleep quality in patients receiving domiciliary non-invasive ventilation (NIV) through a nasal mask. Normal subjects receiving continuous positive airways pressure demonstrate increased nasal resistance following periods of mouth leak. This study explores the effect of mouth leak during pressure-targeted nasal NIV, and whether this results in increased nasal resistance and consequently a reduction in effective ventilatory support. A randomised crossover study of 16 normal subjects was performed on separate days. Comparison was made of the effect of 5 min of mouth leak during daytime nasal NIV with and without heated humidification. Expired tidal volume (V(T)), nasal resistance (R(N)), and patient comfort were measured. The mean changes (Δ) in V(T) and R(N) following mouth leak were significantly smaller with heated humidification than without (ΔV(T) -36 ± 65 ml vs. -88 ± 50 ml), and comfort ratings also favoured heated humidification (5.3 ± 0.4 vs. 6.2 ± 0.4). In normal subjects, heated humidification during nasal NIV attenuates the adverse effects of mouth leak on effective tidal volume and nasal resistance, and improves overall comfort. Heated humidification should be considered as part of an approach to patients who are troubled with nasal symptoms, once leak has been minimised.

  18. Childhood apraxia of speech and multiple phonological disorders in Cairo-Egyptian Arabic speaking children: language, speech, and oro-motor differences.

    Science.gov (United States)

    Aziz, Azza Adel; Shohdi, Sahar; Osman, Dalia Mostafa; Habib, Emad Iskander

    2010-06-01

    Childhood apraxia of speech is a neurological childhood speech-sound disorder in which the precision and consistency of movements underlying speech are impaired in the absence of neuromuscular deficits. Children with childhood apraxia of speech and those with multiple phonological disorder share some common phonological errors that can be misleading in diagnosis. This study asked whether there are significant differences in language, speech, and non-speech oral performance among children with childhood apraxia of speech, children with multiple phonological disorder, and normal children that could be used for differential diagnosis. Thirty pre-school children between the ages of 4 and 6 years served as participants. Each of these children represented one of three possible subject groups: Group 1: multiple phonological disorder; Group 2: suspected cases of childhood apraxia of speech; Group 3: control group with no communication disorder. Assessment procedures included parent interviews, testing of non-speech oral motor skills, and testing of speech skills. Data showed that children with suspected childhood apraxia of speech had significantly lower language scores only in their expressive abilities. Non-speech tasks did not identify significant differences between the childhood apraxia of speech and multiple phonological disorder groups except for those which required two sequential motor performances. In speech tasks, both consonant and vowel accuracy were significantly lower and more inconsistent in the childhood apraxia of speech group than in the multiple phonological disorder group. Syllable number, shape and sequence accuracy differed significantly between the childhood apraxia of speech group and the other two groups. In addition, children with childhood apraxia of speech showed greater difficulty in processing prosodic features, indicating a clear need to address these variables for differential diagnosis and treatment of children with childhood apraxia of speech.

  19. Social Robotics in Therapy of Apraxia of Speech

    Directory of Open Access Journals (Sweden)

    José Carlos Castillo

    2018-01-01

    Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia of speech rehabilitation exercises. We therefore integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, have our social robot autonomously perform the different steps of the therapy using multimodal interaction.
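
    As a sketch of the vision component, mouth aperture can be estimated from facial landmarks; the snippet below assumes MediaPipe FaceMesh and inner-lip landmarks 13/14, which is one plausible implementation, not necessarily the one used in the paper.

        # Estimate whether the mouth is open from face-mesh landmarks.
        # The 0.03 threshold is an arbitrary heuristic for illustration.
        import cv2
        import mediapipe as mp

        face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)
        image = cv2.imread("patient_frame.jpg")               # hypothetical input frame
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            aperture = abs(lm[13].y - lm[14].y)               # normalized inner-lip gap
            print("mouth open" if aperture > 0.03 else "mouth closed")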

  20. Kinematic Investigation of Lingual Movement in Words of Increasing Length in Acquired Apraxia of Speech

    Science.gov (United States)

    Bartle-Meyer, Carly J.; Goozee, Justine V.; Murdoch, Bruce E.

    2009-01-01

    The current study aimed to use electromagnetic articulography (EMA) to investigate the effect of increasing word length on lingual kinematics in acquired apraxia of speech (AOS). Tongue-tip and tongue-back movement was recorded for five speakers with AOS and a concomitant aphasia (mean age = 53.6 years; SD = 12.60) during target consonant…

  1. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  2. How much does language proficiency by non-native listeners influence speech audiometric tests in noise?

    Science.gov (United States)

    Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger

    2015-01-01

    The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average a 3 dB better signal-to-noise ratio for the OLSA and a 6 dB better ratio for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. the DTT for screening, or the OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
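
    SRT-in-noise tests of this kind typically track the signal-to-noise ratio adaptively toward 50% intelligibility; the sketch below shows a generic 1-up/1-down staircase with invented step sizes and a toy psychometric function, not the exact procedures of the DTT, OLSA, or GÖSA.

        # Generic adaptive SNR tracking: lower the SNR after a correct response,
        # raise it after an error; the track converges near the 50% point.
        import random

        def run_trial(snr_db):
            # stand-in for presenting a sentence and scoring the response
            p_correct = 1 / (1 + 10 ** (-(snr_db + 7) / 2))   # toy psychometric function
            return random.random() < p_correct

        snr, track = 0.0, []
        for _ in range(30):
            snr += -2.0 if run_trial(snr) else 2.0            # 1-up/1-down, 2 dB steps
            track.append(snr)
        print("estimated SRT:", sum(track[10:]) / len(track[10:]), "dB SNR")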

  3. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  4. Evaluation of the condylar movement on MRI during maximal mouth opening in patients with internal derangement of TMJ; comparison with transcranial view

    International Nuclear Information System (INIS)

    Cho, Bong Hae

    2001-01-01

    To evaluate the condylar movement at maximal mouth opening on MRI in patients with internal derangement. MR images and transcranial views for 102 TMJs in 51 patients were taken in closed and maximal opening positions, and the amount of condylar movement was analyzed quantitatively and qualitatively. For MR images, the mean condylar movements were 9.4 mm horizontally, 4.6 mm vertically and 10.9 mm totally, while those for transcranial views were 12.5 mm, 4.6 mm, and 13.7 mm respectively. The condyle moved forward beyond the summit of the articular eminence in more TMJs (41 TMJs, 40.2%) on MR images than on transcranial views

  5. Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia.

    Science.gov (United States)

    Fridriksson, Julius; Guo, Dazhou; Fillmore, Paul; Holland, Audrey; Rorden, Chris

    2013-11-01

    Non-fluent aphasia implies a relatively straightforward neurological condition characterized by limited speech output. However, it is an umbrella term for different underlying impairments affecting speech production. Several studies have sought the critical lesion location that gives rise to non-fluent aphasia. The results have been mixed but typically implicate anterior cortical regions such as Broca's area, the left anterior insula, and deep white matter regions. To provide a clearer picture of cortical damage in non-fluent aphasia, the current study examined brain damage that negatively influences speech fluency in patients with aphasia. It controlled for some basic speech and language comprehension factors in order to better isolate the contribution of different mechanisms to fluency, or its lack. Cortical damage was related to overall speech fluency, as estimated by clinical judgements using the Western Aphasia Battery speech fluency scale, diadochokinetic rate, rudimentary auditory language comprehension, and executive functioning (scores on a matrix reasoning test) in 64 patients with chronic left hemisphere stroke. A region of interest analysis that included brain regions typically implicated in speech and language processing revealed that non-fluency in aphasia is primarily predicted by damage to the anterior segment of the left arcuate fasciculus. An improved prediction model also included the left uncinate fasciculus, a white matter tract connecting the middle and anterior temporal lobe with frontal lobe regions, including the pars triangularis. Models that controlled for diadochokinetic rate, picture-word recognition, or executive functioning also revealed a strong relationship between anterior segment involvement and speech fluency. Whole brain analyses corroborated the findings from the region of interest analyses. An additional exploratory analysis revealed that involvement of the uncinate fasciculus adjudicated between Broca's and global aphasia

  6. Music and speech distractors disrupt sensorimotor synchronization: effects of musical training.

    Science.gov (United States)

    Białuńska, Anita; Dalla Bella, Simone

    2017-12-01

    Humans display a natural tendency to move to the beat of music, more than to the rhythm of any other auditory stimulus. We typically move with music, but rarely with speech. This proclivity is apparent early during development and can be further developed over the years via joint dancing, singing, or instrument playing. Synchronization of movement to the beat can thus improve with age, but also with musical experience. In a previous study, we found that music perturbed synchronization with a metronome more than speech fragments; music superiority disappeared when distractors shared isochrony and the same meter (Dalla Bella et al., PLoS One 8(8):e71945, 2013). Here, we examined whether the interfering effect of music and speech distractors in a synchronization task is influenced by musical training. Musicians and non-musicians synchronized by producing finger force pulses to the sounds of a metronome while music and speech distractors were presented at one of various phase relationships with respect to the target. Distractors were familiar musical excerpts and fragments of children's poetry comparable in terms of beat/stress isochrony. Music perturbed synchronization with the metronome more than speech did in both groups. However, the difference in synchronization error between music and speech distractors was smaller for musicians than for non-musicians, especially at the time when the peak force of the movement was reached. These findings point to a link between musical training and timing of sensorimotor synchronization when reacting to music and speech distractors.
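
    Synchronization error in tapping paradigms of this kind is commonly summarized as the signed asynchrony between produced pulses and metronome onsets. A minimal Python sketch of that computation, using simulated onset times rather than the study's data:

    ```python
    # Hedged sketch: mean signed asynchrony between produced pulses and metronome
    # onsets as a synchronization-error measure. Onset times are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    metronome = np.arange(0.0, 10.0, 0.6)                       # onsets at a 600 ms IOI
    taps = metronome + rng.normal(-0.03, 0.02, metronome.size)  # slightly anticipatory taps

    asynchrony = taps - metronome                               # negative = tap leads the beat
    print(f"mean asynchrony: {asynchrony.mean() * 1000:.1f} ms, "
          f"SD: {asynchrony.std() * 1000:.1f} ms")
    ```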

  7. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  8. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener’s knowledge of the target and the background language modulate the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall, this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  9. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    Science.gov (United States)

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  10. Enhancement and Noise Statistics Estimation for Non-Stationary Voiced Speech

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    In this paper, single channel speech enhancement in the time domain is considered. We address the problem of modelling non-stationary speech by describing the voiced speech parts by a harmonic linear chirp model instead of using the traditional harmonic model. This means that the speech signal...... through simulations on synthetic and speech signals, that the chirp versions of the filters perform better than their harmonic counterparts in terms of output signal-to-noise ratio (SNR) and signal reduction factor. For synthetic signals, the output SNR for the harmonic chirp APES based filter...... is increased 3 dB compared to the harmonic APES based filter at an input SNR of 10 dB, and at the same time the signal reduction factor is decreased. For speech signals, the increase is 1.5 dB along with a decrease in the signal reduction factor of 0.7. As an implicit part of the APES filter, a noise...
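
    Under a harmonic linear chirp model, each voiced segment has a fundamental frequency that changes linearly over the segment, and every harmonic follows that trajectory. A minimal Python sketch of such a signal, with illustrative parameter values not taken from the paper:

    ```python
    # Hedged sketch: a voiced segment under a harmonic linear chirp model, where
    # the instantaneous f0 changes linearly, f0(t) = f0 + c*t, and each harmonic
    # follows that trajectory. Parameter values are illustrative only.
    import numpy as np

    fs = 8000                          # sample rate (Hz)
    t = np.arange(0, 0.04, 1 / fs)     # a 40 ms analysis segment
    f0, c = 150.0, 400.0               # starting f0 (Hz) and chirp rate (Hz/s)
    amps = [1.0, 0.6, 0.3]             # amplitudes of the first three harmonics

    x = sum(a * np.cos(2 * np.pi * (l + 1) * (f0 * t + 0.5 * c * t**2))
            for l, a in enumerate(amps))
    ```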

  11. Multisensory integration of speech sounds with letters vs. visual speech : only visual speech induces the mismatch negativity

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Keetels, M.N.; Vroomen, J.H.M.

    2018-01-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect.

  12. Speech pattern improvement following gingivectomy of excess palatal tissue.

    Science.gov (United States)

    Holtzclaw, Dan; Toscano, Nicholas

    2008-10-01

    Speech disruption secondary to excessive gingival tissue has received scant attention in periodontal literature. Although a few articles have addressed the causes of this condition, documentation and scientific explanation of treatment outcomes are virtually non-existent. This case report describes speech pattern improvements secondary to periodontal surgery and provides a concise review of linguistic and phonetic literature pertinent to the case. A 21-year-old white female with a history of gingival abscesses secondary to excessive palatal tissue presented for treatment. Bilateral gingivectomies of palatal tissues were performed with inverse bevel incisions extending distally from teeth #5 and #12 to the maxillary tuberosities, and large wedges of epithelium/connective tissue were excised. Within the first month of the surgery, the patient noted "changes in the manner in which her tongue contacted the roof of her mouth" and "changes in her speech." Further anecdotal investigation revealed the patient's enunciation of sounds such as "s," "sh," and "k" was greatly improved following the gingivectomy procedure. Palatometric research clearly demonstrates that the tongue has intimate contact with the lateral aspects of the posterior palate during speech. Gingival excess in this and other palatal locations has the potential to alter linguopalatal contact patterns and disrupt normal speech patterns. Surgical correction of this condition via excisional procedures may improve linguopalatal contact patterns which, in turn, may lead to improved patient speech.

  13. A dynamical model of hierarchical selection and coordination in speech planning.

    Directory of Open Access Journals (Sweden)

    Sam Tilsen

    Studies of the control of complex sequential movements have dissociated two aspects of movement planning: control over the sequential selection of movement plans, and control over the precise timing of movement execution. This distinction is particularly relevant in the production of speech: utterances contain sequentially ordered words and syllables, but articulatory movements are often executed in a non-sequential, overlapping manner with precisely coordinated relative timing. This study presents a hybrid dynamical model in which competitive activation controls selection of movement plans and coupled oscillatory systems govern coordination. The model departs from previous approaches by ascribing an important role to competitive selection of articulatory plans within a syllable. Numerical simulations show that the model reproduces a variety of speech production phenomena, such as effects of preparation and utterance composition on reaction time, and asymmetries in patterns of articulatory timing associated with onsets and codas. The model furthermore provides a unified understanding of a diverse group of phonetic and phonological phenomena which have not previously been related.
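
    The coordination component of such models is often formalized as coupled phase oscillators whose relative phase settles at a stable value. The following Python sketch simulates a generic two-oscillator system of this kind; it is a simplified stand-in, not the author's model:

    ```python
    # Hedged sketch (not the author's model): two coupled phase oscillators whose
    # relative phase settles toward a stable value, the generic mechanism behind
    # coupled-oscillator accounts of relative timing. Parameters are illustrative.
    import numpy as np

    dt, steps = 0.001, 5000
    omega = np.array([2 * np.pi * 4.0, 2 * np.pi * 4.5])  # intrinsic rates (rad/s)
    k = 5.0                                               # coupling strength
    phase = np.array([0.0, 1.5])                          # initial phases (rad)

    for _ in range(steps):
        pull = np.sin(phase[1] - phase[0])                # in-phase coupling term
        phase = phase + dt * (omega + k * np.array([pull, -pull]))

    print(f"final relative phase: {(phase[1] - phase[0]) % (2 * np.pi):.2f} rad")
    ```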

  14. Estimating feedforward vs. feedback control of speech production through kinematic analyses of unperturbed articulatory movements.

    Science.gov (United States)

    Kim, Kwang S; Max, Ludo

    2014-01-01

    To estimate the contributions of feedforward vs. feedback control systems in speech articulation, we analyzed the correspondence between initial and final kinematics in unperturbed tongue and jaw movements for consonant-vowel (CV) and vowel-consonant (VC) syllables. If movement extents and endpoints are highly predictable from early kinematic information, then the movements were most likely completed without substantial online corrections (feedforward control); if the correspondence between early kinematics and final amplitude or position is low, online adjustments may have altered the planned trajectory (feedback control) (Messier and Kalaska, 1999). Five adult speakers produced CV and VC syllables with high, mid, or low vowels while movements of the tongue and jaw were tracked electromagnetically. The correspondence between the kinematic parameters peak acceleration or peak velocity and movement extent as well as between the articulators' spatial coordinates at those kinematic landmarks and movement endpoint was examined both for movements across different target distances (i.e., across vowel height) and within target distances (i.e., within vowel height). Taken together, results suggest that jaw and tongue movements for these CV and VC syllables are mostly under feedforward control but with feedback-based contributions. One type of feedback-driven compensatory adjustment appears to regulate movement duration based on variation in peak acceleration. Results from a statistical model based on multiple regression are presented to illustrate how the relative strength of these feedback contributions can be estimated.
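
    The core of this correspondence analysis can be pictured as a trial-by-trial correlation between an early kinematic landmark (e.g., peak velocity) and final movement extent. A hedged Python sketch with simulated minimum-jerk-like trajectories, not the study's data:

    ```python
    # Hedged sketch: the correspondence analysis as a trial-by-trial correlation
    # between peak velocity (an early landmark) and final movement extent.
    # Trajectories are simulated minimum-jerk profiles, not the study's data.
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 500                                   # kinematic sampling rate (Hz)
    peak_vel, extent = [], []
    for _ in range(40):
        amp = rng.uniform(5, 15)               # target distance (mm)
        dur = 0.25 + rng.normal(0.0, 0.02)     # movement duration (s)
        tau = np.linspace(0, 1, int(fs * dur))
        pos = amp * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)  # minimum-jerk profile
        peak_vel.append(np.gradient(pos, dur / len(tau)).max())
        extent.append(pos[-1])

    r = np.corrcoef(peak_vel, extent)[0, 1]
    print(f"r(peak velocity, extent) = {r:.2f}")  # high r is consistent with feedforward control
    ```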

  15. Speech Errors in Progressive Non-Fluent Aphasia

    Science.gov (United States)

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  16. Automated analysis of connected speech reveals early biomarkers of Parkinson's disease in patients with rapid eye movement sleep behaviour disorder.

    Science.gov (United States)

    Hlavnička, Jan; Čmejla, Roman; Tykalová, Tereza; Šonka, Karel; Růžička, Evžen; Rusz, Jan

    2017-02-02

    For generations, the evaluation of speech abnormalities in neurodegenerative disorders such as Parkinson's disease (PD) has been limited to perceptual tests or user-controlled laboratory analysis based upon rather small samples of human vocalizations. Our study introduces a fully automated method that yields significant features related to respiratory deficits, dysphonia, imprecise articulation and dysrhythmia from acoustic microphone data of natural connected speech for predicting early and distinctive patterns of neurodegeneration. We compared speech recordings of 50 subjects with rapid eye movement sleep behaviour disorder (RBD), 30 newly diagnosed, untreated PD patients and 50 healthy controls, and showed that subliminal parkinsonian speech deficits can be reliably captured even in RBD patients, who are at high risk of developing PD or other synucleinopathies. Thus, automated vocal analysis should soon be able to contribute to screening and diagnostic procedures for prodromal parkinsonian neurodegeneration in natural environments.
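
    One ingredient of such automated analyses, a dysrhythmia-related timing feature, can be approximated by detecting pauses from short-time energy. The Python sketch below illustrates the idea on a synthetic signal; the paper's actual feature set is more elaborate:

    ```python
    # Hedged sketch: one timing feature in the spirit of automated dysrhythmia
    # analysis -- the fraction of silent frames (pauses) in connected speech,
    # from short-time RMS energy. Signal and threshold are illustrative only.
    import numpy as np

    fs = 16000
    rng = np.random.default_rng(2)
    speech = rng.normal(0.0, 0.3, 3 * fs)                       # 3 s of noise-like "speech"
    speech[fs:fs + fs // 2] = 0.001 * rng.normal(size=fs // 2)  # insert a 500 ms pause

    frame, hop = int(0.025 * fs), int(0.010 * fs)               # 25 ms frames, 10 ms hop
    rms = np.array([np.sqrt(np.mean(speech[i:i + frame] ** 2))
                    for i in range(0, len(speech) - frame, hop)])
    silent = rms < 0.1 * rms.max()                              # simple relative threshold
    print(f"pause ratio: {silent.mean():.2f}")                  # fraction of silent frames
    ```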

  17. From mouth to hand: gesture, speech, and the evolution of right-handedness.

    Science.gov (United States)

    Corballis, Michael C

    2003-04-01

    The strong predominance of right-handedness appears to be a uniquely human characteristic, whereas the left-cerebral dominance for vocalization occurs in many species, including frogs, birds, and mammals. Right-handedness may have arisen because of an association between manual gestures and vocalization in the evolution of language. I argue that language evolved from manual gestures, gradually incorporating vocal elements. The transition may be traced through changes in the function of Broca's area. Its homologue in monkeys has nothing to do with vocal control, but contains the so-called "mirror neurons," the code for both the production of manual reaching movements and the perception of the same movements performed by others. This system is bilateral in monkeys, but predominantly left-hemispheric in humans, and in humans is involved with vocalization as well as manual actions. There is evidence that Broca's area is enlarged on the left side in Homo habilis, suggesting that a link between gesture and vocalization may go back at least two million years, although other evidence suggests that speech may not have become fully autonomous until Homo sapiens appeared some 170,000 years ago, or perhaps even later. The removal of manual gesture as a necessary component of language may explain the rapid advance of technology, allowing late migrations of Homo sapiens from Africa to replace all other hominids in other parts of the world, including the Neanderthals in Europe and Homo erectus in Asia. Nevertheless, the long association of vocalization with manual gesture left us a legacy of right-handedness.

  18. A functional near-infrared spectroscopic investigation of speech production during reading.

    Science.gov (United States)

    Wan, Nick; Hancock, Allison S; Moon, Todd K; Gillam, Ronald B

    2018-03-01

    This study was designed to test the extent to which speaking processes related to articulation and voicing influence Functional Near Infrared Spectroscopy (fNIRS) measures of cortical hemodynamics and functional connectivity. Participants read passages in three conditions (oral reading, silent mouthing, and silent reading) while undergoing fNIRS imaging. Area under the curve (AUC) analyses of the oxygenated and deoxygenated hemodynamic response function concentration values were compared for each task across five regions of interest. There were significant region main effects for both oxy and deoxy AUC analyses, and a significant region × task interaction for deoxy AUC favoring the oral reading condition over the silent reading condition for two nonmotor regions. Assessment of functional connectivity using Granger Causality revealed stronger networks between motor areas during oral reading and stronger networks between language areas during silent reading. There was no evidence that the hemodynamic flow from motor areas during oral reading compromised measures of language-related neural activity in nonmotor areas. However, speech movements had small, but measurable effects on fNIRS measures of neural connections between motor and nonmotor brain areas across the perisylvian region, even after wavelet filtering. Therefore, researchers studying speech processes with fNIRS should use wavelet filtering during preprocessing to reduce speech motion artifacts, incorporate a nonspeech communication or language control task into the research design, and conduct a connectivity analysis to adequately assess the impact of functional speech on the hemodynamic response across the perisylvian region. © 2017 Wiley Periodicals, Inc.
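
    The AUC summary used here reduces a hemodynamic response to a single number by integrating concentration over a post-stimulus window. A minimal Python sketch on a toy response curve (real fNIRS pipelines filter and baseline-correct the signal first):

    ```python
    # Hedged sketch: an area-under-the-curve (AUC) summary of an oxygenated-
    # hemoglobin response, computed by trapezoidal integration over a
    # post-stimulus window. The response curve is a toy stand-in.
    import numpy as np

    fs = 10.0                            # fNIRS sampling rate (Hz)
    t = np.arange(0, 20, 1 / fs)         # 20 s window after stimulus onset
    hbo = t * np.exp(-t / 4.0)           # toy hemodynamic response (a.u.)

    auc = np.trapz(hbo, dx=1 / fs)       # trapezoidal AUC
    print(f"HbO AUC over 0-20 s: {auc:.2f} a.u. x s")
    ```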

  19. 49 CFR 229.9 - Movement of non-complying locomotives.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 Movement of non-complying locomotives. 229.9... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS General § 229.9 Movement of non... restrictions necessary for safely conducting the movement; (2)(i) The engineer in charge of the movement of the...

  20. Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO

    Science.gov (United States)

    Tamati, Terrin N.; Pisoni, David B.

    2015-01-01

    Background Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary

  1. Semantic and phonetic enhancements for speech-in-noise recognition by native and non-native listeners.

    Science.gov (United States)

    Bradlow, Ann R; Alexander, Jennifer A

    2007-04-01

    Previous research has shown that speech recognition differences between native and proficient non-native listeners emerge under suboptimal conditions. Current evidence has suggested that the key deficit that underlies this disproportionate effect of unfavorable listening conditions for non-native listeners is their less effective use of compensatory information at higher levels of processing to recover from information loss at the phoneme identification level. The present study investigated whether this non-native disadvantage could be overcome if enhancements at various levels of processing were presented in combination. Native and non-native listeners were presented with English sentences in which the final word varied in predictability and which were produced in either plain or clear speech. Results showed that, relative to the low-predictability-plain-speech baseline condition, non-native listener final word recognition improved only when both semantic and acoustic enhancements were available (high-predictability-clear-speech). In contrast, the native listeners benefited from each source of enhancement separately and in combination. These results suggest that native and non-native listeners apply similar strategies for speech-in-noise perception: The crucial difference is in the signal clarity required for contextual information to be effective, rather than in an inability of non-native listeners to take advantage of this contextual information per se.

  2. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    Science.gov (United States)

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
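
    Event-related band power in the high gamma range is typically obtained by band-pass filtering and squaring the analytic amplitude. A hedged Python sketch of that computation on a synthetic single-channel trace, with band edges chosen by common convention rather than taken from the paper:

    ```python
    # Hedged sketch: event-related band power (ERBP) in the high gamma band via
    # band-pass filtering plus the Hilbert envelope, expressed in dB relative to
    # a baseline. Signal is synthetic; band edges follow common conventions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000
    rng = np.random.default_rng(3)
    x = rng.normal(size=2 * fs)                      # 2 s of toy single-channel data

    b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
    power = np.abs(hilbert(filtfilt(b, a, x))) ** 2  # high-gamma power envelope

    baseline = power[:fs].mean()                     # first second as baseline
    erbp_db = 10 * np.log10(power / baseline)        # power relative to baseline (dB)
    ```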

  3. Sensorimotor oscillations prior to speech onset reflect altered motor networks in adults who stutter

    Directory of Open Access Journals (Sweden)

    Anna-Maria Mersov

    2016-09-01

    Adults who stutter (AWS) have demonstrated atypical coordination of motor and sensory regions during speech production. Yet little is known of the speech-motor network in AWS in the brief time window preceding audible speech onset. The purpose of the current study was to characterize neural oscillations in the speech-motor network during preparation for and execution of overt speech production in AWS using magnetoencephalography (MEG). Twelve AWS and twelve age-matched controls were presented with 220 words, each word embedded in a carrier phrase. Controls were presented with the same word list as their matched AWS participant. Neural oscillatory activity was localized using minimum-variance beamforming during two time periods of interest: speech preparation (prior to speech onset) and speech execution (following speech onset). Compared to controls, AWS showed stronger beta (15-25 Hz) suppression in the speech preparation stage, followed by stronger beta synchronization in the bilateral mouth motor cortex. AWS also recruited the right mouth motor cortex significantly earlier in the speech preparation stage compared to controls. Exaggerated motor preparation is discussed in the context of reduced coordination in the speech-motor network of AWS. It is further proposed that exaggerated beta synchronization may reflect a more strongly inhibited motor system that requires a stronger beta suppression to disengage prior to speech initiation. These novel findings highlight critical differences in the speech-motor network of AWS that occur prior to speech onset and emphasize the need to investigate further the speech-motor assembly in the stuttering population.

  4. LSVT LOUD and LSVT BIG: Behavioral Treatment Programs for Speech and Body Movement in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Cynthia Fox

    2012-01-01

    Recent advances in neuroscience have suggested that exercise-based behavioral treatments may improve function and possibly slow progression of motor symptoms in individuals with Parkinson disease (PD). The LSVT (Lee Silverman Voice Treatment) Programs for individuals with PD have been developed and researched over the past 20 years, beginning with a focus on the speech motor system (LSVT LOUD) and more recently extended to address limb motor systems (LSVT BIG). The unique aspects of the LSVT Programs include the combination of (a) an exclusive target on increasing amplitude (loudness in the speech motor system; bigger movements in the limb motor system), (b) a focus on sensory recalibration to help patients recognize that movements with increased amplitude are within normal limits, even if they feel “too loud” or “too big,” and (c) training self-cueing and attention to action to facilitate long-term maintenance of treatment outcomes. In addition, the intensive mode of delivery is consistent with principles that drive activity-dependent neuroplasticity and motor learning. The purpose of this paper is to provide an integrative discussion of the LSVT Programs including the rationale for their fundamentals, a summary of efficacy data, and a discussion of limitations and future directions for research.

  5. Development of a Low-Cost, Noninvasive, Portable Visual Speech Recognition Program.

    Science.gov (United States)

    Kohlberg, Gavriel D; Gal, Ya'akov Kobi; Lalwani, Anil K

    2016-09-01

    Loss of speech following tracheostomy and laryngectomy severely limits communication to simple gestures and facial expressions that are largely ineffective. To facilitate communication in these patients, we seek to develop a low-cost, noninvasive, portable, and simple visual speech recognition program (VSRP) to convert articulatory facial movements into speech. A Microsoft Kinect-based VSRP was developed to capture spatial coordinates of lip movements and translate them into speech. The articulatory speech movements associated with 12 sentences were used to train an artificial neural network classifier. The accuracy of the classifier was then evaluated on a separate, previously unseen set of articulatory speech movements. The VSRP was successfully implemented and tested in 5 subjects. It achieved an accuracy rate of 77.2% (65.0%-87.6% for the 5 speakers) on a 12-sentence data set. The mean time to classify an individual sentence was 2.03 milliseconds (1.91-2.16). We have demonstrated the feasibility of a low-cost, noninvasive, portable VSRP based on Kinect to accurately predict speech from articulation movements in clinically trivial time. This VSRP could be used as a novel communication device for aphonic patients. © The Author(s) 2016.
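
    The pipeline described here amounts to training a classifier on flattened lip-coordinate trajectories. A minimal Python sketch of that stage using scikit-learn, with random stand-in features (the study's Kinect capture and network details are not reproduced):

    ```python
    # Hedged sketch: the classification stage of such a system -- a small neural
    # network over flattened lip-coordinate trajectories. Features are random
    # stand-ins; the study's Kinect capture and network details are not reproduced.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    n_trials, n_features, n_sentences = 240, 300, 12
    X = rng.normal(size=(n_trials, n_features))   # flattened lip trajectories
    y = rng.integers(0, n_sentences, n_trials)    # sentence labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # near chance on random data
    ```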

  6. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers.

    Science.gov (United States)

    Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia

    2018-01-17

    During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  8. Clinical evidence of the role of the cerebellum in the suppression of overt articulatory movements during reading. A study of reading in children and adolescents treated for cerebellar pilocytic astrocytoma.

    Science.gov (United States)

    Ait Khelifa-Gallois, N; Puget, S; Longaud, A; Laroussinie, F; Soria, C; Sainte-Rose, C; Dellatolas, G

    2015-04-01

    It has been suggested that the cerebellum is involved in reading acquisition and in particular in the progression from automatic grapheme-phoneme conversion to the internalization of speech required for silent reading. This idea is in line with clinical and neuroimaging data showing a cerebellar role in subvocal rehearsal for printed verbalizable material and with computational "internal models" of the cerebellum suggesting its role in inner speech (i.e. covert speech without mouthing the words). However, studies examining a possible cerebellar role in the suppression of articulatory movements during silent reading acquisition in children are lacking. Here, we report clinical evidence that the cerebellum plays a part in this transition. Reading performances were compared between a group of 17 paediatric patients treated for benign cerebellar tumours and a group of controls matched for age, gender, and parental socio-educational level. The patients scored significantly lower on all reading measures, but the most striking difference concerned silent reading, perfectly acquired by almost all controls, contrasting with 41% of the patients who were unable to read any item silently. Silent reading was correlated with the Working Memory Index. The present findings converge with previous reports on an implication of the cerebellum in inner speech and in the automatization of reading. This cerebellar implication is probably not specific to reading, as it also seems to affect non-reading tasks such as counting.

  9. Speech-associated gestures, Broca’s area, and the human mirror system

    Science.gov (United States)

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001
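
    Functional connectivity analyses of this kind start from pairwise statistical dependence between regional time series. As a simplified stand-in for the study's network analysis, the following Python sketch computes correlations between invented ROI time series:

    ```python
    # Hedged sketch: a simplified stand-in for a functional connectivity analysis,
    # using pairwise correlation between ROI time series (the study itself used a
    # more elaborate network analysis). Time series and ROI names are invented.
    import numpy as np

    rng = np.random.default_rng(5)
    rois = ["Broca", "STG_L", "STG_R", "Motor"]
    ts = rng.normal(size=(len(rois), 200))        # 200 time points per ROI

    fc = np.corrcoef(ts)                          # correlation-based connectivity
    for i in range(len(rois)):
        for j in range(i + 1, len(rois)):
            print(f"{rois[i]:>6} - {rois[j]:<6} r = {fc[i, j]:+.2f}")
    ```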

  10. Non-proliferation and nuclear disarmament: speech of the president Obama at Prague

    International Nuclear Information System (INIS)

    Hautecouverture, B.

    2009-01-01

    Introduced in the Prague speech of April 7, 2009, President Obama's program for non-proliferation and nuclear disarmament stood out for its optimism, ambition, and determination. A more detailed reading, however, reveals competing positions. The author analyzes the political aspects of the President's speech. (A.L.B.)

  11. End-to-end visual speech recognition with LSTMs

    NARCIS (Netherlands)

    Petridis, Stavros; Li, Zuwei; Pantic, Maja

    2017-01-01

    Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on
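
    A minimal end-to-end recognizer in this spirit feeds per-frame mouth features to an LSTM and classifies from its final hidden state. The following Python (PyTorch) sketch is illustrative only; dimensions and the word inventory are assumptions, and real end-to-end systems learn the features from pixels:

    ```python
    # Hedged sketch: a minimal LSTM recognizer over per-frame mouth features,
    # classifying a clip from the final hidden state. Dimensions and the word
    # inventory are assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class LipReader(nn.Module):
        def __init__(self, feat_dim=64, hidden=128, n_words=10):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_words)

        def forward(self, x):              # x: (batch, frames, feat_dim)
            _, (h, _) = self.lstm(x)       # h: (num_layers, batch, hidden)
            return self.out(h[-1])         # logits over the word inventory

    model = LipReader()
    clips = torch.randn(8, 30, 64)         # 8 clips of 30 frames each
    logits = model(clips)                  # shape (8, 10)
    ```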

  12. 49 CFR 230.12 - Movement of non-complying steam locomotives.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 Movement of non-complying steam locomotives. 230... General General Inspection Requirements § 230.12 Movement of non-complying steam locomotives. (a) General limitations on movement. A steam locomotive with one or more non-complying conditions may be moved only as a...

  13. Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia

    OpenAIRE

    Fridriksson, Julius; Guo, Dazhou; Fillmore, Paul; Holland, Audrey; Rorden, Chris

    2013-01-01

    Non-fluent aphasia implies a relatively straightforward neurological condition characterized by limited speech output. However, it is an umbrella term for different underlying impairments affecting speech production. Several studies have sought the critical lesion location that gives rise to non-fluent aphasia. The results have been mixed but typically implicate anterior cortical regions such as Broca’s area, the left anterior insula, and deep white matter regions. To provide a clearer pictur...

  14. Religion, hate speech, and non-domination

    OpenAIRE

    Bonotti, Matteo

    2017-01-01

    In this paper I argue that one way of explaining what is wrong with hate speech is by critically assessing what kind of freedom free speech involves and, relatedly, what kind of freedom hate speech undermines. More specifically, I argue that the main arguments for freedom of speech (e.g. from truth, from autonomy, and from democracy) rely on a “positive” conception of freedom intended as autonomy and self-mastery (Berlin, 2006), and can only partially help us to understand what is wrong with ...

  15. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception.

    Science.gov (United States)

    Buchan, Julie N; Paré, Martin; Munhall, Kevin G

    2008-11-25

    During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  16. Analyzing non-respiratory movements of the chest: methods and devices

    Science.gov (United States)

    Pariaszewska, Katarzyna; Młyńczak, Marcel; Cybulski, Gerard

    2015-09-01

    Respiration is the main cause of chest movements. However, there are also non-respiratory ones, resulting from e.g. snoring, wheezing, stridor, throat clearing or coughing. They may occur sporadically, but should be examined when their incidence increases. Detecting non-respiratory movements is very important, because many of them are symptoms of respiratory diseases such as asthma, chronic obstructive pulmonary disease (COPD) or lung cancer. Assessment of the presence of non-respiratory movements could be an important element of effective diagnosis. It is also necessary to provide quantitative and objective results for intra-subject studies. Most of these events generate vibroacoustic signals that contain components of sound and vibrations. This work provides a review of the solutions and devices for monitoring non-respiratory movements, primarily considering the accuracy of detecting and distinguishing chest movements.

  17. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Science.gov (United States)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. From the Korean-spoken English speech recognition experiments, it is shown that ASR systems employing the state-tying and triphone-modeling level adaptation methods can relatively reduce the average word error rates (WERs) by 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.
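
    The pronunciation-model side of such adaptation can be pictured as expanding the recognizer's lexicon with variant pronunciations generated by substitution rules. A hedged Python sketch with invented rules and phone symbols, loosely evoking Korean-accented English:

    ```python
    # Hedged sketch: expanding a pronunciation dictionary with variant
    # pronunciations generated by substitution rules, one ingredient of the
    # adaptation described above. Rules and phone symbols are invented.
    def expand_lexicon(lexicon, rules):
        """Add a variant pronunciation for every substitution rule that applies."""
        expanded = {word: list(prons) for word, prons in lexicon.items()}
        for word, prons in lexicon.items():
            for pron in prons:
                for src, dst in rules:
                    if src in pron:
                        variant = [dst if p == src else p for p in pron]
                        if variant not in expanded[word]:
                            expanded[word].append(variant)
        return expanded

    lexicon = {"very": [["V", "EH", "R", "IY"]]}
    rules = [("V", "B"), ("R", "L")]       # invented L1-influenced substitutions
    print(expand_lexicon(lexicon, rules))
    ```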

  18. Limited evidence for non-pharmacological interventions for the relief of dry mouth.

    Science.gov (United States)

    Bakarman, Eman O; Keenan, Analia Veitz

    2014-03-01

    The Cochrane Oral Health Group's Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Medline, Embase, AMED, CINAHL and CANCERLIT databases were searched. The metaRegister of Controlled Clinical Trials and ClinicalTrials.gov were also searched to identify ongoing and completed trials. Reference lists of included studies and relevant reviews were also searched. There were no restrictions on the language of publication or publication status. Randomised controlled trials of non-pharmacological treatments for patients with dry mouth at baseline. Study assessment and data extraction were carried out independently by at least two reviewers. Mean difference (MD) and standardised mean differences (SMD) together with 95% CIs were calculated where appropriate. Nine studies (366 participants) were included in this review; eight were assessed at high risk of bias and one at unclear risk of bias. Five small studies (153 participants), with dry mouth following radiotherapy treatment, compared acupuncture with placebo. Four were at high risk and one at unclear risk of bias. Two trials reported outcome data for dry mouth in a form suitable for meta-analysis. The pooled estimate of these two trials (70 participants, low quality evidence) showed no difference between acupuncture and control in dry mouth symptoms (SMD -0.34, 95% CI -0.81 to 0.14, P value 0.17, I2 = 39%) with the confidence intervals including both a possible reduction or a possible increase in dry mouth symptoms. Acupuncture was associated with more adverse effects (tiny bruises and tiredness which were mild and temporary). There was a very small increase in unstimulated whole saliva (UWS) at the end of four to six weeks of treatment (three trials, 71 participants, low quality evidence) (MD 0.02 ml/minute, 95% CI 0 to 0.04, P value 0.04, I2 = 57%), and this benefit persisted at the 12-month follow-up evaluation (two trials, 54 participants, low quality evidence) (UWS, MD 0.06 ml/minute, 95
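
    The pooled estimates quoted above come from inverse-variance meta-analysis, in which each trial is weighted by the reciprocal of its squared standard error. A Python sketch of the fixed-effect computation with illustrative numbers, not the review's data:

    ```python
    # Hedged sketch: fixed-effect inverse-variance pooling, the computation behind
    # pooled mean differences like those quoted above. The per-trial numbers are
    # illustrative, not the review's data.
    import numpy as np

    md = np.array([0.015, 0.025])      # per-trial mean differences (ml/minute)
    se = np.array([0.012, 0.015])      # per-trial standard errors

    w = 1 / se**2                      # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled MD = {pooled:.3f} ml/minute (95% CI {lo:.3f} to {hi:.3f})")
    ```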

  19. Co-speech gestures influence neural activity in brain regions associated with processing semantic information.

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Hasson, Uri; Skipper, Jeremy I; Small, Steven L

    2009-11-01

    Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.

  20. Characterization of the voice of children with mouth breathing caused by four different etiologies using perceptual and acoustic analyses

    Directory of Open Access Journals (Sweden)

    Rosana Tiepo Arévalo

    2005-09-01

    Objective: To describe vocal characteristics in children aged five to twelve years with mouth breathing caused by four etiologies: chronic rhinitis, hypertrophy, hypertrophy + chronic rhinitis and functional condition, using perceptual evaluation and acoustic analysis. Methods: Voice recordings of 120 mouth breathers were judged by four speech pathologists using the software Multi-Speech. Results: The perceptual evaluation of the voice revealed a high incidence of breathy and hoarse voices, especially in the rhinitis group. Most cases were moderate, with low pitch and normal loudness. Hyponasality was found in over 50% of the sample, as expected, but we also found a high occurrence of laryngeal resonance, especially in the rhinitis group. Mean fundamental frequency was 24.81 Hz, SD = 15.02; jitter = 2.17; shimmer = 0.44, and HNR = 2.11. Values did not show a statistically significant difference among the groups. Conclusion: Perceptual evaluation of the voice revealed that most mouth breathers presented hoarse and breathy voice, low pitch, normal loudness and hyponasal and laryngeal resonance. However, the acoustic analysis did not show any significant differences among the groups.

  1. DAKWAH MOVEMENT FOR MUSLIM MINORITY IN KUPANG, EAST NUSA TENGGARA AND ITS ANTICIPATION FROM HATE SPEECH

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin Eko Putro

    2018-11-01

    Islamic proselytizing, or dakwah, persists in Kupang, East Nusa Tenggara, to this day. It targets Muslims exclusively; proselytizing to non-Muslims is not feasible because Muslims are a small minority in the city. In practice, religious teaching does not use loudspeakers except for the azan call and the iqamat. Dakwah activists in Kupang generally try to forestall any hate speech that might be voiced by Muslim clerics. In addition, mosque managements operate a local mechanism for keeping hate speech out of sermons: they issue a set of preaching guidelines and send it to the preacher several days in advance. This qualitative study examines how Islamic preaching deals with the phenomenon of hate speech in Kupang, where Muslims are a numerical minority. Data were gathered through in-depth interviews, observation, and a literature study.

  2. The Relationship Between Apraxia of Speech and Oral Apraxia: Association or Dissociation?

    Science.gov (United States)

    Whiteside, Sandra P; Dyson, Lucy; Cowell, Patricia E; Varley, Rosemary A

    2015-11-01

    Acquired apraxia of speech (AOS) is a motor speech disorder that affects the implementation of articulatory gestures and the fluency and intelligibility of speech. Oral apraxia (OA) is an impairment of nonspeech volitional movement. Although many speakers with AOS also display difficulties with volitional nonspeech oral movements, the relationship between the 2 conditions is unclear. This study explored the relationship between speech and volitional nonspeech oral movement impairment in a sample of 50 participants with AOS. We examined levels of association and dissociation between speech and OA using a battery of nonspeech oromotor, speech, and auditory/aphasia tasks. There was evidence of a moderate positive association between the 2 impairments across participants. However, individual profiles revealed patterns of dissociation between the 2 in a few cases, with evidence of double dissociation of speech and oral apraxic impairment. We discuss the implications of these relationships for models of oral motor and speech control. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.
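
    As an illustration of the mediation logic in this abstract (musical ability, mediated by acoustic sensitivity, predicting non-native discrimination), a bootstrap mediation test can be run in a few lines; the sketch below uses pingouin, and the data file and column names are hypothetical stand-ins for the study's measures.

```python
# Hedged sketch of a bootstrap mediation analysis with pingouin.
# The CSV file and column names are hypothetical, one row per listener.
import pandas as pd
import pingouin as pg

df = pd.read_csv("participants.csv")

# x: musical ability (e.g., AMMA score), m: pitch sensitivity,
# y: non-native vowel discrimination accuracy
result = pg.mediation_analysis(data=df, x="musical_ability",
                               m="pitch_sensitivity",
                               y="vowel_discrimination",
                               n_boot=1000, seed=42)
print(result)  # includes the indirect (mediated) effect and its bootstrap CI
```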

  4. The Non-Native English Speaker Teachers in TESOL Movement

    Science.gov (United States)

    Kamhi-Stein, Lía D.

    2016-01-01

    It has been almost 20 years since what is known as the non-native English-speaking (NNES) professionals' movement--designed to increase the status of NNES professionals--started within the US-based TESOL International Association. However, still missing from the literature is an understanding of what a movement is, and why non-native English…

  5. The effect of Phonological Encoding Complexity on Speech Fluency of Stuttering and Non-Stuttering Children

    Directory of Open Access Journals (Sweden)

    Sara Ramezani

    2012-01-01

    Full Text Available Objective: Stuttering is a fairly common speech disorder; however, its etiology is poorly understood and is likely to be heterogeneous. The aim of this research was to investigate the effect of phonological encoding complexity on speech fluency in 6-9-year-old stuttering children in comparison with non-stutterers in Tehran. Materials & Methods: This cross-sectional, descriptive-analytic study was performed on 18 children with severe to profound stuttering and 18 non-stuttering children. The stuttering subjects were selected by convenience sampling, and normal subjects were matched to them by gender, age, and geographic area. A non-word test comprising 87 non-words was used to investigate the effects of phonological encoding and phonological complexity on speech fluency. Stimuli were presented in random order, with approximately 5 seconds between items, using a Toshiba computer via external SOMIC SM-818 headphones, and subjects were asked to repeat them. Results: The results indicated that speech fluency decreased significantly (P<0.05) with increasing phonological complexity compared to controls. Conclusion: The findings of the present research suggest that stuttering children may have deficits in phonological encoding, and that the deficit increases with phonological encoding complexity. Based on the covert repair hypothesis, phonological difficulty may cause covert self-repair and lead to different patterns of stuttering.

  6. Orthography and Modality Influence Speech Production in Adults and Children.

    Science.gov (United States)

    Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P

    2016-12-01

    The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Children (n = 17), adults with typical reading skills (n = 17), and adults demonstrating low reading proficiency (n = 18) repeated or read aloud nonwords varying in orthographic transparency. Analyses of implicit linguistic processing (segmental accuracy and speech movement stability) were conducted. The accuracy and articulatory stability of productions of the nonwords were assessed before and after repetition or reading. Segmental accuracy results indicate that all 3 groups demonstrated greater learning when they were able to read, rather than just hear, the nonwords. Speech movement results indicate that, for adults with poor reading skills, exposure to the nonwords in a transparent spelling reduces the articulatory variability of speech production. Reading skill was correlated with speech movement stability in the groups of adults. In children and adults, orthography interacts with speech production; all participants integrate orthography into their lexical representations. Adults with poor reading skills do not use the same reading or speaking strategies as children with typical reading skills.

  7. Common cues to emotion in the dynamic facial expressions of speech and song.

    Science.gov (United States)

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, while the two were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, while also revealing differences in perception and acoustic-motor production.

  8. Prosodic influences on speech production in children with specific language impairment and speech deficits: kinematic, acoustic, and transcription evidence.

    Science.gov (United States)

    Goffman, L

    1999-12-01

    It is often hypothesized that young children's difficulties with producing weak-strong (iambic) prosodic forms arise from perceptual or linguistically based production factors. A third possible contributor to errors in the iambic form may be biological constraints, or biases, of the motor system. In the present study, 7 children with specific language impairment (SLI) and speech deficits were matched to same-age peers. Multiple levels of analysis, including kinematic (modulation and stability of movement), acoustic, and transcription, were applied to children's productions of iambic (weak-strong) and trochaic (strong-weak) prosodic forms. Findings suggest that a motor bias toward producing unmodulated rhythmic articulatory movements, similar to that observed in canonical babbling, contributes to children's acquisition of metrical forms. Children with SLI and speech deficits show less mature segmental and speech motor systems, as well as decreased modulation of movement in later-developing iambic forms. Further, components of prosodic and segmental acquisition develop independently and at different rates.

  9. Lexical effects on speech production and intelligibility in Parkinson's disease

    Science.gov (United States)

    Chiu, Yi-Fang

    Individuals with Parkinson's disease (PD) often have speech deficits that lead to reduced speech intelligibility. Previous research provides a rich database regarding the articulatory deficits associated with PD including restricted vowel space (Skodda, Visser, & Schlegel, 2011) and flatter formant transitions (Tjaden & Wilding, 2004; Walsh & Smith, 2012). However, few studies consider the effect of higher level structural variables of word usage frequency and the number of similar sounding words (i.e. neighborhood density) on lower level articulation or on listeners' perception of dysarthric speech. The purpose of the study is to examine the interaction of lexical properties and speech articulation as measured acoustically in speakers with PD and healthy controls (HC) and the effect of lexical properties on the perception of their speech. Individuals diagnosed with PD and age-matched healthy controls read sentences with words that varied in word frequency and neighborhood density. Acoustic analysis was performed to compare second formant transitions in diphthongs, an indicator of the dynamics of tongue movement during speech production, across different lexical characteristics. Young listeners transcribed the spoken sentences and the transcription accuracy was compared across lexical conditions. The acoustic results indicate that both PD and HC speakers adjusted their articulation based on lexical properties but the PD group had significant reductions in second formant transitions compared to HC. Both groups of speakers increased second formant transitions for words with low frequency and low density, but the lexical effect is diphthong dependent. The change in second formant slope was limited in the PD group when the required formant movement for the diphthong is small. The data from listeners' perception of the speech by PD and HC show that listeners identified high frequency words with greater accuracy suggesting the use of lexical knowledge during the

  10. Evolution of non-speech sound memory in postlingual deafness: implications for cochlear implant rehabilitation.

    Science.gov (United States)

    Lazard, D S; Giraud, A L; Truy, E; Lee, H J

    2011-07-01

    Neurofunctional patterns assessed before or after cochlear implantation (CI) are informative markers of implantation outcome. Because phonological memory reorganization in post-lingual deafness is predictive of the outcome, we investigated, using a cross-sectional approach, whether memory of non-speech sounds (NSS) produced by animals or objects (i.e. non-human sounds) is also reorganized, and how this relates to speech perception after CI. We used an fMRI auditory imagery task in which sounds were evoked by pictures of noisy items for post-lingual deaf candidates for CI and for normal-hearing subjects. When deaf subjects imagined sounds, the left inferior frontal gyrus, the right posterior temporal gyrus and the right amygdala were less activated compared to controls. Activity levels in these regions decreased with duration of auditory deprivation, indicating declining NSS representations. Whole brain correlations with duration of auditory deprivation and with speech scores after CI showed an activity decline in dorsal, fronto-parietal, cortical regions, and an activity increase in ventral cortical regions, the right anterior temporal pole and the hippocampal gyrus. Both dorsal and ventral reorganizations predicted poor speech perception outcome after CI. These results suggest that post-CI speech perception relies, at least partially, on the integrity of a neural system used for processing NSS that is based on audio-visual and articulatory mapping processes. When this neural system is reorganized, post-lingual deaf subjects resort to inefficient semantic- and memory-based strategies. These results complement those of other studies on speech processing, suggesting that both speech and NSS representations need to be maintained during deafness to ensure the success of CI. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings in mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They rated their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  12. Treating speech subsystems in childhood apraxia of speech with tactual input: the PROMPT approach.

    Science.gov (United States)

    Dale, Philip S; Hayden, Deborah A

    2013-11-01

    Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT; Hayden, 2004; Hayden, Eigen, Walker, & Olsen, 2010), a treatment approach for the improvement of speech sound disorders in children, uses tactile-kinesthetic-proprioceptive (TKP) cues to support and shape movements of the oral articulators. No research to date has systematically examined the efficacy of PROMPT for children with childhood apraxia of speech (CAS). Four children (ages 3;6 [years;months] to 4;8), all meeting the American Speech-Language-Hearing Association (2007) criteria for CAS, were treated using PROMPT. All children received 8 weeks of twice-weekly treatment, including at least 4 weeks of full PROMPT treatment that included TKP cues. During the first 4 weeks, 2 of the 4 children received treatment that included all PROMPT components except TKP cues. This design permitted both between-subjects and within-subjects comparisons to evaluate the effect of TKP cues. Gains in treatment were measured by standardized tests and by criterion-referenced measures based on the production of untreated probe words, reflecting change in speech movements and auditory perceptual accuracy. All 4 children made significant gains during treatment, but measures of motor speech control and untreated word probes provided evidence for more gain when TKP cues were included. PROMPT as a whole appears to be effective for treating children with CAS, and the inclusion of TKP cues appears to facilitate greater effect.

  13. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has focused primarily on auditory speech alone, generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  14. Non-native Speech Learning in Older Adults.

    Science.gov (United States)

    Ingvalson, Erin M; Nowicki, Casandra; Zong, Audrey; Wong, Patrick C M

    2017-01-01

    Though there is an extensive literature investigating the ability of younger adults to learn non-native phonology, including investigations into individual differences in younger adults' lexical tone learning, very little is known about older adults' ability to learn non-native phonology, including lexical tone. There are several reasons to suspect that older adults would use different learning mechanisms when learning lexical tone than younger adults, including poorer perception of dynamic pitch, greater reliance on working memory capacity in second language learning, and poorer category learning in older adulthood. The present study examined the relationships among older adults' baseline sensitivity for pitch patterns, working memory capacity, and declarative memory capacity with their ability to learn to associate tone with lexical meaning. In older adults, baseline pitch pattern sensitivity was not associated with generalization performance. Rather, older adults' learning performance was best predicted by declarative memory capacity. These data suggest that training paradigms will need to be modified to optimize older adults' non-native speech sound learning success.

  15. Inter- and intra-rater reliability of 3D kinematics during maximum mouth opening of asymptomatic subjects.

    Science.gov (United States)

    Calixtre, Leticia Bojikian; Nakagawa, Theresa Helissa; Alburquerque-Sendín, Francisco; da Silva Grüninger, Bruno Leonardo; de Sena Rosa, Lianna Ramalho; Oliveira, Ana Beatriz

    2017-11-07

    Previous studies evaluated 3D human jaw movements using kinematic analysis systems during mouth opening, but information on the reliability of such measurements is still scarce. The purpose of this study was to analyze within- and between-session reliabilities, inter-rater reliability, standard error of measurement (SEM), minimum detectable change (MDC) and consistency of agreement across raters and sessions of 3D kinematic variables during maximum mouth opening (MMO). Thirty-six asymptomatic subjects of both genders were evaluated on two different days, five to seven days apart. Subjects performed three MMO movements while kinematic data were collected. Intraclass correlation coefficient (ICC), SEM and MDC were calculated for all variables, and Bland-Altman plots were constructed. Jaw radius and width were the most reproducible variables (ICC>0.81) and demonstrated minor error. Incisor displacement during MMO and angular movements in the sagittal plane presented good reliability (ICC from 0.61 to 0.8) and small errors and, consequently, could be used in future studies with the same methodology and population. The variables with smaller amplitudes were less reliable (ICC<0.61), in particular condylar translations during mouth opening and closing and mandibular movements in the frontal and transversal planes. Copyright © 2017 Elsevier Ltd. All rights reserved.
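
    For readers unfamiliar with how SEM and MDC follow from the ICC, the sketch below shows one common formulation in Python; the data file, its layout, and the choice of ICC(2,1) are assumptions for illustration, not the study's exact analysis.

```python
# Hedged sketch of ICC, SEM, and MDC for a test-retest design.
# Assumes a long-format CSV with columns: subject, session, mmo_mm.
import numpy as np
import pandas as pd
import pingouin as pg

df = pd.read_csv("mmo_sessions.csv")  # hypothetical data file

icc_table = pg.intraclass_corr(data=df, targets="subject",
                               raters="session", ratings="mmo_mm")
icc2 = icc_table.set_index("Type").loc["ICC2", "ICC"]  # two-way random, agreement

sd = df["mmo_mm"].std(ddof=1)       # pooled SD across all observations
sem = sd * np.sqrt(1 - icc2)        # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem     # minimum detectable change at 95% confidence

print(f"ICC = {icc2:.2f}, SEM = {sem:.2f} mm, MDC95 = {mdc95:.2f} mm")
```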

  16. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.
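
    To make the N1/P2 peak analysis concrete, here is a hedged MNE-Python sketch of a latency comparison between auditory-only and audiovisual conditions; the epochs file, condition names, and electrode are hypothetical stand-ins, not details from the study.

```python
# Hedged sketch: compare N1 peak latency across two conditions with MNE-Python.
# The epochs file and event names are hypothetical.
import mne

epochs = mne.read_epochs("sub01-epo.fif")  # preprocessed, epoched EEG

for cond in ("auditory_only", "audiovisual"):
    evoked = epochs[cond].average()
    # N1: most negative deflection roughly 70-150 ms after sound onset
    ch, latency = evoked.copy().pick(["Cz"]).get_peak(tmin=0.07, tmax=0.15,
                                                      mode="neg")
    print(f"{cond}: N1 at {ch}, {latency * 1e3:.0f} ms")
```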

  17. SPEECH ACT ANALYSIS: HOSNI MUBARAK'S SPEECHES IN PRE ...

    African Journals Online (AJOL)

    from movements of certain organs with his (man's) throat and mouth… By means ... In other words, government engages language; and how this affects the ... address the audience in a social gathering in order to have a new dawn. … Agbedo, C. U. Speech Act Analysis of Political discourse in the Nigerian Print Media in.

  18. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  19. Determination of mandibular border and functional movement protocols using an electromagnetic articulograph (EMA).

    Science.gov (United States)

    Fuentes, Ramon; Navarro, Pablo; Curiqueo, Aldo; Ottone, Nicolas E

    2015-01-01

    The electromagnetic articulograph (EMA) is a device that can collect movement data by positioning sensors at multiple points, measuring displacements of the structure in real time, as well as the acoustics and mechanics of speech using a microphone connected to the measurement system. The aim of this study is to describe protocols for the generation, measurement and visualization of mandibular border and functional movements in the three spatial planes (frontal, sagittal and horizontal) using the EMA. The EMA has transmitter coils that generate magnetic fields to collect information about movements from sensors located on different structures (tongue, palate, mouth, incisors, skin, etc.) and in every direction within an area of 300 mm. After measurement with the EMA, the information is transferred to a computer and read with the Visartico software to visualize the recording of the mandibular movements registered by the EMA. The sensors placed in the space between the three axes XYZ are observed, and then the plots created from the mandibular movements included in the corresponding protocol can be visualized, enabling interpretation of these data. Four protocols were defined and developed for obtaining images of mandibular opening and closing movements, as well as border movements in the frontal, sagittal and horizontal planes, managing to accurately reproduce Posselt's diagram and the Gothic arch in the latter two planes. Measurements with the EMA will allow more exact data to be collected on mandibular clinical physiology and morphology, which will permit more accurate diagnoses and the application of more precise and better-adjusted treatments in the future.
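
    The three-plane visualization described here is easy to sketch once the sensor trajectory is exported; the following matplotlib example is illustrative only, and the CSV file and its N x 3 (mm) layout are assumptions rather than the Visartico workflow itself.

```python
# Hedged sketch: plot one jaw-sensor trajectory in the three spatial planes.
# Assumes a hypothetical CSV of N rows of x, y, z positions in mm.
import numpy as np
import matplotlib.pyplot as plt

xyz = np.loadtxt("jaw_sensor.csv", delimiter=",")
x, y, z = xyz.T

planes = [("Frontal (x-z)", x, z), ("Sagittal (y-z)", y, z),
          ("Horizontal (x-y)", x, y)]
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (title, a, b) in zip(axes, planes):
    ax.plot(a, b, lw=0.8)
    ax.set_title(title)
    ax.set_aspect("equal")
fig.suptitle("Mandibular movement, one sensor, three planes")
fig.tight_layout()
plt.show()
```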

  1. Sadness is unique: Neural processing of emotions in speech prosody in musicians and non-musicians

    Directory of Open Access Journals (Sweden)

    Mona ePark

    2015-01-01

    Full Text Available Musical training has been shown to have positive effects on several aspects of speech processing; however, the effects of musical training on the neural processing of speech prosody conveying distinct emotions are not yet well understood. We used functional magnetic resonance imaging (fMRI) to investigate whether the neural responses to speech prosody conveying happiness, sadness, and fear differ between musicians and non-musicians. Differences in the processing of emotional speech prosody between the two groups were only observed when sadness was expressed. Musicians showed increased activation in the middle frontal gyrus, the anterior medial prefrontal cortex, the posterior cingulate cortex and the retrosplenial cortex. Our results suggest an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes.

  2. Movement Issues Identified in Movement ABC2 Checklist Parent Ratings for Students with Persisting Dysgraphia, Dyslexia, and OWL LD and Typical Literacy Learners

    Science.gov (United States)

    Nielsen, Kathleen; Henderson, Sheila; Barnett, Anna L.; Abbott, Robert D.; Berninger, Virginia

    2018-01-01

    Movement, which draws on motor skills and executive functions for managing them, plays an important role in literacy learning (e.g., movement of mouth during oral reading and movement of hand and fingers during writing); but relatively little research has focused on movement skills in students with specific learning disabilities as the current…

  3. Does dynamic information about the speaker's face contribute to semantic speech processing? ERP evidence.

    Science.gov (United States)

    Hernández-Gutiérrez, David; Abdel Rahman, Rasha; Martín-Loeches, Manuel; Muñoz, Francisco; Schacht, Annekathrin; Sommer, Werner

    2018-07-01

    Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of dynamic facial information on the semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concomitant with auditory sentences. Across three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; semantic processing therefore seems to be unaffected by the speaker's gaze and visual speech. Nevertheless, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate increased attentional processing of richer communicative contexts. The present findings also demonstrate that in natural communicative face-to-face encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is non-demanding. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
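
    The central quantity here, the proportion of acoustic energy above roughly 5 kHz, can be estimated in a few lines of Python; the sketch below is one simple way to do it, with a hypothetical mono recording as input.

```python
# Hedged sketch: fraction of spectral energy above 5 kHz in a recording.
# Assumes a hypothetical mono WAV file; a real analysis would use framed
# spectra and calibration, this is only a back-of-envelope estimate.
import numpy as np
from scipy.io import wavfile

fs, sig = wavfile.read("talker.wav")
sig = sig.astype(float)

power = np.abs(np.fft.rfft(sig)) ** 2
freqs = np.fft.rfftfreq(len(sig), d=1 / fs)

ratio = power[freqs >= 5000].sum() / power.sum()
print(f"Energy above 5 kHz: {ratio:.3%} ({10 * np.log10(ratio):.1f} dB re total)")
```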

  5. Acceptable noise level (ANL) with Danish and non-semantic speech materials in adult hearing-aid users

    DEFF Research Database (Denmark)

    Olsen, Steen Østergaard; Lantz, Johannes; Nielsen, Lars Holme

    2012-01-01

    The acceptable noise level (ANL) test is used for quantification of the amount of background noise subjects accept when listening to speech. This study investigates Danish hearing-aid users' ANL performance using Danish and non-semantic speech signals, the repeatability of ANL, and the association between ANL and the outcome of the international outcome inventory for hearing aids (IOI-HA).

  6. Tools for the assessment of childhood apraxia of speech.

    Science.gov (United States)

    Gubiani, Marileda Barichello; Pagliarin, Karina Carlesso; Keske-Soares, Marcia

    2015-01-01

    This study systematically reviews the literature on the main tools used to evaluate childhood apraxia of speech (CAS). The search strategy included the Scopus, PubMed, and Embase databases. Empirical studies that used tools for assessing CAS were selected. Articles were selected by two independent researchers. The search retrieved 695 articles, out of which 12 were included in the study. Five tools were identified: Verbal Motor Production Assessment for Children, Dynamic Evaluation of Motor Speech Skill, The Orofacial Praxis Test, Kaufman Speech Praxis Test for Children, and Madison Speech Assessment Protocol. There are few instruments available for CAS assessment, and most of them are intended to assess praxis and/or orofacial movements, sequences of orofacial movements, articulation of syllables and phonemes, spontaneous speech, and prosody. There are some tests for the assessment and diagnosis of CAS. However, few studies on this topic have been conducted at the national level, and few protocols exist to assess and assist in an accurate diagnosis.

  7. Speech and neurology-chemical impairment correlates

    Science.gov (United States)

    Hayre, Harb S.

    2002-05-01

    Speech correlates of alcohol/drug impairment and their neurological basis are presented, with suggestions for further research on impairment from poly-drug, medicine, inhalant, and chew use/abuse, and on the prediagnosis of many neuro- and endocrine-related disorders. Nerve cells all over the body detect chemical entry by smoking, injection, drinking, chewing, or skin absorption, and transmit neurosignals to their corresponding cerebral subsystems, which in turn affect the speech centers: Broca's and Wernicke's areas and the motor cortex. For instance, gustatory cells in the mouth, cranial and spinal nerve cells in the skin, and cilia/olfactory neurons in the nose are the intake-sensing nerve cells. Alcohol depression and brain cell damage were detected from telephone speech using IMPAIRLYZER-TM, and the results of these studies were presented at the 1996 ASA meeting in Indianapolis and the 2001 German Acoustical Society (DEGA) conference in Hamburg, Germany, respectively. Speech-based chemical impairment measure results were presented at the 2001 meeting of the ASA in Chicago. New data on neurotolerance-based chemical impairment for alcohol, drugs, and medicine shall be presented and shown not to fully support the NIDA/SAMHSA drug and alcohol thresholds used in the drug-testing domain.

  8. Trench mouth

    Science.gov (United States)

    ... gingivae). The term trench mouth comes from World War I, when this infection was common among soldiers "in the trenches." Risk factors for trench mouth include: emotional stress, poor oral hygiene, poor nutrition, smoking, and throat, tooth, or mouth infections. ...

  9. Innovative Speech Reconstructive Surgery

    OpenAIRE

    Hashem Shemshadi

    2003-01-01

    Proper speech functioning in human beings depends on precise coordination and timing balances in a series of complex neuromuscular movements and actions: starting from the prime energy source of air expelled by the respiratory system; delivering that air to trigger the vocal cords; swiftly shaping this phonatory episode into a comprehensible sound through RESONANCE; and finally coordinating all head and neck structures to elicit speech in ...

  10. Burning Mouth Syndrome and "Burning Mouth Syndrome".

    Science.gov (United States)

    Rifkind, Jacob Bernard

    2016-03-01

    Burning mouth syndrome is distressing to both the patient and practitioner unable to determine the cause of the patient's symptoms. Burning mouth syndrome is a diagnosis of exclusion, which is used only after nutritional deficiencies, mucosal disease, fungal infections, hormonal disturbances and contact stomatitis have been ruled out. This article will explore the many causes and treatment of patients who present with a chief complaint of "my mouth burns," including symptomatic treatment for those with burning mouth syndrome.

  11. [Attention deficit and understanding of non-literal meanings: the interpretation of indirect speech acts and idioms].

    Science.gov (United States)

    Crespo, N; Manghi, D; García, G; Cáceres, P

    To report on the oral comprehension of the non-literal meanings of indirect speech acts and idioms in everyday speech by children with attention deficit hyperactivity disorder (ADHD). The subjects in this study consisted of a sample of 29 Chilean schoolchildren aged between 6 and 13 with ADHD and a control group of children without ADHD sharing similar socio-demographic characteristics. A quantitative method was utilised: comprehension was measured individually by means of an interactive instrument. The children listened to a dialogue taken from a cartoon series that included indirect speech acts and idioms and they had to choose one of the three options they were given: literal, non-literal or distracter. The children without ADHD identified the non-literal meaning more often, especially in idioms. Likewise, it should be pointed out that whereas the children without ADHD increased their scores as their ages went up, those with ADHD remained at the same point. ADHD not only interferes in the inferential comprehension of non-literal meanings but also inhibits the development of this skill in subjects affected by it.

  12. Antigenic variation of foot-and-mouth disease virus serotype A

    NARCIS (Netherlands)

    A.B. Ludi (A.); D.L. Horton; Y. Li (Y.); M. Mahapatra (M.); D.P. King (D.); N.J. Knowles (N.); C.A. Russell (Colin); J.H. Paton; J.L.N. Wood; D.J. Smith (Derek James); J.M. Hammond (J.)

    2014-01-01

    The current measures to control foot-and-mouth disease (FMD) include vaccination, movement control and slaughter of infected or susceptible animals. One of the difficulties in controlling FMD by vaccination arises due to the substantial diversity found among the seven serotypes of FMD

  13. [Modeling developmental aspects of sensorimotor control of speech production].

    Science.gov (United States)

    Kröger, B J; Birkholz, P; Neuschaefer-Rube, C

    2007-05-01

    Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of the sensorimotor control of speech production was developed. The model is capable of demonstrating in detail the neural functions of different cortical areas during speech production. (i) Two sensory and two motor maps or neural representations and the appertaining neural mappings or projections establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from the lexical to the sensory and motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases--i.e. silent mouthing, quasi-stationary vocalic articulation, and realisation of articulatory protogestures--can be defined on the basis of our simulation studies using the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular, it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.

  14. Physiological Indices of Bilingualism: Oral–Motor Coordination and Speech Rate in Bengali–English Speakers

    Science.gov (United States)

    Chakraborty, Rahul; Goffman, Lisa; Smith, Anne

    2009-01-01

    Purpose To examine how age of immersion and proficiency in a 2nd language influence speech movement variability and speaking rate in both a 1st language and a 2nd language. Method A group of 21 Bengali–English bilingual speakers participated. Lip and jaw movements were recorded. For all 21 speakers, lip movement variability was assessed based on productions of Bengali (L1; 1st language) and English (L2; 2nd language) sentences. For analyses related to the influence of L2 proficiency on speech production processes, participants were sorted into low- (n = 7) and high-proficiency (n = 7) groups. Lip movement variability and speech rate were evaluated for both of these groups across L1 and L2 sentences. Results Surprisingly, adult bilingual speakers produced equally consistent speech movement patterns in their production of L1 and L2. When groups were sorted according to proficiency, highly proficient speakers were marginally more variable in their L1. In addition, there were some phoneme-specific effects, most markedly that segments not shared by both languages were treated differently in production. Consistent with previous studies, movement durations were longer for less proficient speakers in both L1 and L2. Interpretation In contrast to those of child learners, the speech motor systems of adult L2 speakers show a high degree of consistency. Such lack of variability presumably contributes to protracted difficulties with acquiring nativelike pronunciation in L2. The proficiency results suggest bidirectional interactions across L1 and L2, which is consistent with hypotheses regarding interference and the sharing of phonological space. A slower speech rate in less proficient speakers implies that there are increased task demands on speech production processes. PMID:18367680

  15. Speech and non-speech processing in children with phonological disorders: an electrophysiological study

    Directory of Open Access Journals (Sweden)

    Isabela Crivellaro Gonçalves

    2011-01-01

    Full Text Available OBJECTIVE: To determine whether neurophysiological auditory brainstem responses to clicks and repeated speech stimuli differ between typically developing children and children with phonological disorders. INTRODUCTION: Phonological disorders are language impairments resulting from inadequate use of adult phonological language rules and are among the most common speech and language disorders in children (prevalence: 8-9%). Our hypothesis is that children with phonological disorders have basic differences in the way that their brains encode acoustic signals at brainstem level when compared to normal counterparts. METHODS: We recorded click and speech evoked auditory brainstem responses in 18 typically developing children (control group) and in 18 children who were clinically diagnosed with phonological disorders (research group). The age range of the children was from 7-11 years. RESULTS: The research group exhibited significantly longer latency responses to click stimuli (waves I, III and V) and speech stimuli (waves V and A) when compared to the control group. DISCUSSION: These results suggest that the abnormal encoding of speech sounds may be a biological marker of phonological disorders. However, these results cannot define the biological origins of phonological problems. We also observed that speech-evoked auditory brainstem responses had a higher specificity/sensitivity for identifying phonological disorders than click-evoked auditory brainstem responses. CONCLUSIONS: Early stages of the auditory pathway processing of an acoustic stimulus are not similar in typically developing children and those with phonological disorders. These findings suggest that there are brainstem auditory pathway abnormalities in children with phonological disorders.

  16. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based...... on this model. The basic model used in this thesis is the harmonic model which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...

  18. Predicting Speech Intelligibility Decline in Amyotrophic Lateral Sclerosis Based on the Deterioration of Individual Speech Subsystems

    Science.gov (United States)

    Yunusova, Yana; Wang, Jun; Zinman, Lorne; Pattee, Gary L.; Berry, James D.; Perry, Bridget; Green, Jordan R.

    2016-01-01

    Purpose To determine the mechanisms of speech intelligibility impairment due to neurologic impairments, intelligibility decline was modeled as a function of co-occurring changes in the articulatory, resonatory, phonatory, and respiratory subsystems. Method Sixty-six individuals diagnosed with amyotrophic lateral sclerosis (ALS) were studied longitudinally. The disease-related changes in articulatory, resonatory, phonatory, and respiratory subsystems were quantified using multiple instrumental measures, which were subjected to a principal component analysis and mixed effects models to derive a set of speech subsystem predictors. A stepwise approach was used to select the best set of subsystem predictors to model the overall decline in intelligibility. Results Intelligibility was modeled as a function of five predictors that corresponded to velocities of lip and jaw movements (articulatory), number of syllable repetitions in the alternating motion rate task (articulatory), nasal airflow (resonatory), maximum fundamental frequency (phonatory), and speech pauses (respiratory). The model accounted for 95.6% of the variance in intelligibility, among which the articulatory predictors showed the most substantial independent contribution (57.7%). Conclusion Articulatory impairments characterized by reduced velocities of lip and jaw movements and resonatory impairments characterized by increased nasal airflow served as the subsystem predictors of the longitudinal decline of speech intelligibility in ALS. Declines in maximum performance tasks such as the alternating motion rate preceded declines in intelligibility, thus serving as early predictors of bulbar dysfunction. Following the rapid decline in speech intelligibility, a precipitous decline in maximum performance tasks subsequently occurred. PMID:27148967
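
    The modeling pipeline described above (instrumental subsystem measures reduced by principal component analysis, then entered into a mixed-effects model of intelligibility) can be sketched as follows; the file and column names are hypothetical stand-ins for the study's measures, and the two-component PCA is a simplification for illustration.

```python
# Hedged sketch: subsystem measures -> PCA -> mixed-effects model of
# intelligibility. All file and column names are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.formula.api as smf

df = pd.read_csv("als_longitudinal.csv")  # one row per subject x visit
subsystem_cols = ["lip_jaw_velocity", "amr_rate", "nasal_airflow",
                  "max_f0", "pause_fraction"]

scaled = StandardScaler().fit_transform(df[subsystem_cols])
df[["pc1", "pc2"]] = PCA(n_components=2).fit_transform(scaled)

# Random intercept per speaker accounts for repeated longitudinal measures
model = smf.mixedlm("intelligibility ~ pc1 + pc2", df,
                    groups=df["subject"]).fit()
print(model.summary())
```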

  19. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  20. Distribution of cow-calf producers' beliefs regarding gathering and holding their cattle and observing animal movement restrictions during an outbreak of foot-and-mouth disease.

    Science.gov (United States)

    Delgado, Amy H; Norby, Bo; Scott, H Morgan; Dean, Wesley; McIntosh, W Alex; Bush, Eric

    2014-12-01

    The voluntary cooperation of producers with disease control measures such as movement restrictions and gathering cattle for testing, vaccination, or depopulation is critical to the success of many disease control programs. A cross-sectional survey was conducted in Texas in order to determine the distribution of key beliefs about obeying movement restrictions and gathering and holding cattle for disease control purposes. Two questionnaires were developed and distributed to two separate representative samples of Texas cow-calf producers. The context for each behavior was provided through the use of scenarios in the questionnaire. Belief strength was measured using a 7-point Likert-like scale. Producers surveyed were unsure about the possible negative consequences of gathering and holding their cattle when requested by authorities, suggesting a key need for communication in this area during an outbreak. Respondents identified a lack of manpower and/or financial resources to gather and hold cattle as barriers to their cooperation with orders to gather and hold cattle. Producers also expressed uncertainty about the efficacy of movement restrictions to prevent the spread of foot-and-mouth disease and concern about possible feed shortages or animal suffering. However, there are emotional benefits to complying with movement restrictions and strong social expectations of cooperation with any movement bans put in place. Published by Elsevier B.V.

  1. Comparing speech and nonspeech context effects across timescales in coarticulatory contexts.

    Science.gov (United States)

    Viswanathan, Navin; Kelty-Stephen, Damian G

    2018-02-01

    Context effects are ubiquitous in speech perception and reflect the ability of human listeners to successfully perceive highly variable speech signals. In the study of how listeners compensate for coarticulatory variability, past studies have found similar effects for speech and for tone analogues of speech, and have taken this similarity as strong support for speech-neutral, general auditory mechanisms of compensation for coarticulation. In this manuscript, we revisit compensation for coarticulation by replacing standard button-press responses with mouse-tracking responses and examining both standard geometric measures of uncertainty as well as newer information-theoretic measures that separate fast from slow mouse movements. We found that when our analyses were restricted to end-state responses, tone and speech contexts appeared to produce similar effects. However, a more detailed time-course analysis revealed systematic differences between speech and tone contexts, such that listeners' responses to speech contexts, but not to tone contexts, changed across the experimental session. Analyses of the time course of effects within trials using mouse tracking indicated that speech contexts elicited fewer x-position flips but more area under the curve (AUC) and maximum deviation (MD), and did so in the slower portions of mouse-tracking movements. Our results indicate critical differences between the time course of speech and nonspeech context effects and suggest that general auditory explanations, motivated by the apparent similarity of these effects, be reexamined.
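
    The geometric mouse-tracking measures named here (x-position flips, maximum deviation, area under the curve) have standard definitions relative to the straight line from start to end point; the sketch below is a minimal numpy illustration on toy data, not the authors' analysis code.

```python
# Hedged numpy sketch of three mouse-tracking measures; assumes a
# time-normalized trajectory whose first and last samples define the
# ideal straight-line response. Toy data only.
import numpy as np

def mouse_measures(x, y):
    # x-flips: reversals in the horizontal direction of motion
    dx = np.diff(x)
    dx = dx[dx != 0]
    x_flips = int(np.sum(np.diff(np.sign(dx)) != 0))

    # signed perpendicular deviation of each sample from the start->end line
    p0 = np.array([x[0], y[0]])
    line = np.array([x[-1], y[-1]]) - p0
    pts = np.column_stack([x, y]) - p0
    dev = (line[0] * pts[:, 1] - line[1] * pts[:, 0]) / np.linalg.norm(line)

    md = dev[np.argmax(np.abs(dev))]  # maximum deviation (MD), signed
    auc = dev.mean()                  # mean signed deviation, proportional to AUC
    return x_flips, md, auc

# toy trajectory that bows away from the direct path before reaching the target
t = np.linspace(0.0, 1.0, 101)
print(mouse_measures(t + 0.3 * np.sin(np.pi * t), t))
```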

  2. Laban movement analysis to classify emotions from motion

    Science.gov (United States)

    Dewan, Swati; Agarwal, Shubham; Singh, Navjyoti

    2018-04-01

    In this paper, we present a study of Laban Movement Analysis (LMA) for understanding basic human emotions from nonverbal human behaviors. While there are many studies on understanding behavioral patterns in natural language processing and speech processing applications, understanding emotions or behavior from non-verbal human motion is still a very challenging and largely unexplored field. LMA provides a rich overview of the scope of movement possibilities. These basic elements can be used for generating movement or for describing movement. They provide an inroad to understanding movement and to developing movement efficiency and expressiveness. Each human being combines these movement factors in his/her own unique way and organizes them to create phrases and relationships which reveal personal, artistic, or cultural style. In this work, we build a motion descriptor based on a close reading of Laban theory. The proposed descriptor builds on previous work and encodes experiential features by using temporal windows. We present a more conceptually elaborate formulation of Laban theory and test it in a relatively new domain of behavioral research with applications in human-machine interaction. The recognition of affective human communication may provide developers with a rich source of information for creating systems that are capable of interacting well with humans. We test our algorithm on the UCLIC dataset, which consists of body motions of 13 non-professional actors portraying anger, fear, happiness, and sadness. We achieve an accuracy of 87.30% on this dataset.
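
    As a rough illustration of what a windowed, LMA-inspired motion descriptor could look like, here is a hypothetical sketch; the mapping of kinematic statistics to Effort/Shape qualities is an assumption for illustration only, not the authors' actual descriptor.

```python
# Hedged sketch of a windowed motion descriptor; the joint-position array,
# window sizes, and the LMA interpretation of each statistic are assumptions.
import numpy as np

def windowed_descriptor(joints, win=30, hop=15):
    """joints: T x J x 3 array of joint positions over T frames."""
    feats = []
    for start in range(0, len(joints) - win + 1, hop):
        w = joints[start:start + win]
        vel = np.diff(w, axis=0)
        acc = np.diff(vel, axis=0)
        jerk = np.diff(acc, axis=0)
        feats.append(np.hstack([
            np.linalg.norm(vel, axis=-1).mean(),    # Effort/Time-like cue
            np.linalg.norm(acc, axis=-1).mean(),    # Effort/Weight-like cue
            np.linalg.norm(jerk, axis=-1).mean(),   # Effort/Flow-like cue
            w.reshape(win, -1).std(axis=0).mean(),  # Shape-like spatial spread
        ]))
    return np.array(feats)  # one feature row per temporal window

demo = np.random.default_rng(1).normal(size=(120, 15, 3))  # 120 frames, 15 joints
print(windowed_descriptor(demo).shape)
```

    Feature rows like these would then feed a standard classifier trained on the emotion labels.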

  3. Control and prediction components of movement planning in stuttering vs. nonstuttering adults

    Science.gov (United States)

    Daliri, Ayoub; Prokopenko, Roman A.; Flanagan, J. Randall; Max, Ludo

    2014-01-01

    Purpose Stuttering individuals show speech and nonspeech sensorimotor deficiencies. To perform accurate movements, the sensorimotor system needs to generate appropriate control signals and correctly predict their sensory consequences. Using a reaching task, we examined the integrity of these control and prediction components, separately, for movements unrelated to the speech motor system. Method Nine stuttering and nine nonstuttering adults made fast reaching movements to visual targets while sliding an object under the index finger. To quantify control, we determined initial direction error and end-point error. To quantify prediction, we calculated the correlation between vertical and horizontal forces applied to the object—an index of how well vertical force (preventing slip) anticipated direction-dependent variations in horizontal force (moving the object). Results Directional and end-point error were significantly larger for the stuttering group. Both groups performed similarly in scaling vertical force with horizontal force. Conclusions The stuttering group's reduced reaching accuracy suggests limitations in generating control signals for voluntary movements, even for non-orofacial effectors. Typical scaling of vertical force with horizontal force suggests an intact ability to predict the consequences of planned control signals. Stuttering may be associated with generalized deficiencies in planning control signals rather than predicting the consequences of those signals. PMID:25203459
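
    The control and prediction measures described above can be sketched directly. Array shapes, the early-movement window, and the function names below are assumptions for illustration, not the study's exact analysis code:

```python
import numpy as np

def reach_control_measures(traj, target, t0=5):
    """Control measures for one reaching trial (illustrative sketch).

    traj: (samples, 2) hand positions; target: (2,) goal position.
    Initial direction error: angle between the movement direction over
    the first t0 samples and the straight line to the target.
    End-point error: distance between the final position and the target.
    """
    traj, target = np.asarray(traj, float), np.asarray(target, float)
    v0 = traj[t0] - traj[0]                 # early movement vector
    g = target - traj[0]                    # straight-to-target vector
    cos = v0 @ g / (np.linalg.norm(v0) * np.linalg.norm(g))
    init_dir_err = np.degrees(np.arccos(np.clip(cos, -1, 1)))
    endpoint_err = np.linalg.norm(traj[-1] - target)
    return init_dir_err, endpoint_err

def prediction_index(f_vertical, f_horizontal):
    """Prediction measure: correlation between the vertical (slip-
    preventing) force and the horizontal (object-moving) force."""
    return np.corrcoef(f_vertical, f_horizontal)[0, 1]
```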

  4. Muslim and Non-Muslim Adolescents’ Reasoning About Freedom of Speech and Minority Rights

    NARCIS (Netherlands)

    Verkuyten, Maykel; Slooter, Luuk

    2008-01-01

    An experimental questionnaire study, conducted in the Netherlands, examined adolescents’ reasoning about freedom of speech and minority rights. Muslim minority and non-Muslim majority adolescents (12–18 years) made judgments of different types of behaviors and different contexts. The group membership of participants had a clear effect.

  5. Musical and linguistic expertise influence pre-attentive and attentive processing of non-speech sounds.

    Science.gov (United States)

    Marie, Céline; Kujala, Teija; Besson, Mireille

    2012-04-01

    The aim of this experiment was two-fold. Our first goal was to determine whether linguistic expertise influences the pre-attentive [as reflected by the Mismatch Negativity (MMN)] and the attentive processing (as reflected by behavioural discrimination accuracy) of non-speech, harmonic sounds. The second was to directly compare the effects of linguistic and musical expertise. To this end, we compared non-musician native speakers of a quantity language, Finnish, in which duration is a phonemically contrastive cue, with French musicians and French non-musicians. Results revealed that pre-attentive and attentive processing of duration deviants was enhanced in Finnish non-musicians and French musicians compared to French non-musicians. By contrast, MMN in French musicians was larger than in both Finns and French non-musicians for frequency deviants, whereas no between-group differences were found for intensity deviants. By showing similar effects of linguistic and musical expertise, these results argue in favor of common processing of duration in music and speech. Copyright © 2010 Elsevier Srl. All rights reserved.

  6. Image quality in non-gated versus gated reconstruction of tongue motion using magnetic resonance imaging: a comparison using automated image processing

    Energy Technology Data Exchange (ETDEWEB)

    Alvey, Christopher; Orphanidou, C.; Coleman, J.; McIntyre, A.; Golding, S.; Kochanski, G. [University of Oxford, Oxford (United Kingdom)

    2008-11-15

    The use of gated or ECG-triggered MR is a well-established technique, and developments in coil technology have enabled this approach to be applied to areas other than the heart. However, the image quality of gated (ECG or cine) versus non-gated or real-time imaging has not been extensively evaluated in the mouth. We evaluate two image sequences by developing an automatic image processing technique which compares how well the image represents known anatomy. Four subjects practised experimental poly-syllabic sentences prior to MR scanning. Using a 1.5 T MR unit, we acquired comparable gated (using an artificial trigger) and non-gated sagittal images during speech. We then used an image processing algorithm to model the image grey level along lines that cross the airway. Each line involved an eight-parameter non-linear equation modelling proton densities, edges, and dimensions. Gated and non-gated images show similar spatial resolution, with non-gated images being slightly sharper (10% better resolution, less than 1 pixel). However, the gated sequences generated images of substantially lower inherent noise, and substantially better discrimination between air and tissue. Additionally, the gated sequences demonstrate a very much greater temporal resolution. Overall, image quality is better with gated imaging techniques, especially given their superior temporal resolution. Gated techniques are limited by the repeatability of the motions involved, and we have shown that speech to a metronome can be sufficiently repeatable to allow high-quality gated magnetic resonance images. We suggest that gated sequences may be useful for evaluating other types of repetitive movement involving the joints and limb motions. (orig.)
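
    The abstract does not spell out the eight-parameter grey-level model, but the general approach of fitting a soft-edged intensity profile along a line crossing the airway can be sketched. The simplified two-edge model below (six parameters, with SciPy's curve_fit standing in for whatever optimizer the authors used) is an illustrative stand-in only:

```python
import numpy as np
from scipy.optimize import curve_fit

def airway_profile(x, i_tissue, i_air, e1, w1, e2, w2):
    """Grey level along a line crossing the airway: tissue intensity
    dropping to air intensity between two soft (logistic) edges at
    positions e1 < e2 with widths w1, w2."""
    drop = 1 / (1 + np.exp(-(x - e1) / w1))      # tissue -> air edge
    rise = 1 / (1 + np.exp(-(x - e2) / w2))      # air -> tissue edge
    return i_tissue + (i_air - i_tissue) * (drop - rise)

def fit_line_profile(grey):
    """Fit the model to grey levels sampled along one line; the fitted
    edge positions give sub-pixel estimates of the airway boundaries."""
    grey = np.asarray(grey, float)
    x = np.arange(len(grey))
    p0 = [grey.max(), grey.min(),
          len(grey) * 0.3, 1.0, len(grey) * 0.7, 1.0]
    popt, _ = curve_fit(airway_profile, x, grey, p0=p0, maxfev=5000)
    return popt
```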

  7. Image quality in non-gated versus gated reconstruction of tongue motion using magnetic resonance imaging: a comparison using automated image processing

    International Nuclear Information System (INIS)

    Alvey, Christopher; Orphanidou, C.; Coleman, J.; McIntyre, A.; Golding, S.; Kochanski, G.

    2008-01-01

    The use of gated or ECG triggered MR is a well-established technique and developments in coil technology have enabled this approach to be applied to areas other than the heart. However, the image quality of gated (ECG or cine) versus non-gated or real-time has not been extensively evaluated in the mouth. We evaluate two image sequences by developing an automatic image processing technique which compares how well the image represents known anatomy. Four subjects practised experimental poly-syllabic sentences prior to MR scanning. Using a 1.5 T MR unit, we acquired comparable gated (using an artificial trigger) and non-gated sagittal images during speech. We then used an image processing algorithm to model the image grey along lines that cross the airway. Each line involved an eight parameter non-linear equation to model of proton densities, edges, and dimensions. Gated and non-gated images show similar spatial resolution, with non-gated images being slightly sharper (10% better resolution, less than 1 pixel). However, the gated sequences generated images of substantially lower inherent noise, and substantially better discrimination between air and tissue. Additionally, the gated sequences demonstrate a very much greater temporal resolution. Overall, image quality is better with gated imaging techniques, especially given their superior temporal resolution. Gated techniques are limited by the repeatability of the motions involved, and we have shown that speech to a metronome can be sufficiently repeatable to allow high-quality gated magnetic resonance imaging images. We suggest that gated sequences may be useful for evaluating other types of repetitive movement involving the joints and limb motions. (orig.)

  8. Characterization of authorship speeches in classroom

    Directory of Open Access Journals (Sweden)

    Daniella de Almeida Santos

    2007-08-01

    Our paper discusses how the teacher's speech can shape the construction of arguments by students engaged in the task of solving an experimental problem in science classes. We sought to understand how teacher and students relate to each other in a discursive movement that structures the meaning of the experimental data obtained. Our focus is on the processes of speech authorship, both the students' and the teacher's, in the episodes in which the actors of the teaching and learning process organize their speech, mediated by the experimental activity.

  9. 33 CFR 207.270 - Tallahatchie River, Miss., between Batesville and the mouth; logging.

    Science.gov (United States)

    2010-07-01

    ... Tallahatchie River, Miss., between Batesville and the mouth; logging. (a) The floating of “sack”, rafts, or of... sufficient capacity to properly manage the movement of the raft and to keep it from being an obstruction to...

  10. Gender Dependence in Mouth Opening Dimensions in Normal Adult Malaysians Population

    OpenAIRE

    Shaari, Ramizu; Hwa, Teoh Eng; Rahman, Shaifulizan Abdul

    2011-01-01

    While measurement of mouth opening is an important clinical examination in the diagnosis and management of oral disease, data on non-Western populations are limited. This study was therefore conducted to determine the range of mouth opening in normal Malaysian male and female adults. A total of 34 dental students of Universiti Sains Malaysia (USM) were chosen randomly and their maximum mouth opening was measured after being asked to open their mouth sufficiently to accommodate three fingers. Measu...

  11. Impairments of speech fluency in Lewy body spectrum disorder.

    Science.gov (United States)

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is recalibrated after exposure to a constant timing difference between auditory and visual signals in non-speech stimuli. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of audiovisual synchrony perception) on the speech signal after observation of speech stimuli that had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., the proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) depended at least partly on the exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of audiovisual synchrony perception), using stimuli identical to those of Experiment 1, to exclude the possibility that this modulation of synchrony perception was solely attributable to response strategy. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, audiovisual synchrony perception for speech can be modulated following exposure to a constant lag in both direct and indirect measurement. Our results suggest that temporal recalibration occurs not only for non-speech signals but also for monosyllabic speech at the perceptual level.

  13. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.

  14. Nonverbal oral apraxia in primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Botha, Hugo; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Whitwell, Jennifer L; Josephs, Keith A

    2014-05-13

    The goal of this study was to explore the prevalence of nonverbal oral apraxia (NVOA), its association with other forms of apraxia, and associated imaging findings in patients with primary progressive aphasia (PPA) and progressive apraxia of speech (PAOS). Patients with a degenerative speech or language disorder were prospectively recruited and diagnosed with a subtype of PPA or with PAOS. All patients had comprehensive speech and language examinations. Voxel-based morphometry was performed to determine whether atrophy of a specific region correlated with the presence of NVOA. Eighty-nine patients were identified, of which 34 had PAOS, 9 had agrammatic PPA, 41 had logopenic aphasia, and 5 had semantic dementia. NVOA was very common among patients with PAOS but was found in patients with PPA as well. Several patients exhibited only one of NVOA or apraxia of speech. Among patients with apraxia of speech, the severity of the apraxia of speech was predictive of NVOA, whereas ideomotor apraxia severity was predictive of the presence of NVOA in those without apraxia of speech. Bilateral atrophy of the prefrontal cortex anterior to the premotor area and supplementary motor area was associated with NVOA. Apraxia of speech, NVOA, and ideomotor apraxia are at least partially separable disorders. The association of NVOA and apraxia of speech likely results from the proximity of the area reported here and the premotor area, which has been implicated in apraxia of speech. The association of ideomotor apraxia and NVOA among patients without apraxia of speech could represent disruption of modules shared by nonverbal oral movements and limb movements.

  15. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    Science.gov (United States)

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single-subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor speech movement parameters and perceptual accuracy. Analysis of the speech probe data showed that all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance on the motor speech movement patterns targeted during that intervention. The preliminary data presented in this study contribute evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor speech movement patterns and speech production accuracy of children with cerebral palsy.

  16. Primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Jung, Youngsin; Duffy, Joseph R; Josephs, Keith A

    2013-09-01

    Primary progressive aphasia is a neurodegenerative syndrome characterized by progressive language dysfunction. The majority of primary progressive aphasia cases can be classified into three subtypes: nonfluent/agrammatic, semantic, and logopenic variants. Each variant presents with unique clinical features, and is associated with distinctive underlying pathology and neuroimaging findings. Unlike primary progressive aphasia, apraxia of speech is a disorder that involves inaccurate production of sounds secondary to impaired planning or programming of speech movements. Primary progressive apraxia of speech is a neurodegenerative form of apraxia of speech, and it should be distinguished from primary progressive aphasia given its discrete clinicopathological presentation. Recently, there have been substantial advances in our understanding of these speech and language disorders. The clinical, neuroimaging, and histopathological features of primary progressive aphasia and apraxia of speech are reviewed in this article. The distinctions among these disorders for accurate diagnosis are increasingly important from a prognostic and therapeutic standpoint.

  17. Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Julia eIrwin

    2014-05-01

    Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and those with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker’s mouth than TD controls. No differences in gaze were found for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.

  18. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling eLee

    2014-08-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians had practiced piano in the past three years, the more sensitive they became to temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
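
    One common way to summarize a temporal integration window from synchrony judgments of this kind is to fit a Gaussian to the proportion of "synchronous" responses across SOAs and report its width. The sketch below uses invented response proportions for illustration; the paper's actual fitting procedure may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion of "synchronous" judgments at each audiovisual
# stimulus-onset asynchrony (negative = auditory leading).
# These numbers are made up for illustration.
soa = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360], float)
p_sync = np.array([.05, .10, .20, .45, .70, .90, .95,
                   .92, .80, .55, .30, .15, .08])

def gauss(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gauss, soa, p_sync, p0=[1, 0, 120])

# A common summary of the temporal integration window is the full
# width at half maximum; a narrower window = a more sensitive observer.
fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma
print(f"window centre {mu:.0f} ms, width (FWHM) {fwhm:.0f} ms")
```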

  19. Inter-day Reliability of the IDEEA Activity Monitor for Measuring Movement and Non-Movement Behaviors in Older Adults.

    Science.gov (United States)

    de la Cámara, Miguel Ángel; Higueras-Fresnillo, Sara; Martinez-Gomez, David; Veiga, Oscar L

    2018-05-29

    The inter-day reliability of the Intelligent Device for Energy Expenditure and Activity (IDEEA) has not been studied to date. The purpose of this study was to examine the inter-day variability and reliability of data collected with the IDEEA on two consecutive days, and to predict the number of days needed to provide a reliable estimate of several movement behaviors (walking and climbing stairs), non-movement behaviors (lying, reclining, sitting), and standing in older adults. The sample included 126 older adults (74 women) who wore the IDEEA for 48 h. Results showed low variability between the two days, and reliability ranged from moderate (ICC = 0.34) to high (ICC = 0.80) for most of the movement and non-movement behaviors analyzed. The Bland-Altman plots showed high-to-moderate agreement between days, and the Spearman-Brown formula estimated that between 1.2 and 9.1 days of monitoring with the IDEEA are needed to achieve ICCs ≥ 0.70 in older adults, for sitting and climbing stairs respectively.
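
    The Spearman-Brown projection step can be made concrete. Assuming the reported ICCs are reliabilities of the two-day average (an assumption on our part, though it reproduces the 1.2- and 9.1-day figures in the abstract), the calculation looks like this:

```python
def single_day_icc(icc_k, k=2):
    """Invert Spearman-Brown: reliability of one day, given the
    reliability of a k-day average."""
    return icc_k / (k - (k - 1) * icc_k)

def days_needed(icc_1, target=0.70):
    """Spearman-Brown prophecy: days of monitoring needed for the
    averaged measure to reach the target reliability."""
    return (target / (1 - target)) * (1 - icc_1) / icc_1

# ICC range reported in the abstract (0.80 high, 0.34 moderate):
for icc2 in (0.80, 0.34):
    print(round(days_needed(single_day_icc(icc2)), 1))
# -> 1.2 and 9.1 days, matching the abstract's estimates
```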

  20. Muslim and Non-Muslim Adolescents' Reasoning about Freedom of Speech and Minority Rights

    Science.gov (United States)

    Verkuyten, Maykel; Slooter, Luuk

    2008-01-01

    An experimental questionnaire study, conducted in the Netherlands, examined adolescents' reasoning about freedom of speech and minority rights. Muslim minority and non-Muslim majority adolescents (12-18 years) made judgments of different types of behaviors and different contexts. The group membership of participants had a clear effect. Muslim…

  1. Vowel Generation for Children with Cerebral Palsy using Myocontrol of a Speech Synthesizer

    Directory of Open Access Journals (Sweden)

    Chuanxin M Niu

    2015-01-01

    For children with severe cerebral palsy (CP), social and emotional interactions can be significantly limited due to impaired speech motor function. However, if it is possible to extract continuous voluntary control signals from the electromyograph (EMG) of limb muscles, then EMG may be used to drive the synthesis of intelligible speech with controllable speed, intonation, and articulation. We report an important first step: the feasibility of controlling a vowel synthesizer using non-speech muscles. A classic formant-based speech synthesizer is adapted to allow the lowest two formants to be controlled by surface EMG from skeletal muscles. EMG signals are filtered using a non-linear Bayesian filtering algorithm that provides the high bandwidth and accuracy required for speech tasks. The frequencies of the first two formants determine points in a 2D plane, and vowels are targets on this plane. We focus on testing the overall feasibility of producing intelligible English vowels with myocontrol using two straightforward EMG-formant mappings. More mappings can be tested in the future to optimize intelligibility. Vowel generation was tested on 10 healthy adults and 4 patients with dyskinetic CP. Five English vowels were generated by subjects in pseudo-random order, after only 10 minutes of device familiarization. The fraction of vowels correctly identified by 4 naive listeners exceeded 80% for the vowels generated by healthy adults and 57% for vowels generated by patients with CP. Our goal is a continuous virtual voice with personalized intonation and articulation that will restore not only the intellectual content but also the social and emotional content of speech for children and adults with severe movement disorders.
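
    A minimal source-filter sketch shows what a two-formant vowel synthesizer of this kind amounts to: an impulse-train glottal source passed through second-order resonators at F1 and F2. The bandwidths, F0, and the idea that filtered EMG would continuously update f1 and f2 are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.signal import lfilter

def resonator(signal, freq, bw, fs):
    """Second-order IIR resonator (a digital formant filter)."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1, -2 * r * np.cos(theta), r ** 2]
    b = [1 - r]                            # rough gain normalization
    return lfilter(b, a, signal)

def synth_vowel(f1, f2, fs=16000, f0=120, dur=0.5):
    """Two-formant vowel: impulse-train source through F1 and F2
    resonators. In a myocontrol setting, f1 and f2 would be updated
    continuously from the Bayesian-filtered EMG signals."""
    n = int(fs * dur)
    source = np.zeros(n)
    source[::fs // f0] = 1.0               # glottal pulses at F0
    out = resonator(source, f1, 80, fs)    # first formant
    out = resonator(out, f2, 120, fs)      # second formant
    return out / np.abs(out).max()

# Typical formant targets: /a/ has high F1, low-mid F2; /i/ the reverse.
a_vowel = synth_vowel(800, 1200)
i_vowel = synth_vowel(300, 2300)
```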

  2. Asymmetry in infants' selective attention to facial features during visual processing of infant-directed speech

    OpenAIRE

    Smith, Nicholas A.; Gibilisco, Colleen R.; Meisinger, Rachel E.; Hankey, Maren

    2013-01-01

    Two experiments used eye tracking to examine how infant and adult observers distribute their eye gaze on videos of a mother producing infant- and adult-directed speech. Both groups showed greater attention to the eyes than to the nose and mouth, as well as an asymmetrical focus on the talker’s right eye for infant-directed speech stimuli. Observers continued to look more at the talker’s apparent right eye when the video stimuli were mirror flipped, suggesting that the asymmetry reflects a per...

  3. Speech–Language Pathology Evaluation and Management of Hyperkinetic Disorders Affecting Speech and Swallowing Function

    Science.gov (United States)

    Barkmeier-Kraemer, Julie M.; Clark, Heather M.

    2017-01-01

    Background Hyperkinetic dysarthria is characterized by abnormal involuntary movements affecting respiratory, phonatory, and articulatory structures impacting speech and deglutition. Speech–language pathologists (SLPs) play an important role in the evaluation and management of dysarthria and dysphagia. This review describes the standard clinical evaluation and treatment approaches by SLPs for addressing impaired speech and deglutition in specific hyperkinetic dysarthria populations. Methods A literature review was conducted using the data sources of PubMed, Cochrane Library, and Google Scholar. Search terms included 1) hyperkinetic dysarthria, essential voice tremor, voice tremor, vocal tremor, spasmodic dysphonia, spastic dysphonia, oromandibular dystonia, Meige syndrome, orofacial, cervical dystonia, dystonia, dyskinesia, chorea, Huntington’s Disease, myoclonus; and evaluation/treatment terms: 2) Speech–Language Pathology, Speech Pathology, Evaluation, Assessment, Dysphagia, Swallowing, Treatment, Management, and diagnosis. Results The standard SLP clinical speech and swallowing evaluation of chorea/Huntington’s disease, myoclonus, focal and segmental dystonia, and essential vocal tremor typically includes 1) case history; 2) examination of the tone, symmetry, and sensorimotor function of the speech structures during non-speech, speech and swallowing relevant activities (i.e., cranial nerve assessment); 3) evaluation of speech characteristics; and 4) patient self-report of the impact of their disorder on activities of daily living. SLP management of individuals with hyperkinetic dysarthria includes behavioral and compensatory strategies for addressing compromised speech and intelligibility. Swallowing disorders are managed based on individual symptoms and the underlying pathophysiology determined during evaluation. Discussion SLPs play an important role in contributing to the differential diagnosis and management of impaired speech and deglutition

  4. Speech, language and swallowing in Huntington’s Disease

    Directory of Open Access Journals (Sweden)

    Maryluz Camargo-Mendoza

    2017-04-01

    Huntington’s disease (HD) has been described as a genetic condition caused by a mutation in the CAG (cytosine-adenine-guanine) nucleotide sequence. Depending on the stage of the disease, people may have difficulties with speech, language, and swallowing. The purpose of this paper is to describe these difficulties in detail, as well as to provide an account of the speech and language therapy approach to this condition. Regarding speech, characteristics typical of hyperkinetic dysarthria can be found, owing to the underlying choreic movements. The speech of people with HD tends to show shorter sentences, with much simpler syntactic structures, and difficulties in tasks that require complex cognitive processing. Moreover, dysphagia may be present and progresses as the disease develops. A timely, comprehensive, and effective speech-language intervention is essential to improve the quality of life of people with HD and contribute to their communicative welfare.

  5. Designing acoustics for linguistically diverse classrooms: Effects of background noise, reverberation and talker foreign accent on speech comprehension by native and non-native English-speaking listeners

    Science.gov (United States)

    Peng, Zhao Ellen

    The current classroom acoustics standard (ANSI S12.60-2010) recommends core learning spaces not to exceed background noise level (BNL) of 35 dBA and reverberation time (RT) of 0.6 second, based on speech intelligibility performance mainly by the native English-speaking population. Existing literature has not correlated these recommended values well with student learning outcomes. With a growing population of non-native English speakers in American classrooms, the special needs for perceiving degraded speech among non-native listeners, either due to realistic room acoustics or talker foreign accent, have not been addressed in the current standard. This research seeks to investigate the effects of BNL and RT on the comprehension of English speech from native English and native Mandarin Chinese talkers as perceived by native and non-native English listeners, and to provide acoustic design guidelines to supplement the existing standard. This dissertation presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic conditions, if the English speech is produced by talkers of native English (Study 1) versus native Mandarin Chinese (Study 2)? Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for baseline English proficiency level, and completed dual tasks simultaneously involving speech comprehension and adaptive dot-tracing under 15 acoustic conditions, comprised of three BNL conditions (RC-30, 40, and 50) and five RT scenarios (0.4 to 1.2 seconds). The results show that BNL and RT negatively affect both objective performance and subjective perception of speech comprehension, more severely for non

  6. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Real-time continuous visual biofeedback in the treatment of speech breathing disorders following childhood traumatic brain injury: report of one case.

    Science.gov (United States)

    Murdoch, B E; Pitt, G; Theodoros, D G; Ward, E C

    1999-01-01

    The efficacy of traditional and physiological biofeedback methods for modifying abnormal speech breathing patterns was investigated in a child with persistent dysarthria following severe traumatic brain injury (TBI). An A-B-A-B single-subject experimental research design was utilized to provide the subject with two exclusive periods of therapy for speech breathing, based on traditional therapy techniques and physiological biofeedback methods, respectively. Traditional therapy techniques included establishing optimal posture for speech breathing, explanation of the movement of the respiratory muscles, and a hierarchy of non-speech and speech tasks focusing on establishing an appropriate level of sub-glottal air pressure, and improving the subject's control of inhalation and exhalation. The biofeedback phase of therapy utilized variable inductance plethysmography (or Respitrace) to provide real-time, continuous visual biofeedback of ribcage circumference during breathing. As in traditional therapy, a hierarchy of non-speech and speech tasks were devised to improve the subject's control of his respiratory pattern. Throughout the project, the subject's respiratory support for speech was assessed both instrumentally and perceptually. Instrumental assessment included kinematic and spirometric measures, and perceptual assessment included the Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and analysis of a speech sample. The results of the study demonstrated that real-time continuous visual biofeedback techniques for modifying speech breathing patterns were not only effective, but superior to the traditional therapy techniques for modifying abnormal speech breathing patterns in a child with persistent dysarthria following severe TBI. These results show that physiological biofeedback techniques are potentially useful clinical tools for the remediation of speech breathing impairment in the paediatric dysarthric population.

  8. Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing.

    Science.gov (United States)

    Bowers, Andrew; Saltuklaroglu, Tim; Harkrider, Ashley; Cuellar, Megan

    2013-01-01

    Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor μ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials). Sixteen participants (15 female and 1 male) were asked to passively listen to or actively identify speech and tone-sweeps in a two-alternative forced-choice discrimination task while the electroencephalograph (EEG) was recorded from 32 channels. The stimuli were presented at signal-to-noise ratios (SNRs) at which discrimination accuracy was high (i.e., 80-100%) and at low SNRs producing discrimination performance at chance. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. ICA revealed left and right sensorimotor µ components for 14/16 and 13/16 participants, respectively, identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of the left and right lateralized µ component clusters revealed significant (pFDR-corrected) µ suppression on accurate speech discrimination trials relative to chance trials following stimulus offset. Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units that are then synthesized with incoming sensory cues during active as opposed to passive processing. Future directions and possible translational value for clinical populations in which sensorimotor integration may play a functional role are discussed.
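
    The core µ-suppression computation can be sketched in a few lines: band-limit a component's time series, take its Hilbert envelope, and express post-stimulus power relative to a pre-stimulus baseline in dB. The band edges, filter order, and window handling below are assumptions for illustration, not the study's exact EEGLAB pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mu_suppression(eeg, fs, baseline, window, band=(8, 13)):
    """Event-related (de)synchronization of the sensorimotor mu rhythm.

    eeg: 1-D time series from an ICA component or channel;
    baseline, window: (start, stop) sample indices for the
    pre-stimulus baseline and the post-stimulus analysis window.
    Returns band power in dB relative to baseline; negative values
    indicate suppression (desynchronization).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], "bandpass")
    power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2
    base = power[baseline[0]:baseline[1]].mean()
    post = power[window[0]:window[1]].mean()
    return 10 * np.log10(post / base)
```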

  9. Bilateral, posterior parietal polymicrogyria as part of speech therapy ...

    African Journals Online (AJOL)

    in abnormal development of the deep layers of the cerebral cortex and production ... focal, unilateral, bilateral or asymmetrical, and have been described in all areas of .... did not recognise food in the mouth, no tongue movement was observed.

  10. Barack Obama’s pauses and gestures in humorous speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    The main aim of this paper is to investigate speech pauses and gestures as means of engaging the audience and presenting the humorous message in an effective way. The data consist of two speeches by US President Barack Obama at the 2011 and 2016 Annual White House Correspondents’ Association Dinner... produced significantly more hand gestures in 2016 than in 2011. An analysis of the hand gestures produced by Barack Obama in two political speeches held at the United Nations in 2011 and 2016 confirms that the president produced significantly fewer communicative co-speech hand gestures during his speeches... and they emphasise the speech segment which they follow or precede. We also found a highly significant correlation between Obama’s speech pauses and audience response. Obama produces numerous head movements, facial expressions, and hand gestures, and their functions are related to both discourse content and structure...

  11. High gamma oscillations in medial temporal lobe during overt production of speech and gestures.

    Science.gov (United States)

    Marstaller, Lars; Burianová, Hana; Sowman, Paul F

    2014-01-01

    The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15-25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50-90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.

  12. High gamma oscillations in medial temporal lobe during overt production of speech and gestures.

    Directory of Open Access Journals (Sweden)

    Lars Marstaller

    The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15-25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50-90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.

  13. Movement of the external ear in human embryo.

    Science.gov (United States)

    Kagurasho, Miho; Yamada, Shigehito; Uwabe, Chigako; Kose, Katsumi; Takakuwa, Tetsuya

    2012-02-01

    External ears, one of the major face components, show an interesting movement during craniofacial morphogenesis in human embryo. The present study was performed to see if movement of the external ears in a human embryo could be explained by differential growth. In all, 171 samples between Carnegie stage (CS) 17 and CS 23 were selected from MR image datasets of human embryos obtained from the Kyoto Collection of Human Embryos. The three-dimensional absolute position of 13 representative anatomical landmarks, including external and internal ears, from MRI data was traced to evaluate the movement between the different stages with identical magnification. Two different sets of reference axes were selected for evaluation and comparison of the movements. When the pituitary gland and the first cervical vertebra were selected as a reference axis, the 13 anatomical landmarks of the face spread out within the same region as the embryo enlarged and changed shape. The external ear did move mainly laterally, but not cranially. The distance between the external and internal ear stayed approximately constant. Three-dimensionally, the external ear located in the caudal ventral parts of the internal ear in CS 17, moved mainly laterally until CS 23. When surface landmarks eyes and mouth were selected as a reference axis, external ears moved from the caudal lateral ventral region to the position between eyes and mouth during development. The results indicate that movement of all anatomical landmarks, including external and internal ears, can be explained by differential growth. Also, when the external ear is recognized as one of the facial landmarks and having a relative position to other landmarks such as the eyes and mouth, the external ears seem to move cranially. © 2012 Kagurasho et al; licensee BioMed Central Ltd.

  14. Dancers Entrain More Effectively than Non-Dancers to Another Actor’s Movements

    Directory of Open Access Journals (Sweden)

    Auriel eWashburn

    2014-10-01

    For many everyday sensorimotor tasks, trained dancers have been found to exhibit distinct and sometimes superior (more stable or robust) patterns of behavior compared to non-dancers. Past research has demonstrated that experts in fields requiring specialized physical training and behavioral control exhibit superior interpersonal coordination capabilities for expertise-related tasks. To date, however, no published studies have compared dancers’ abilities to coordinate their movements with the movements of another individual—i.e., during a so-called visual-motor interpersonal coordination task. The current study was designed to investigate whether trained dancers would be better able to coordinate with a partner performing short sequences of dance-like movements than non-dancers. Movement time series were recorded for individual dancers and non-dancers asked to synchronize with a confederate during three different movement sequences characterized by distinct dance styles (i.e., dance team routine, contemporary ballet, mixed style), without hearing any auditory signals or music. A diverse range of linear and nonlinear analyses (i.e., cross-correlation, cross-recurrence quantification analysis (CRQA), and cross-wavelet analysis) provided converging measures of coordination across multiple time scales. While overall levels of interpersonal coordination were influenced by differences in movement sequence for both groups, dancers consistently displayed higher levels of coordination with the confederate at both short and long time scales. These findings demonstrate that the visual-motor coordination capabilities of trained dancers allow them to better synchronize with other individuals performing dance-like movements than non-dancers. Further investigation of similar tasks may help to increase the understanding of visual-motor entrainment in general, as well as provide insight into the effects of focused training on visual-motor and interpersonal
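
    Of the converging measures listed, cross-correlation is the simplest to sketch: a windowed, normalized cross-correlation that tracks both the strength and the lag of entrainment between participant and confederate over the trial. The window and lag parameters below are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def windowed_xcorr(a, b, fs, max_lag=0.5, win=4.0, step=1.0):
    """Peak normalized cross-correlation (and its lag) between two
    movement time series, computed in sliding windows.

    a, b: 1-D position or velocity series sampled at fs Hz.
    Returns a list of (peak r, lag in seconds) per window; lag > 0
    means series a leads series b.
    """
    n_lag = int(max_lag * fs)
    n_win = int(win * fs)
    n_step = int(step * fs)
    out = []
    for s in range(0, len(a) - n_win, n_step):
        x = a[s:s + n_win] - np.mean(a[s:s + n_win])
        y = b[s:s + n_win] - np.mean(b[s:s + n_win])
        denom = np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))
        lags = range(-n_lag, n_lag + 1)
        r = [np.sum(x[max(0, -l):n_win - max(0, l)] *
                    y[max(0, l):n_win - max(0, -l)]) / denom
             for l in lags]
        k = int(np.argmax(np.abs(r)))
        out.append((r[k], lags[k] / fs))
    return out
```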

  15. The Interaction of Lexical Characteristics and Speech Production in Parkinson's Disease

    Science.gov (United States)

    Chiu, Yi-Fang; Forrest, Karen

    2017-01-01

    Purpose: This study sought to investigate the interaction of speech movement execution with higher order lexical parameters. The authors examined how lexical characteristics affect speech output in individuals with Parkinson's disease (PD) and healthy control (HC) speakers. Method: Twenty speakers with PD and 12 healthy speakers read sentences…

  16. Non-linear Dynamics of Speech in Schizophrenia

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Simonsen, Arndis; Weed, Ethan

    (regularity and complexity) of speech. Our aims are (1) to achieve a more fine-grained understanding of the speech patterns in schizophrenia than has previously been achieved using traditional, linear measures of prosody and fluency, and (2) to employ the results in a supervised machine-learning process... ...-effects inference. SANS and SAPS scores were predicted using a 10-fold cross-validated multiple linear regression. Both analyses were iterated 1000 times to test for the stability of results. Results: Voice dynamics allowed discrimination of patients with schizophrenia from healthy controls with a balanced accuracy of 85
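
    The symptom-prediction step can be sketched with scikit-learn. The features and scores below are random stand-ins for the real non-linear voice-dynamics measures and SANS/SAPS ratings; only the cross-validation structure mirrors the abstract:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

# Stand-in data: 40 speakers, 5 voice-dynamics features
# (e.g. recurrence rate, entropy, ...) and fake SANS scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = X @ rng.normal(size=5) + rng.normal(size=40)

# 10-fold cross-validated multiple linear regression, as in the
# abstract; in the study this loop would itself be iterated 1000
# times over resampled folds to test the stability of the results.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(LinearRegression(), X, y, cv=cv)
r = np.corrcoef(y, pred)[0, 1]
print(f"cross-validated prediction r = {r:.2f}")
```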

  17. Burning Mouth Syndrome

    Science.gov (United States)

    Burning Mouth Syndrome (BMS) is a painful, complex condition often described ... or other symptoms.

  18. Coupling dynamics in speech gestures: amplitude and rate influences.

    Science.gov (United States)

    van Lieshout, Pascal H H M

    2017-08-01

    Speech is a complex oral motor function that involves multiple articulators that need to be coordinated in space and time at relatively high movement speeds. How this is accomplished remains an important and largely unresolved empirical question. From a coordination dynamics perspective, coordination involves the assembly of coordinative units that are characterized by inherently stable coupling patterns that act as attractor states for task-specific actions. In the motor control literature, one particular model, formulated by Haken et al. (Biol Cybern 51(5):347-356, 1985) and known as HKB, has received considerable attention for the way it can account for changes in the nature and stability of specific coordination patterns between limbs or between limbs and external stimuli. In this model (and related versions), movement amplitude is considered a critical factor in the formation of these patterns. Several studies have demonstrated its role in bimanual coordination and similar types of tasks, but for speech motor control such studies are lacking. The current study describes a systematic approach to evaluating the impact of movement amplitude and movement duration on coordination stability in the production of bilabial and tongue body gestures for specific vowel-consonant-vowel strings. The vowel combinations that were used induced a natural contrast in movement amplitude at three speaking rate conditions (slow, habitual, fast). Data were collected on ten young adults using electromagnetic articulography, recording movement data from lips and tongue with high temporal and spatial precision. The results showed that with small movement amplitudes there is a decrease in coordination stability, independent of movement duration. These findings were robust across all individuals and are interpreted as further evidence that principles of coupling dynamics operate in the oral motor control system as in other motor systems and can be explained in terms of coupling
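
    The HKB model referenced above has a standard closed form worth recalling: the relative phase φ between two oscillating effectors evolves down the gradient of a two-cosine potential,

```latex
\dot{\phi} \;=\; -a\,\sin\phi \;-\; 2b\,\sin 2\phi ,
\qquad
V(\phi) \;=\; -a\,\cos\phi \;-\; b\,\cos 2\phi .
```

    In-phase (φ = 0) and anti-phase (φ = ±π) coordination are the attractors of V; since V''(π) = 4b − a, the anti-phase pattern loses stability when the coupling ratio b/a falls below 1/4, leaving only the in-phase state. This loss of stability under changing task parameters is exactly the kind of effect the study probes with its amplitude and speaking-rate manipulations.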

  19. Numerical simulation of sediment transport from Ba Lat Mouth and the process of coastal morphology

    International Nuclear Information System (INIS)

    Chung, Dang Huu

    2008-01-01

    This paper presents an application of a 3D numerical model to simulate one-vertical-layer sediment transport and the coastal morphodynamical process for the Hai Hau coastal area located in the north of Vietnam, where a very large amount of suspended sediment is carried into the sea from Ba Lat Mouth every year. Four simulations are based on real wave data supplied by the observation station close to Ba Lat Mouth. The conditions of wind and suspended sand concentration at Ba Lat Mouth are assumed from practice. The computed results show that the hydrodynamic factors strongly depend on the wind condition, and these factors govern the direction and the range of suspended sand transport, especially in the shallow-water region. In the deep-water region this influence is not really clear when the wind force is not strong enough to modify the tidal current. In the area close to Ba Lat Mouth the flow velocity is very large, with a maximum flood flow of about 2.6 m s⁻¹ and a maximum ebb flow of about 1 m s⁻¹ at the mouth, and this is one of the reasons for strong erosion. In the case of tidal flow only, the suspended sand concentration decreases, resulting in local deposition. Therefore, the area influenced by suspended transport is small, about 12 km from the mouth. Under wind and waves, the suspended sand transport reaches the end of the computational area within a few days, especially in the cases with wind from the north-east-north. Through these simulation results, a common tendency of sediment movement from the north to the south is identified for the Hai Hau coastal area. In addition, the results also show that the coast suffers from strong erosion, especially the region near Ba Lat Mouth. From the simulation results it can be seen that the movement of the Red River sand along the Vietnamese coast is quite possible, which answers a long-standing question. Furthermore, although the suspended sediment concentration is quite large, it is

  20. Atypical lateralization of ERP response to native and non-native speech in infants at risk for autism spectrum disorder.

    Science.gov (United States)

    Seery, Anne M; Vogel-Farley, Vanessa; Tager-Flusberg, Helen; Nelson, Charles A

    2013-07-01

    Language impairment is common in autism spectrum disorders (ASD) and is often accompanied by atypical neural lateralization. However, it is unclear when in development language impairment or atypical lateralization first emerges. To address these questions, we recorded event-related potentials (ERPs) to native and non-native speech contrasts longitudinally in infants at risk for ASD (HRA) over the first year of life to determine whether atypical lateralization is present as an endophenotype early in development and whether these infants show delay in a very basic precursor of language acquisition: phonemic perceptual narrowing. ERP response for the HRA group to a non-native speech contrast revealed a trajectory of perceptual narrowing similar to a group of low-risk controls (LRC), suggesting that phonemic perceptual narrowing does not appear to be delayed in these high-risk infants. In contrast there were significant group differences in the development of lateralized ERP response to speech: between 6 and 12 months the LRC group displayed a lateralized response to the speech sounds, while the HRA group failed to display this pattern. We suggest the possibility that atypical lateralization to speech may be an ASD endophenotype over the first year of life. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Morphological brain differences between adult stutterers and non-stutterers

    Directory of Open Access Journals (Sweden)

    Hänggi Jürgen

    2004-12-01

    Background The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM), the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question.

  2. Impairments and compensation in mouth and limb use in free feeding after unilateral dopamine depletions in a rat analog of human Parkinson's disease.

    Science.gov (United States)

    Whishaw, I Q; Coles, B L; Pellis, S M; Miklyaeva, E I

    1997-03-01

    Rats depleted unilaterally of dopamine (DA) with the neurotoxin 6-hydroxydopamine (6-OHDA) have contralateral sensorimotor deficits. These include pronounced impairments in using the contralateral limbs (bad limbs) for skilled movements in tests of reaching and bar pressing. There has been no systematic examination of the changes that take place in movements of spontaneous food handling. This was the purpose of the present study. Rats were filmed as they picked up and ate pieces of angel hair pasta (Capelli d'Angelo), a food item that challenges the rats to use delicate and bilaterally coordinated limb and paw movements. Control rats picked up the food with their incisors, transferred it to their paws, and manipulated it using a variety of bilaterally coordinated limb and paw movements. The DA-depleted rats were impaired in both their mouth and paw movements. They seemed unable to use their teeth to grasp the food and so used their tongue. They did not use the bad side of their mouth to chew and relied upon the good side of their mouth. The bad paw was impaired in grasping the food, grasped only with a whole paw grip, did not make manipulatory movements, and did not open to release the food or open to regain support once the food was eaten. Although the rats improved over a 30-day recovery period, much of the improvement was due to compensatory adjustments. That unilateral DA-depletion results in profound contralateral impairments of the mouth and limb with improvements due mainly to compensatory adjustments confirms a role for dopaminergic systems in motor control. Additionally, the behavioral tests described here could provide important adjuncts for assessing therapies in this animal analog of human Parkinson's disease.

  3. Speech–Language Pathology Evaluation and Management of Hyperkinetic Disorders Affecting Speech and Swallowing Function

    Directory of Open Access Journals (Sweden)

    Julie M. Barkmeier-Kraemer

    2017-09-01

    Full Text Available Background: Hyperkinetic dysarthria is characterized by abnormal involuntary movements affecting respiratory, phonatory, and articulatory structures, impacting speech and deglutition. Speech–language pathologists (SLPs) play an important role in the evaluation and management of dysarthria and dysphagia. This review describes the standard clinical evaluation and treatment approaches by SLPs for addressing impaired speech and deglutition in specific hyperkinetic dysarthria populations. Methods: A literature review was conducted using the data sources of PubMed, Cochrane Library, and Google Scholar. Search terms included (1) hyperkinetic dysarthria, essential voice tremor, voice tremor, vocal tremor, spasmodic dysphonia, spastic dysphonia, oromandibular dystonia, Meige syndrome, orofacial, cervical dystonia, dystonia, dyskinesia, chorea, Huntington's disease, myoclonus; and evaluation/treatment terms: (2) Speech–Language Pathology, Speech Pathology, Evaluation, Assessment, Dysphagia, Swallowing, Treatment, Management, and diagnosis. Results: The standard SLP clinical speech and swallowing evaluation of chorea/Huntington's disease, myoclonus, focal and segmental dystonia, and essential vocal tremor typically includes (1) case history; (2) examination of the tone, symmetry, and sensorimotor function of the speech structures during non-speech, speech, and swallowing-relevant activities (i.e., cranial nerve assessment); (3) evaluation of speech characteristics; and (4) patient self-report of the impact of their disorder on activities of daily living. SLP management of individuals with hyperkinetic dysarthria includes behavioral and compensatory strategies for addressing compromised speech and intelligibility. Swallowing disorders are managed based on individual symptoms and the underlying pathophysiology determined during evaluation. Discussion: SLPs play an important role in contributing to the differential diagnosis and management of impaired speech and

  4. Priorities of Dialogic Speech Teaching Methodology at Higher Non-Linguistic School

    Directory of Open Access Journals (Sweden)

    Vida Asanavičienė

    2011-04-01

    Full Text Available The article deals with a number of relevant methodological issues. First of all, the author analyses the psychological peculiarities of dialogic speech and states that a dialogue is the product of at least two persons. In this view, dialogic speech, unlike monologic speech, happens impromptu and is not prepared in advance. Dialogic speech is mainly situational in character. The linguistic nature of dialogic speech, in the author's opinion, lies in the process of exchanging replications, which are coherent in structural and functional character. The author classifies dialogue groups by the number of replications and by communicative parameters. The basic goal of dialogic speech teaching is developing the abilities and skills which enable learners to exchange replications. The author distinguishes two basic stages of dialogic speech teaching: 1. Training of the ability to exchange replications during communicative exercises. 2. Development of skills by training the capability to perform exercises of a creative nature during a group dialogue, conversation or debate.

  5. Dry Mouth (Xerostomia)

    Science.gov (United States)

    Symptoms of dry mouth include trouble chewing, swallowing, tasting, or speaking; a burning feeling in the mouth; a dry feeling in the throat; and cracked lips. The Food and Drug Administration provides information on dry mouth and offers advice, and the NIDCR Sjogren's Syndrome Clinic develops new therapies.

  6. Somatic and movement inductions phantom limb in non-amputees

    Science.gov (United States)

    Casas, D. M.; Gentiletti, G. G.; Braidot, A. A.

    2016-04-01

    The mirror box illusion is a tool for treating phantom limb pain; this article proposes inducing the phantom limb syndrome in the upper limb of non-amputees using a neurological trick of the mirror box. Two study situations are considered: (a) somatic induction, a qualitative test of reports from the literature, and (b) a novel proposal, motor induction, reported objectively by recording surface EEG. Three cases are proposed for the motor illusion, using a grasping movement: (1) control, in which the movement is made; (2) illusion, in which the mirror box is used; and (3) imagination, in which no movement is executed and the subject only imagines its execution. Three different tasks are registered for each case (left hand, right hand, and both hands). In the somatic experience, a clear response to the illusion was observed in 64% of the subjects. In the motor illusion experience, cortical activation is detected in both hemispheres of the primary motor cortex during the illusion, while the hidden hand remains motionless. These preliminary findings on the phantom limb in non-amputees can be a tool for neuro-rehabilitation and neuro-prosthesis control training.

  7. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
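
    As an illustration of this classification-image logic, the sketch below simulates masked trials and recovers the informative frames by contrasting the mean masks on /apa/ versus non-/apa/ trials, with a permutation null for significance. The trial counts, the 5-frame "relevant" window, and all variable names are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_frames = 2000, 30              # trials x video frames (toy sizes)

    # Binary visibility masks: 1 = mouth region visible in that frame.
    masks = rng.integers(0, 2, size=(n_trials, n_frames))

    # Simulate responses: frames 10-14 carry the /apa/-diagnostic information.
    relevant = np.zeros(n_frames)
    relevant[10:15] = 1.0
    p_apa = 0.05 + 0.30 * (masks @ relevant) / relevant.sum()
    responses = rng.random(n_trials) < p_apa   # True = participant heard /apa/

    # Classification image: mean mask on /apa/ trials minus mean on other trials.
    cimg = masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)

    # Permutation null: shuffle response labels to z-score each frame.
    null = np.array([masks[p].mean(axis=0) - masks[~p].mean(axis=0)
                     for p in (rng.permutation(responses) for _ in range(500))])
    z = (cimg - null.mean(axis=0)) / null.std(axis=0)
    print("frames with |z| > 3:", np.where(np.abs(z) > 3)[0])
    ```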

  9. Motor cortex hand area and speech: implications for the development of language.

    Science.gov (United States)

    Meister, Ingo Gerrit; Boroojerdi, Babak; Foltys, Henrik; Sparing, Roland; Huber, Walter; Töpper, Rudolf

    2003-01-01

    Recently a growing body of evidence has suggested that a functional link exists between the hand motor area of the language dominant hemisphere and the regions subserving language processing. We examined the excitability of the hand motor area and the leg motor area during reading aloud and during non-verbal oral movements using transcranial magnetic stimulation (TMS). During reading aloud, but not before or afterwards, excitability was increased in the hand motor area of the dominant hemisphere. This reading effect was found to be independent of the duration of speech. No such effect could be found in the contralateral hemisphere. The excitability of the leg area of the motor cortex remained unchanged during reading aloud. The excitability during non-verbal oral movements was slightly increased in both hemispheres. Our results are consistent with previous findings and may indicate a specific functional connection between the hand motor area and the cortical language network.

  10. The role of gestures in spatial working memory and speech.

    Science.gov (United States)

    Morsella, Ezequiel; Krauss, Robert M

    2004-01-01

    Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

  11. Do long-term tongue piercings affect speech quality?

    Science.gov (United States)

    Heinen, Esther; Birkholz, Peter; Willmes, Klaus; Neuschaefer-Rube, Christiane

    2017-10-01

    To explore possible effects of tongue piercing on perceived speech quality. Using a quasi-experimental design, we analyzed the effect of tongue piercing on speech in a perception experiment. Samples of spontaneous speech and read speech were recorded from 20 long-term pierced and 20 non-pierced individuals (10 males, 10 females each). The individuals having a tongue piercing were recorded with attached and removed piercing. The audio samples were blindly rated by 26 female and 20 male laypersons and by 5 female speech-language pathologists with regard to perceived speech quality along 5 dimensions: speech clarity, speech rate, prosody, rhythm and fluency. We found no statistically significant differences for any of the speech quality dimensions between the pierced and non-pierced individuals, neither for the read nor for the spontaneous speech. In addition, neither length nor position of piercing had a significant effect on speech quality. The removal of tongue piercings had no effects on speech performance either. Rating differences between laypersons and speech-language pathologists were not dependent on the presence of a tongue piercing. People are able to perfectly adapt their articulation to long-term tongue piercings such that their speech quality is not perceptually affected.

  12. Mapping (and modeling) physiological movements during EEG-fMRI recordings: the added value of the video acquired simultaneously.

    Science.gov (United States)

    Ruggieri, Andrea; Vaudano, Anna Elisabetta; Benuzzi, Francesca; Serafini, Marco; Gessaroli, Giuliana; Farinelli, Valentina; Nichelli, Paolo Frigio; Meletti, Stefano

    2015-01-15

    During resting-state EEG-fMRI studies in epilepsy, patients' spontaneous head-face movements occur frequently. We tested the usefulness of synchronous video recording to identify and model the fMRI changes associated with non-epileptic movements to improve sensitivity and specificity of fMRI maps related to interictal epileptiform discharges (IED). Categorization of different facial/cranial movements during EEG-fMRI was obtained for 38 patients [with benign epilepsy with centro-temporal spikes (BECTS, n=16); with idiopathic generalized epilepsy (IGE, n=17); focal symptomatic/cryptogenic epilepsy (n=5)]. We compared at single subject- and at group-level the IED-related fMRI maps obtained with and without additional regressors related to spontaneous movements. As secondary aim, we considered facial movements as events of interest to test the usefulness of video information to obtain fMRI maps of the following face movements: swallowing, mouth-tongue movements, and blinking. Video information substantially improved the identification and classification of the artifacts with respect to the EEG observation alone (mean gain of 28 events per exam). Inclusion of physiological activities as additional regressors in the GLM model demonstrated an increased Z-score and number of voxels of the global maxima and/or new BOLD clusters in around three quarters of the patients. Video-related fMRI maps for swallowing, mouth-tongue movements, and blinking were comparable to the ones obtained in previous task-based fMRI studies. Video acquisition during EEG-fMRI is a useful source of information. Modeling physiological movements in EEG-fMRI studies for epilepsy will lead to more informative IED-related fMRI maps in different epileptic conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
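
    Concretely, "modeling physiological movements" means adding extra convolved event regressors to the GLM design matrix so that movement-related variance is not absorbed by the IED regressor. A minimal NumPy sketch with an approximate SPM-style double-gamma HRF; the onset times, HRF constants, and effect sizes are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.stats import gamma

    TR, n_scans = 2.0, 300                           # 600 s acquisition (toy)
    t = np.arange(0, 32, TR)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)  # double-gamma HRF (approx.)
    hrf /= hrf.sum()

    def regressor(onsets_sec):
        """Stick function at event onsets convolved with the HRF."""
        sticks = np.zeros(n_scans)
        sticks[(np.asarray(onsets_sec) / TR).astype(int)] = 1.0
        return np.convolve(sticks, hrf)[:n_scans]

    ied = regressor([30, 95, 210, 400])              # interictal discharges (EEG)
    swallow = regressor([55, 150, 330, 480])         # video-identified swallowing
    blink = regressor(np.arange(10, 590, 25))        # video-identified blinks

    # Design matrix including the movement confounds plus a constant term.
    X = np.column_stack([ied, swallow, blink, np.ones(n_scans)])
    y = 0.8 * ied + 0.5 * swallow + np.random.default_rng(1).normal(0, 0.3, n_scans)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("betas (IED, swallow, blink, const):", np.round(beta, 2))
    ```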

  13. Modelling the Architecture of Phonetic Plans: Evidence from Apraxia of Speech

    Science.gov (United States)

    Ziegler, Wolfram

    2009-01-01

    In theories of spoken language production, the gestural code prescribing the movements of the speech organs is usually viewed as a linear string of holistic, encapsulated, hard-wired, phonetic plans, e.g., of the size of phonemes or syllables. Interactions between phonetic units on the surface of overt speech are commonly attributed to either the…

  14. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use.

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Wagener, Kirsten C; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean age = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory tests (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with pure-tone average (PTA), age, and verbal intelligence emerging as the three most important predictors. In the complementary analysis, individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that verbal intelligence had predictive value in both groups, whereas age and PTA emerged as significant predictors only in the NU group.

  15. Silent Speech Recognition as an Alternative Communication Device for Persons with Laryngectomy.

    Science.gov (United States)

    Meltzner, Geoffrey S; Heaton, James T; Deng, Yunbin; De Luca, Gianluca; Roy, Serge H; Kline, Joshua C

    2017-12-01

    Each year thousands of individuals require surgical removal of their larynx (voice box) due to trauma or disease, and thereby require an alternative voice source or assistive device to verbally communicate. Although natural voice is lost after laryngectomy, most muscles controlling speech articulation remain intact. Surface electromyographic (sEMG) activity of speech musculature can be recorded from the neck and face, and used for automatic speech recognition to provide speech-to-text or synthesized speech as an alternative means of communication. This is true even when speech is mouthed or spoken in a silent (subvocal) manner, making it an appropriate communication platform after laryngectomy. In this study, 8 individuals at least 6 months after total laryngectomy were recorded using 8 sEMG sensors on their face (4) and neck (4) while reading phrases constructed from a 2,500-word vocabulary. A unique set of phrases were used for training phoneme-based recognition models for each of the 39 commonly used phonemes in English, and the remaining phrases were used for testing word recognition of the models based on phoneme identification from running speech. Word error rates were on average 10.3% for the full 8-sensor set (averaging 9.5% for the top 4 participants), and 13.6% when reducing the sensor set to 4 locations per individual (n=7). This study provides a compelling proof-of-concept for sEMG-based alaryngeal speech recognition, with the strong potential to further improve recognition performance.
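
    A toy version of the recognition front end can make the pipeline concrete: window the multichannel sEMG, extract amplitude features per channel, and classify frames against phoneme labels. The features, frame counts, and classifier below are illustrative stand-ins, not the authors' phoneme-based recognition models.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_frames, n_sensors = 5000, 8          # 8 sEMG channels (4 face + 4 neck)

    # Toy per-frame features: RMS amplitude and a zero-crossing-like rate.
    rms = rng.gamma(2.0, 1.0, size=(n_frames, n_sensors))
    zcr = rng.random((n_frames, n_sensors))
    X = np.hstack([rms, zcr])

    # Toy labels from the 39-phoneme set, with weak structure injected so the
    # classifier has something to learn (real data would supply this).
    y = rng.integers(0, 39, size=n_frames)
    X[:, 0] += 0.05 * y
    X[:, 8] -= 0.02 * y

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("frame-level accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```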

  16. 75 FR 50880 - TRICARE: Non-Physician Referrals for Physical Therapy, Occupational Therapy, and Speech Therapy

    Science.gov (United States)

    2010-08-18

    ... 0720-AB36 TRICARE: Non-Physician Referrals for Physical Therapy, Occupational Therapy, and Speech... referrals of beneficiaries to the Military Health System for physical therapy, occupational therapy, and... practitioners will be allowed to issue referrals to patients for physical therapy, occupational therapy, and...

  17. Comparative evaluation of six ELISAs for the detection of antibodies to the non-structural proteins of foot-and-mouth disease virus

    DEFF Research Database (Denmark)

    Brocchi, E.; Bergmann, I.E.; Dekker, A.

    2006-01-01

    To validate the use of serology in substantiating freedom from infection after foot-and-mouth disease (FMD) outbreaks have been controlled by measures that include vaccination, 3551 sera were tested with six assays that detect antibodies to the non-structural proteins of FMD virus. The sera came...

  18. Filtering the Unknown: Speech Activity Detection in Heterogeneous Video Collections

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; Wooters, Chuck; Ordelman, Roeland J.F.

    2007-01-01

    In this paper we discuss the speech activity detection system that we used for detecting speech regions in the Dutch TRECVID video collection. The system is designed to filter non-speech like music or sound effects out of the signal without the use of predefined non-speech models. Because the system

  19. Children with speech sound disorder: Comparing a non-linguistic auditory approach with a phonological intervention approach to improve phonological skills

    Directory of Open Access Journals (Sweden)

    Cristina eMurphy

    2015-02-01

    Full Text Available This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder. A total of 17 children, aged 7-12 years, with speech sound disorder were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production, and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g., backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g., phonological organisation and awareness). Both interventions consisted of twelve 45-minute sessions delivered twice per week, for a total of nine hours. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement in phonological skills was observed in either group. Inter-group analysis demonstrated significant differences in improvement following training between the groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and in both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the more effective intervention approach, it was not sufficient to promote the enhancement of phonological skills.

  20. Burning mouth syndrome: Clinical description, pathophysiological approach, and a new therapeutic option.

    Science.gov (United States)

    Cárcamo Fonfría, A; Gómez-Vicente, L; Pedraza, M I; Cuadrado-Pérez, M L; Guerrero Peral, A L; Porta-Etessam, J

    2017-05-01

    Burning mouth syndrome is defined as a scorching sensation in the mouth in the absence of any local lesions or systemic disease that would explain the complaint. The condition responds poorly to commonly used treatments and may become very disabling. We prospectively analysed the clinical and demographic characteristics and response to treatment in 6 cases of burning mouth syndrome diagnosed at 2 tertiary hospital headache units. Six female patients between the ages of 34 and 82 years reported symptoms compatible with burning mouth syndrome. In 5 of them, the burning worsened at the end of the day; 4 reported symptom relief with tongue movements. Neurological examinations and laboratory findings were normal in all patients, and their dental examinations revealed no buccal lesions. Each patient had previously received conventional treatments without amelioration. Pramipexole was initiated in doses between 0.36 mg and 1.05 mg per day, resulting in clear improvement of symptoms in all cases, a situation which continued after a 4-year follow-up period. Burning mouth syndrome is a condition of unknown aetiology that shares certain clinical patterns and treatment responses with restless legs syndrome. Dopamine agonists should be regarded as first-line treatment for this entity. Copyright © 2015 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  1. Update knowledge of dry mouth- A guideline for dentists

    African Journals Online (AJOL)

    Results: There are no clearly established protocols for the treatment of dry mouth. The condition affects both sexes and is more frequent at night than during the day.

  2. Mouth ulcers

    Science.gov (United States)

    Causes of mouth ulcers include gingivostomatitis, herpes simplex (fever blister), leukoplakia, oral cancer, oral lichen planus, oral thrush, and skin sores caused by histoplasmosis.

  3. Speech rate in Parkinson's disease: A controlled study.

    Science.gov (United States)

    Martínez-Sánchez, F; Meilán, J J G; Carro, J; Gómez Íñiguez, C; Millian-Morell, L; Pujante Valverde, I M; López-Alburquerque, T; López, D E

    2016-09-01

    Speech disturbances will affect most patients with Parkinson's disease (PD) over the course of the disease. The origin and severity of these symptoms are of clinical and diagnostic interest. To evaluate the clinical pattern of speech impairment in PD patients and identify significant differences in speech rate and articulation compared to control subjects. Speech rate and articulation in a reading task were measured using an automatic analytical method. A total of 39 PD patients in the 'on' state and 45 age- and sex-matched asymptomatic controls participated in the study. None of the patients experienced dyskinesias or motor fluctuations during the test. The patients with PD displayed a significant reduction in speech and articulation rates; there were no significant correlations between the studied speech parameters and patient characteristics such as L-dopa dose, disease duration, age, UPDRS III scores, and Hoehn & Yahr stage. Patients with PD show a characteristic pattern of declining speech rate. These results suggest that in PD, disfluencies are the result of the movement disorder affecting the physiology of speech production systems. Copyright © 2014 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
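
    A common automatic approach to the two rates counts syllable nuclei as peaks in the smoothed intensity envelope, dividing the count by total duration (speech rate) or by phonated time only (articulation rate). The sketch below is a generic version of that idea with scipy; the thresholds and file name are assumptions, and this is not necessarily the method used in the study.

    ```python
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, filtfilt, find_peaks, hilbert

    def speech_rates(path, silence_db=-30.0):
        sr, x = wavfile.read(path)
        x = x.astype(float) / (np.max(np.abs(x)) + 1e-12)
        env = np.abs(hilbert(x))                 # amplitude envelope
        b, a = butter(2, 8.0 / (sr / 2))         # keep slow (<~8 Hz) modulations
        db = 20 * np.log10(filtfilt(b, a, env) + 1e-9)
        # Syllable nuclei: envelope peaks above the silence floor, >=100 ms apart.
        peaks, _ = find_peaks(db, height=silence_db, distance=int(0.1 * sr))
        total_s = len(x) / sr
        phonated_s = np.sum(db > silence_db) / sr
        return len(peaks) / total_s, len(peaks) / phonated_s

    # speech, articulation = speech_rates("reading_task.wav")  # hypothetical file
    ```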

  4. Influence of Language Load on Speech Motor Skill in Children With Specific Language Impairment.

    Science.gov (United States)

    Saletta, Meredith; Goffman, Lisa; Ward, Caitlin; Oleson, Jacob

    2018-03-15

    Children with specific language impairment (SLI) show particular deficits in the generation of sequenced action: the quintessential procedural task. Practiced imitation of a sequence may become rote and require reduced procedural memory. This study explored whether speech motor deficits in children with SLI occur generally or only in conditions of high linguistic load, whether speech motor deficits diminish with practice, and whether it is beneficial to incorporate conditions of high load to understand speech production. Children with SLI and typical development participated in a syntactic priming task during which they generated sentences (high linguistic load) and, then, practiced repeating a sentence (low load) across 3 sessions. We assessed phonetic accuracy, speech movement variability, and duration. Children with SLI produced more variable articulatory movements than peers with typical development in the high load condition. The groups converged in the low load condition. Children with SLI continued to show increased articulatory stability over 3 practice sessions. Both groups produced generated sentences with increased duration and variability compared with repeated sentences. Linguistic demands influence speech motor production. Children with SLI show reduced speech motor performance in tasks that require language generation but not when task demands are reduced in rote practice.

  5. Decoding speech perception by native and non-native speakers using single-trial electrophysiological data.

    Directory of Open Access Journals (Sweden)

    Alex Brandmeyer

    Full Text Available Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: (1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? (2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary, or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
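
    The single-trial decoding step can be sketched with standard tools: flatten each trial's channel-by-time window into a feature vector and cross-validate a regularized linear classifier. The dimensions and the injected effect below are simulated placeholders, not the study's pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_times = 400, 32, 50

    X = rng.normal(size=(n_trials, n_channels, n_times))   # single-trial EEG
    y = rng.integers(0, 2, n_trials)                       # perceived category
    X[y == 1, :, 30:40] += 0.15                            # weak late ERP effect

    Xf = X.reshape(n_trials, -1)                           # trials x features
    clf = make_pipeline(StandardScaler(),
                        LogisticRegression(C=0.01, max_iter=1000))
    scores = cross_val_score(clf, Xf, y, cv=5)
    print(f"single-trial decoding accuracy: {scores.mean():.2f}")
    ```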

  6. Development and evolution of the vertebrate primary mouth

    Science.gov (United States)

    Soukup, Vladimír; Horácek, Ivan; Cerny, Robert

    2013-01-01

    gastrulation, which initiates the process and constrains possible evolutionary changes within this area; third, incipient structure of the stomodeal primordium at the anterior neural plate border, where the ectoderm component of the prospective primary mouth is formed; and fourth, the prime role of Pitx genes for establishment and later morphogenesis of oral region both in vertebrates and non-vertebrate chordates. PMID:22804777

  7. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  8. Emotion recognition from speech: tools and challenges

    Science.gov (United States)

    Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.

    2015-05-01

    Human emotion recognition from speech is studied frequently for its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigations into extracting meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduces the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts a score-based LDC classifier. We shall demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the huge gap between the accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular we shall argue that emotion recognition from speech should not be dealt with as a classification problem. We shall demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets, where the subjects attempt to suppress all but one emotion.
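
    The meta-feature step can be made concrete: compress a high-dimensional acoustic feature vector with PCA or a non-adaptive Gaussian random projection, then score with a linear discriminant classifier (LDC). Everything below (dimensions, six classes, injected structure) is illustrative, not the paper's data or exact scheme.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.random_projection import GaussianRandomProjection

    rng = np.random.default_rng(0)
    n, d = 600, 1500                        # utterances x raw acoustic features
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 6, n)               # six emotion classes (toy)
    X[np.arange(n), y * 10] += 1.5          # inject class-dependent structure

    for name, reducer in [("PCA", PCA(n_components=60)),
                          ("RP", GaussianRandomProjection(n_components=60,
                                                          random_state=0))]:
        model = make_pipeline(reducer, LinearDiscriminantAnalysis())
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name} meta-features + LDC: {acc:.2f}")
    ```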

  9. Word of Mouth Marketing in Mouth and Dental Health Centers towards Consumers

    Directory of Open Access Journals (Sweden)

    Aykut Ekiyor

    2014-09-01

    Full Text Available Influencing the shopping style of others by passing on experiences of goods purchased or services received is a way of behavior that has its roots in history. The main objective of this research is to analyze the effects of demographic factors, within the scope of word of mouth marketing, on the choice of mouth and dental health services. Consumers receiving service from mouth and dental health centers of the Turkish Republic Ministry of Health constitute the population of the research. The research was conducted in order to determine the mouth and dental health center selection of consumers within the scope of word of mouth marketing. The research was conducted in Ankara through simple random sampling, with a sample size of 400. With regard to word of mouth marketing, which was the subject of the third hypothesis of the study, analysis of the statistical relationship between mouth and dental health center preference and demographic factor groups determined that there is a meaningful difference in terms of age, level of education, level of income, and some dimensions of marital status, and that no meaningful difference was found in terms of gender. The study thus attempts to establish the importance of word of mouth marketing in healthcare services.

  10. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learner's rating on co-sign speech use and lipreading ability was correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Discrimination and preference of speech and non-speech sounds in autism patients

    Institute of Scientific and Technical Information of China (English)

    王崇颖; 江鸣山; 徐旸; 马斐然; 石锋

    2011-01-01

    Objective: To explore the discrimination and preference of speech and non-speech sounds in autism patients. Methods: Ten people with autism (5 children and 5 adults), diagnosed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), were selected from the database of the Nankai University Center for Behavioural Science. Together with 10 age-matched healthy controls, the people with autism were tested in three experiments on speech sounds, pure tones, and intonation, which were recorded and modified with Praat, a voice analysis software package. Their discrimination and preference judgments were collected orally, and exact probability values were calculated. Results: There were no significant differences in the discrimination of speech sounds, pure tones, or intonation between autism patients and controls (P > 0.05), while controls preferred speech and non-speech sounds with higher pitch than the autism group (e.g., -100Hz/+50Hz: 2 vs. 7, P < 0.05; 50Hz/250Hz: 4 vs. 10, P < 0.05) and the autism group preferred non-speech sounds with lower pitch (100Hz/250Hz: 6 vs. 3, P < 0.05). No significant difference in the preference for intonation between autism patients and controls (P > 0.05) was found. Conclusion: The results show that people with autism have impaired auditory processing of speech and non-speech sounds.

  12. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...-Speech Services for Individuals with Hearing and Speech Disabilities, Report and Order (Order), document...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  13. Non linear analyses of speech and prosody in Asperger's syndrome

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Bang, Dan; Weed, Ethan

    It is widely acknowledged that people on the ASD spectrum behave atypically in the way they modulate aspects of speech and voice, including pitch, fluency, and voice quality. ASD speech has been described at times as “odd”, “mechanical”, or “monotone”. However, it has proven difficult to quantify...... the results in a supervised machine-learning process to classify speech production as either belonging to the control or the AS group as well as to assess the severity of the disorder (as measured by Autism Spectrum Quotient), based solely on acoustic features....

  14. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve eCorbeil

    2013-06-01

    Full Text Available Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants’ attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children’s song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children’s song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing was the principal contributor to infant attention, regardless of age.

  15. Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study

    Science.gov (United States)

    Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle

    2012-01-01

    In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…

  16. Effects of Synthetic Speech Output on Requesting and Natural Speech Production in Children with Autism: A Preliminary Study

    Science.gov (United States)

    Schlosser, Ralf W.; Sigafoos, Jeff; Luiselli, James K.; Angermeier, Katie; Harasymowyz, Ulana; Schooley, Katherine; Belfiore, Phil J.

    2007-01-01

    Requesting is often taught as an initial target during augmentative and alternative communication intervention in children with autism. Speech-generating devices are purported to have advantages over non-electronic systems due to their synthetic speech output. On the other hand, it has been argued that speech output, being in the auditory…

  17. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop
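
    The three-factor learning rule at the heart of the model can be sketched compactly: spike pairings leave a decaying eligibility trace on each synapse, and a global dopamine pulse (standing in for the salience reward) converts traces into weight changes. The spike statistics and constants below are toy assumptions, not the published model's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_syn, n_steps, dt = 100, 1000, 1.0     # synapses; 1 ms time steps

    w = rng.uniform(0.2, 0.8, n_syn)        # synaptic weights
    elig = np.zeros(n_syn)                  # eligibility traces
    tau_e, lr = 200.0, 0.05                 # trace decay (ms), learning rate

    for step in range(n_steps):
        pre = rng.random(n_syn) < 0.02      # presynaptic spikes this step
        post = rng.random() < 0.05          # postsynaptic spike (toy, global)
        if post:
            elig[pre] += 1.0                # crude STDP tag for pre+post pairing
        elig *= np.exp(-dt / tau_e)         # traces decay between rewards
        # Dopamine pulse when the produced sound is judged salient (simulated).
        dopamine = 1.0 if rng.random() < 0.01 else 0.0
        w += lr * dopamine * elig           # reward gates the plasticity
        np.clip(w, 0.0, 1.0, out=w)

    print("mean weight after learning:", w.mean().round(3))
    ```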

  18. Strain Map of the Tongue in Normal and ALS Speech Patterns from Tagged and Diffusion MRI.

    Science.gov (United States)

    Xing, Fangxu; Prince, Jerry L; Stone, Maureen; Reese, Timothy G; Atassi, Nazem; Wedeen, Van J; El Fakhri, Georges; Woo, Jonghye

    2018-02-01

    Amyotrophic Lateral Sclerosis (ALS) is a neurological disease that causes death of neurons controlling muscle movements. Loss of speech and swallowing functions is a major impact due to degeneration of the tongue muscles. In speech studies using magnetic resonance (MR) techniques, diffusion tensor imaging (DTI) is used to capture internal tongue muscle fiber structures in three-dimensions (3D) in a non-invasive manner. Tagged magnetic resonance images (tMRI) are used to record tongue motion during speech. In this work, we aim to combine information obtained with both MR imaging techniques to compare the functionality characteristics of the tongue between normal and ALS subjects. We first extracted 3D motion of the tongue using tMRI from fourteen normal subjects in speech. The estimated motion sequences were then warped using diffeomorphic registration into the b0 spaces of the DTI data of two normal subjects and an ALS patient. We then constructed motion atlases by averaging all warped motion fields in each b0 space, and computed strain in the line of action along the muscle fiber directions provided by tractography. Strain in line with the fiber directions provides a quantitative map of the potential active region of the tongue during speech. Comparison between normal and ALS subjects explores the changing volume of compressing tongue tissues in speech facing the situation of muscle degradation. The proposed framework provides for the first time a dynamic map of contracting fibers in ALS speech patterns, and has the potential to provide more insight into the detrimental effects of ALS on speech.
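
    The central quantity, strain in the line of action of a muscle fiber, is the Green-Lagrange strain tensor projected onto the DTI fiber direction. A minimal single-voxel sketch, with an invented deformation gradient and fiber vector:

    ```python
    import numpy as np

    # Deformation gradient at one voxel, as estimated from the tagged-MRI
    # motion field (values invented for illustration).
    F = np.array([[1.05, 0.02, 0.00],
                  [0.01, 0.97, 0.03],
                  [0.00, 0.00, 1.01]])

    # Green-Lagrange strain tensor: E = (F^T F - I) / 2.
    E = 0.5 * (F.T @ F - np.eye(3))

    # Unit fiber direction from DTI tractography (illustrative).
    f = np.array([0.8, 0.6, 0.0])
    f /= np.linalg.norm(f)

    # Strain along the fiber: positive = stretch, negative = compression.
    print(f"fiber-aligned strain: {f @ E @ f:+.4f}")
    ```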

  19. Asymmetry in infants’ selective attention to facial features during visual processing of infant-directed speech

    Directory of Open Access Journals (Sweden)

    Nicholas A Smith

    2013-09-01

    Full Text Available Two experiments used eye tracking to examine how infant and adult observers distribute their eye gaze on videos of a mother producing infant- and adult-directed speech. Both groups showed greater attention to the eyes than to the nose and mouth, as well as an asymmetrical focus on the talker’s right eye for infant-directed speech stimuli. Observers continued to look more at the talker’s apparent right eye when the video stimuli were mirror flipped, suggesting that the asymmetry reflects a perceptual processing bias rather than a stimulus artifact, which may be related to cerebral lateralization of emotion processing.

  20. The impact of threat and cognitive stress on speech motor control in people who stutter.

    Science.gov (United States)

    Lieshout, Pascal van; Ben-David, Boaz; Lipski, Melinda; Namasivayam, Aravind

    2014-06-01

    In the present study, an Emotional Stroop and Classical Stroop task were used to separate the effect of threat content and cognitive stress from the phonetic features of words on motor preparation and execution processes. A group of 10 people who stutter (PWS) and 10 matched people who do not stutter (PNS) repeated colour names for threat content words and neutral words, as well as for traditional Stroop stimuli. Data collection included speech acoustics and movement data from upper lip and lower lip using 3D EMA. PWS in both tasks were slower to respond and showed smaller upper lip movement ranges than PNS. For the Emotional Stroop task only, PWS were found to show larger inter-lip phase differences compared to PNS. General threat words were executed with faster lower lip movements (larger range and shorter duration) in both groups, but only PWS showed a change in upper lip movements. For stutter specific threat words, both groups showed a more variable lip coordination pattern, but only PWS showed a delay in reaction time compared to neutral words. Individual stuttered words showed no effects. Both groups showed a classical Stroop interference effect in reaction time but no changes in motor variables. This study shows differential motor responses in PWS compared to controls for specific threat words. Cognitive stress was not found to affect stuttering individuals differently than controls or that its impact spreads to motor execution processes. After reading this article, the reader will be able to: (1) discuss the importance of understanding how threat content influences speech motor control in people who stutter and non-stuttering speakers; (2) discuss the need to use tasks like the Emotional Stroop and Regular Stroop to separate phonetic (word-bound) based impact on fluency from other factors in people who stutter; and (3) describe the role of anxiety and cognitive stress on speech motor processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Foot-and-mouth disease

    DEFF Research Database (Denmark)

    Belsham, Graham; Charleston, Bryan; Jackson, Terry

    2009-01-01

    Foot-and-mouth disease is an economically important, highly contagious, disease of cloven-hoofed animals characterized by the appearance of vesicles (blisters) on the feet and in and around the mouth. The causative agent, foot-and-mouth disease virus, was the first mammalian virus to be discovered...

  2. Foot-and-Mouth Disease

    DEFF Research Database (Denmark)

    Belsham, Graham; Charleston, Bryan; Jackson, Terry

    2015-01-01

    Foot‐and‐mouth disease (FMD) is an economically important, highly contagious disease of cloven‐hoofed animals characterised by the appearance of vesicles (blisters) on the feet and in, and around, the mouth. The causative agent, foot‐and‐mouth disease virus (FMDV), was the first mammalian virus...

  3. Traffic conflict assessment for non-lane-based movements of motorcycles under congested conditions

    Directory of Open Access Journals (Sweden)

    Long Xuan Nguyen

    2014-03-01

    Full Text Available Traffic conflict under congested conditions is one of the main safety issues of motorcycle traffic in developing countries. Unlike cars, motorcycles often display non-lane-based movements such as swerving or oblique following of a lead vehicle when traffic becomes congested. Very few studies have quantitatively evaluated the effects of such non-lane-based movements on traffic conflict. Therefore, in this study we aim to develop an integrated model to assess the traffic conflict of motorcycles under congested conditions. The proposed model includes a concept of safety space to describe the non-lane-based movements unique to motorcycles, new features developed for traffic conflict assessment such as parameters of acceleration and deceleration, and the conditions for choosing a lead vehicle. Calibration data were extracted from video clips taken at two road segments in Ho Chi Minh City. A simulation based on the model was developed to verify the dynamic non-lane-based movements of motorcycles. Subsequently, the assessment of traffic conflict was validated by calculating the probability of sudden braking at each time interval according to the change in the density of motorcycle flow. Our findings underscore the fact that higher flow density may lead to conflicts associated with a greater probability of sudden braking. Three types of motorcycle traffic conflicts were confirmed, and the proportions of each type were calculated and discussed.

  4. The equilibrium point hypothesis and its application to speech motor control.

    Science.gov (United States)

    Perrier, P; Ostry, D J; Laboissière, R

    1996-04-01

    In this paper, we address a number of issues in speech research in the context of the equilibrium point hypothesis of motor control. The hypothesis suggests that movements arise from shifts in the equilibrium position of the limb or the speech articulator. The equilibrium is a consequence of the interaction of central neural commands, reflex mechanisms, muscle properties, and external loads, but it is under the control of central neural commands. These commands act to shift the equilibrium via centrally specified signals acting at the level of the motoneurone (MN) pool. In the context of a model of sagittal plane jaw and hyoid motion based on the lambda version of the equilibrium point hypothesis, we consider the implications of this hypothesis for the notion of articulatory targets. We suggest that simple linear control signals may underlie smooth articulatory trajectories. We explore as well the phenomenon of intraarticulator coarticulation in jaw movement. We suggest that even when no account is taken of upcoming context, that apparent anticipatory changes in movement amplitude and duration may arise due to dynamics. We also present a number of simulations that show in different ways how variability in measured kinematics can arise in spite of constant magnitude speech control signals.
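
    The basic claim, that simple linear control signals can yield smooth articulatory trajectories, is easy to illustrate with a toy second-order articulator whose equilibrium position is ramped linearly and then held; the dynamics fill in the rest. The stiffness, damping, and ramp values are arbitrary illustrations, and the full lambda model (threshold control at the MN pool) is deliberately collapsed into a single equilibrium target here.

    ```python
    import numpy as np

    dt, T = 0.001, 0.5                      # 1 ms steps, 500 ms of movement
    t = np.arange(0, T, dt)

    # Linear control signal: equilibrium ramps 0 -> 10 mm over 150 ms, then holds.
    lam = np.clip(t / 0.15, 0.0, 1.0) * 10.0

    k, b, m = 400.0, 30.0, 1.0              # stiffness, damping, mass (toy)
    x = np.zeros_like(t)                    # articulator position (mm)
    v = np.zeros_like(t)                    # velocity (mm/s)
    for i in range(1, len(t)):
        a = (k * (lam[i - 1] - x[i - 1]) - b * v[i - 1]) / m
        v[i] = v[i - 1] + a * dt
        x[i] = x[i - 1] + v[i] * dt

    print(f"final position {x[-1]:.2f} mm, peak velocity {v.max():.0f} mm/s "
          "from a piecewise-linear equilibrium shift")
    ```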

  5. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has given an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...

  6. Optimizing Automatic Speech Recognition for Low-Proficient Non-Native Speakers

    Directory of Open Access Journals (Sweden)

    Catia Cucchiarini

    2010-01-01

    Full Text Available Computer-Assisted Language Learning (CALL) applications for improving the oral skills of low-proficient learners have to cope with non-native speech that is particularly challenging. Since unconstrained non-native ASR is still problematic, a possible solution is to elicit constrained responses from the learners. In this paper, we describe experiments aimed at selecting utterances from lists of responses. The first experiment on utterance selection indicates that the decoding process can be improved by optimizing the language model and the acoustic models, thus reducing the utterance error rate from 29–26% to 10–8%. Since giving feedback on incorrectly recognized utterances is confusing, we verify the correctness of the utterance before providing feedback. The results of the second experiment on utterance verification indicate that combining duration-related features with a likelihood ratio (LR) yields an equal error rate (EER) of 10.3%, which is significantly better than the EER for the other measures in isolation.
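
    The EER criterion used here can be computed directly from the two score distributions by sweeping a decision threshold until the false-rejection and false-acceptance rates meet. A self-contained sketch with simulated verification scores (not the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Higher score should mean "utterance was recognized correctly".
    genuine = rng.normal(1.0, 0.8, 1000)     # correctly recognized utterances
    impostor = rng.normal(-1.0, 0.8, 1000)   # misrecognized utterances

    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < th).mean() for th in thresholds])    # false reject
    far = np.array([(impostor >= th).mean() for th in thresholds])  # false accept

    i = np.argmin(np.abs(frr - far))          # threshold where the rates cross
    print(f"EER = {(frr[i] + far[i]) / 2:.1%} at threshold {thresholds[i]:.2f}")
    ```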

  7. Speech-to-Speech Relay Service

    Science.gov (United States)

    Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  8. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech

  9. Evaluation of movements of lower limbs in non-professional ballet dancers: hip abduction and flexion

    Directory of Open Access Journals (Sweden)

    Valenti Erica E

    2011-08-01

    Full Text Available Abstract Background The literature indicated that the majority of professional ballet dancers present static and active dynamic range of motion differences between left and right lower limbs; however, no previous study has focused on this difference in non-professional ballet dancers. In this study we aimed to evaluate active movements of the hip in non-professional classical dancers. Methods We evaluated 10 non-professional ballet dancers (16-23 years old). We measured the active range of motion and flexibility with the bank of Wells test. We compared the active range of motion between the left and right sides (hip flexion and abduction) and performed correlations between active movements and flexibility. Results There was a small difference between the right and left sides of the hip in relation to the movements of flexion and abduction, which suggests the dominant side of the subjects; however, there was no statistical significance. The bank of Wells test revealed a statistical difference only between the 1st and the 3rd measurements. There was no correlation between the movements of the hip (abduction and flexion, right and left sides) and the three test measurements of the bank of Wells. Conclusion There is no imbalance between the sides of the hip with respect to active abduction and flexion movements in non-professional ballet dancers.

  11. The Influence of Electronic Word-of-Mouth on College Search and Choice

    Science.gov (United States)

    Lehmann, Whitney

    2017-01-01

    This study used an online questionnaire to survey first-time, non-transfer undergraduate freshmen students at the University of Miami to determine the perceived influence of electronic word-of-mouth (eWOM) on their college search and choice compared to that of traditional word-of-mouth (WOM). In addition, eWOM's influence was examined during the…

  12. Dry Mouth (Xerostomia)

    Science.gov (United States)

    Saliva, or spit, is made by the salivary ... help keep teeth strong and fight tooth decay. Dry mouth, also called xerostomia (ZEER-oh-STOH-mee-ah), ...

  13. Risk of equine infectious disease transmission by non-race horse movements in Japan.

    Science.gov (United States)

    Hayama, Yoko; Kobayashi, Sota; Nishida, Takeshi; Nishiguchi, Akiko; Tsutsui, Toshiyuki

    2010-07-01

    For determining surveillance programs or infectious disease countermeasures, risk evaluation approaches have been recently undertaken in the field of animal health. In the present study, to help establish efficient and effective surveillance and countermeasures for equine infectious diseases, we evaluated the potential risk of equine infectious disease transmission in non-race horses from the viewpoints of horse movements and health management practices by conducting a survey of non-race horse holdings. From the survey, the non-race horse population was classified into the following five sectors based on their purposes: the equestrian sector, private owner sector, exhibition sector, fattening sector and others. Our survey results showed that the equestrian and private owner sectors had the largest population sizes, and movements between and within these sectors occurred quite frequently, while there was little movement in the other sectors. Qualitative evaluation showed that the equestrian and private owner sectors had relatively high risks of equine infectious disease transmission through horse movements. Therefore, it would be effective to concentrate on these two sectors when implementing surveillance or preventative measures. Special priority should be given to the private owner sector because this sector has not implemented inspection and vaccination well compared with the equestrian sector, which possesses a high compliance rate for these practices. This qualitative risk evaluation focused on horse movements and health management practices could provide a basis for further risk evaluation to establish efficient and effective surveillance and countermeasures for equine infectious diseases.

  14. Individual differences in language ability are related to variation in word recognition, not speech perception: Evidence from eye-movements

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2013-01-01

    Purpose This study examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological or lexical accounts by asking if lexical competition is differentially sensitive to fine-grained acoustic variation. Methods 74 adolescents with a range of language abilities (including 35 impaired) participated in an experiment based on McMurray, Tanenhaus and Aslin (2002). Participants heard tokens from six 9-step Voice Onset Time (VOT) continua spanning two words (beach/peach, beak/peak, etc.), while viewing a screen containing pictures of those words and two unrelated objects. Participants selected the referent while eye-movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Results Eye-movements were sensitive to within-category VOT differences: as VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Conclusions Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities. PMID:24687026

  15. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    Science.gov (United States)

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities.

  16. Quadcopter Control Using Speech Recognition

    Science.gov (United States)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of speech recognition systems that used two types of databases, an existing database and a newly created one, implemented into a quadcopter as motion control. The speech recognition system used the Mel frequency cepstral coefficient method (MFCC) for feature extraction and was trained using the recursive neural network method (RNN). The MFCC method is one of the feature extraction methods most used for speech recognition, with a success rate of 80%-95%. The existing database was used to measure the success rate of the RNN method. The new database was created using the Indonesian language, and its success rate was then compared with the results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain the characteristic values. The characteristic values were then fed to the trained RNN, whose output was a command. The command became a control input to the single-board computer (SBC), whose output was the movement of the quadcopter. On the SBC, we used the robot operating system (ROS) as the kernel (operating system).
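
    A rough sketch of the MFCC front end described above is given below, using the librosa library as a stand-in (the library choice, sampling rate, and number of coefficients are assumptions; the paper implemented this stage on a DSP module).

```python
import librosa  # assumed stand-in; the paper used a dedicated DSP module

def command_features(wav_path, n_mfcc=13):
    """Extract an MFCC sequence for a spoken command recording.
    Sampling rate and coefficient count are illustrative choices."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # (n_frames, n_mfcc): a sequence the RNN can consume

# These frame-by-frame characteristic values would then be fed to the trained
# network, whose output command ("up", "land", ...) becomes a control input
# published to the quadcopter via ROS on the single-board computer.
```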

  17. Non-intrusive speech quality assessment in simplified e-model

    OpenAIRE

    Vozňák, Miroslav

    2012-01-01

    The E-model brings a modern approach to the computation of estimated quality, allowing for easy implementation. One of its advantages is that it can be applied in real time. The method is based on a mathematical computation model evaluating transmission path impairments influencing the speech signal, especially delays and packet losses. These parameters, common in an IP network, can affect speech quality dramatically. The paper deals with a proposal for a simplified E-model and its pr...
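
    For orientation, the simplified E-model reduces to an R-factor computed from delay and packet loss, which is then mapped to a mean opinion score (MOS). The sketch below follows the commonly cited ITU-T G.107-style simplification; the coefficients and the codec robustness factor Bpl are standard illustrative values and may differ from the simplification proposed in the paper.

```python
def r_factor(delay_ms, packet_loss_pct, ie=0.0, bpl=25.1):
    """Simplified E-model sketch: start from the default basic quality
    (R0 - Is ~ 93.2) and subtract delay and loss impairments. Bpl is an
    assumed codec packet-loss robustness factor."""
    id_ = 0.024 * delay_ms                     # delay impairment
    if delay_ms > 177.3:
        id_ += 0.11 * (delay_ms - 177.3)       # extra penalty for long delay
    ie_eff = ie + (95.0 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
    return 93.2 - id_ - ie_eff

def mos(r):
    """Map the R-factor to a mean opinion score in [1, 4.5]."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

r = r_factor(delay_ms=150.0, packet_loss_pct=1.0)  # typical VoIP values
print(f"R = {r:.1f}, MOS = {mos(r):.2f}")
```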

  18. Speech masking and cancelling and voice obscuration

    Science.gov (United States)

    Holzrichter, John F.

    2013-09-10

    A non-acoustic sensor is used to measure a user's speech and then broadcasts an obscuring acoustic signal, diminishing the user's vocal acoustic output intensity and/or distorting the voice sounds, making them unintelligible to persons nearby. The non-acoustic sensor is positioned proximate or contacting a user's neck or head skin tissue for sensing speech production information.

  19. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels.

    Science.gov (United States)

    Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin

    2014-07-25

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as the tongue movement is observed, and the specific shape of the tongue and its position are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand some kind of visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. As the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments revealed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.

  20. Contribution of non-volatile and aroma fractions to in-mouth sensory properties of red wines: wine reconstitution strategies and sensory sorting task.

    Science.gov (United States)

    Sáenz-Navajas, María-Pilar; Campo, Eva; Avizcuri, José Miguel; Valentin, Dominique; Fernández-Zurbano, Purificación; Ferreira, Vicente

    2012-06-30

    This work explores to what extent the aroma or the non-volatile fractions of red wines are responsible for the overall flavor differences perceived in-mouth. For this purpose, 14 samples (4 commercial and 10 reconstituted wines) were sorted by a panel of 30 trained assessors according to their sensory in-mouth similarities. Reconstituted wines were prepared by adding the same volatile fraction (coming from a red wine) to the non-volatile fractions of 10 different red wines showing large differences in perceived astringency. Sorting was performed under three different conditions: (a) no aroma perception: nose-closed condition (NA); (b) retronasal aroma perception only (RA); and (c) allowing retro- and involuntary orthonasal aroma perception (ROA). Similarity estimates were derived from the sorting and submitted to multidimensional scaling (MDS) followed by hierarchical cluster analysis (HCA). Results have clearly shown that, globally, aroma perception is not the major driver of the in-mouth sensory perception of red wine, which is undoubtedly primarily driven by the perception of astringency and by the chemical compounds causing it, particularly protein-precipitable proanthocyanidins (PAs). However, aroma perception plays a significant role in the perception of sweetness and bitterness. The impact of aroma seems to be more important whenever astringency, total polyphenol and protein-precipitable PA levels are smaller. Results also indicate that when a red-black fruit odor nuance is clearly perceived in conditions in which orthonasal odor perception is allowed, a strong reduction in astringency takes place. Such a red-black fruit odor nuance seems to be the result of a specific aroma release pattern arising from the interaction between aroma compounds and the non-volatile matrix. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Practical steps in the rehabilitation of children with speech and ...

    African Journals Online (AJOL)

    2015-06-25

    Jun 25, 2015 ... child's communication development. Method: ... child's speech and language development. What is .... movement of the bolus from the oral preparatory stage to the oral .... that music is essential to learning oral language.

  2. Early Speech Motor Development: Cognitive and Linguistic Considerations

    Science.gov (United States)

    Nip, Ignatius S. B.; Green, Jordan R.; Marx, David B.

    2009-01-01

    This longitudinal investigation examines developmental changes in orofacial movements occurring during the early stages of communication development. The goals were to identify developmental trends in early speech motor performance and to determine how these trends differ across orofacial behaviors thought to vary in cognitive and linguistic…

  3. Effects of lips and hands on auditory learning of second-language speech sounds.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D

    2010-04-01

    Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., /kato/ vs. /katoː/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

  4. THE USE OF EXPRESSIVE SPEECH ACTS IN HANNAH MONTANA SESSION 1

    Directory of Open Access Journals (Sweden)

    Nur Vita Handayani

    2015-07-01

    Full Text Available This study aims to describe the kinds and forms of expressive speech acts in Hannah Montana Session 1. It employs a descriptive qualitative method. The research object was the expressive speech act. The data source was utterances containing expressive speech acts in the film Hannah Montana Session 1. The researcher used the observation method and a noting technique in collecting the data. In analyzing the data, a descriptive qualitative method was used. The research findings show that there are ten kinds of expressive speech acts found in Hannah Montana Session 1, namely expressing apology, expressing thanks, expressing sympathy, expressing attitudes, expressing greeting, expressing wishes, expressing joy, expressing pain, expressing likes, and expressing dislikes. The forms of expressive speech acts are direct literal expressive speech acts, direct non-literal expressive speech acts, indirect literal expressive speech acts, and indirect non-literal expressive speech acts.

  5. Atypical speech lateralization in adults with developmental coordination disorder demonstrated using functional transcranial Doppler ultrasound.

    Science.gov (United States)

    Hodgson, Jessica C; Hudson, John M

    2017-03-01

    Research using clinical populations to explore the relationship between hemispheric speech lateralization and handedness has focused on individuals with speech and language disorders, such as dyslexia or specific language impairment (SLI). Such work reveals atypical patterns of cerebral lateralization and handedness in these groups compared to controls. There are few studies that examine this relationship in people with motor coordination impairments but without speech or reading deficits, which is a surprising omission given the prevalence of theories suggesting a common neural network underlying both functions. We use an emerging imaging technique in cognitive neuroscience, functional transcranial Doppler (fTCD) ultrasound, to assess whether individuals with developmental coordination disorder (DCD) display reduced left-hemisphere lateralization for speech production compared to control participants. Twelve adult control participants and 12 adults with DCD, but no other developmental/cognitive impairments, performed a word-generation task whilst undergoing fTCD imaging to establish a hemispheric lateralization index for speech production. All participants also completed an electronic peg-moving task to determine hand skill. As predicted, the DCD group showed a significantly reduced left lateralization pattern for the speech production task compared to controls. Performance on the motor skill task showed a clear preference for the dominant hand across both groups; however, the DCD group's mean movement times were significantly higher for the non-dominant hand. This is the first study of its kind to assess hand skill and speech lateralization in DCD. The results reveal a reduced leftwards asymmetry for speech and a slower motor performance. This fits alongside previous work showing atypical cerebral lateralization in DCD for other cognitive processes (e.g., executive function and short-term memory) and thus speaks to debates on theories of the links between motor

  6. Accelerometer-based automatic voice onset detection in speech mapping with navigated repetitive transcranial magnetic stimulation.

    Science.gov (United States)

    Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P

    2015-09-30

    The use of navigated repetitive transcranial magnetic stimulation (rTMS) in mapping of speech-related brain areas has recently been shown to be useful in the preoperative workflow of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for additional development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no such easily available setup exists for speech mappings. We present an accelerometer-based setup for detection of vocalization-related larynx vibrations combined with an automatic routine for voice onset detection for rTMS speech mapping applying naming. The results produced by the automatic routine were compared with the manually reviewed video-recordings. The new method was applied in the routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, resulting in 96% sensitivity and 71% specificity. The majority (63%) of the misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. The no-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analysis of the rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results. Copyright © 2015 Elsevier B.V. All rights reserved.
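
    The record does not spell out the detection routine, but the general shape of such an algorithm, thresholding a smoothed envelope of the accelerometer trace against a pre-stimulus baseline, can be sketched as follows (the sampling rate, smoothing window, and threshold factor are illustrative assumptions).

```python
import numpy as np

def detect_voice_onset(acc, fs, baseline_s=0.2, k=5.0, smooth_ms=10.0):
    """Return the time (s) where the rectified, smoothed accelerometer
    signal first exceeds k times the pre-stimulus baseline level.
    Toy version of an automatic voice-onset routine."""
    win = max(1, int(fs * smooth_ms / 1000.0))
    env = np.convolve(np.abs(acc), np.ones(win) / win, mode="same")
    base = env[: int(fs * baseline_s)].mean() + 1e-12
    above = np.nonzero(env > k * base)[0]
    return above[0] / fs if above.size else None

fs = 2000                          # assumed accelerometer sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
acc = 0.01 * np.random.default_rng(2).standard_normal(t.size)
acc[int(0.45 * fs):] += 0.2 * np.sin(2 * np.pi * 120 * t[int(0.45 * fs):])
print(detect_voice_onset(acc, fs))  # ~0.45 s, the simulated vocalization onset
```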

  7. Musicians do not benefit from differences in fundamental frequency when listening to speech in competing speech backgrounds

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; Whiteford, Kelly L.; Oxenham, Andrew J.

    2017-01-01

    Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. However, it has been suggested that musicians may be able to use differences in fundamental frequency (F0) to better understand target speech in the presence of interfering talkers. Here we studied a relatively large (N=60) cohort of young adults, equally divided between non-musicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech were presented with either their natural F0 contours or on a monotone F0, and the F0 difference between the target and masker was systematically varied. As expected, speech intelligibility improved with increasing F0 difference between the target and the two-talker masker for both natural and monotone

  8. Amphioxus mouth after dorso-ventral inversion.

    Science.gov (United States)

    Kaji, Takao; Reimer, James D; Morov, Arseniy R; Kuratani, Shigeru; Yasui, Kinya

    2016-01-01

    Deuterostomes (animals with 'secondary mouths') are generally accepted to develop the mouth independently of the blastopore. However, it remains largely unknown whether mouths are homologous among all deuterostome groups. Unlike other bilaterians, in amphioxus the mouth initially opens on the left lateral side. This peculiar morphology has not been fully explained in the evolutionary developmental context. We studied the developmental process of the amphioxus mouth to understand whether amphioxus acquired a new mouth, and if so, how it is related to or differs from mouths in other deuterostomes. The left first somite in amphioxus produces a coelomic vesicle between the epidermis and pharynx that plays a crucial role in the mouth opening. The vesicle develops in association with the amphioxus-specific Hatschek nephridium, and first opens into the pharynx and then into the exterior as a mouth. This asymmetrical development of the anterior-most somites depends on the Nodal-Pitx signaling unit, and the perturbation of laterality-determining Nodal signaling led to the disappearance of the vesicle, producing a symmetric pair of anterior-most somites that resulted in larvae lacking orobranchial structures. The vesicle expressed bmp2/4, as seen in ambulacrarian coelomic pore-canals, and the mouth did not open when Bmp2/4 signaling was blocked. We conclude that the amphioxus mouth, which uniquely involves a mesodermal coelomic vesicle, shares its evolutionary origins with the ambulacrarian coelomic pore-canal. Our observations suggest that there are at least three types of mouths in deuterostomes, and that the new acquisition of chordate mouths was likely related to the dorso-ventral inversion that occurred in the last common ancestor of chordates.

  9. Correlations between speech disorders, mouth breathing, dentition and occlusion

    Directory of Open Access Journals (Sweden)

    Roberta Lopes de Castro Martinelli

    2011-02-01

    CONCLUSIONS: anterior lisping is correlated with dentition alterations and with Angle Class III; dark under-eye circles, lower-lip eversion, and parted lips at rest are adaptations present in Class II-1 and do not characterize mouth breathing in this group; accumulation of saliva at the labial commissures was the mouth-breathing sign that correlated with dentition alterations. PURPOSE: to check the correlations among speech disorders and mouth breathing symptoms with the type of dentition and occlusion, using video recordings. METHODS: a retrospective study with 397 patients, by studying the shooting script - ROF. Types of speech disorders and mouth breathing symptoms were assessed by Orofacial Motricity Specialist Speech and Language Pathologists and compared with the occlusal types proposed by Angle and with the dentition parameters, both evaluated by an Orthodontist. For the statistical analysis we used the program SPSS (Statistical Package for Social Sciences, version 13.0). For the Spearman correlation analysis, all assessment data were matched and analyzed. The adopted significance level was 5%. RESULTS: considering speech disorders and dentition and occlusion data, we noted parallelism between distortion and crossbite, imprecision and bone deviation of the lower midline, locking and overjet, locking and overbite, frontal lisp and Angle Class III malocclusion, frontal lisp and malocclusion, frontal lisp and open bite, frontal lisp and crossbite, and frontal lisp and lower midline deviation. We also noted correlated opposition between locking and open bite, locking and bone deviation of the lower midline, frontal lisp and Angle Class II-1 malocclusion, frontal lisp and overjet, and frontal lisp and overbite. Considering mouth breathing symptoms and dentition and occlusion data, we noted parallelism between the protrusion of the lower lip and overjet, accumulation of saliva on the labial commissures and crossbite, accumulation of saliva on

  10. LITERATURE REVIEW: WHICH IS MORE EFFECTIVE, TRADITIONAL WORD OF MOUTH OR ELECTRONIC WORD OF MOUTH?

    Directory of Open Access Journals (Sweden)

    Putu Adriani Prayustika

    2016-12-01

    Full Text Available Word of mouth has been recognized as one of the most effective communication strategies for conveying company information to consumers. Companies make use of word-of-mouth communication for marketing their products and services. However, conventional WOM communication is only effective within a limited range of social contacts. Advances in information technology and the emergence of online social networking sites have changed the way information is transmitted and have overcome the traditional limitations of WOM. Word-of-mouth communication that exploits this technology is often called electronic word of mouth (eWOM), and it makes use of new media such as social media. This paper reviews the literature of several previous studies comparing the effectiveness of traditional word of mouth and electronic word of mouth. The results show that, in general, given current technological developments, eWOM is far more effective than traditional WOM.

  11. Pain Part 8: Burning Mouth Syndrome.

    Science.gov (United States)

    Beneng, Kiran; Renton, Tara

    2016-04-01

    Burning mouth syndrome (BMS) is a rare but impactful condition, affecting mainly post-menopausal women, that results in constant pain and significant difficulty with eating, drinking and daily function. The aetiology of BMS remains an enigma. Recent evidence suggests it is likely to be neuropathic in origin, the cause of which remains unknown. There is no cure for this condition, and patients remain managed on a variety of neuropathic pain medications, salivary substitutes and other non-medical interventions that help the patient 'get through the day'. Some simple strategies can assist both clinician and patient to manage this debilitating condition. CPD/Clinical Relevance: The dental team will recognize patients presenting with burning mouth syndrome. They are difficult patients to manage and are often referred to secondary care and, ultimately, depend on their general medical practitioners for pain management.

  12. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  13. Development of Fetal Movement between 26 and 36 Weeks’ Gestation in Response to Vibro-acoustic Stimulation

    Directory of Open Access Journals (Sweden)

    Marybeth eGrant-Beuttler

    2011-12-01

    Full Text Available Background: Ultrasound observation of fetal movement has documented general trends in motor development and the fetal age at which a motor response to stimulation is observed. Evaluation of fetal movement quality, in addition to specific motor activity, may improve documentation of motor development and highlight specific motor responses to stimulation. Aims: The aim of this investigation was to assess fetal movement at 26 and 36 weeks gestation during three conditions: baseline, immediate response to vibro-acoustic stimulation (VAS), and post-response. Design: A prospective, longitudinal design was utilized. Subjects: Twelve normally developing fetuses, 8 females and 4 males, were examined with continuous ultrasound imaging. Outcome measures: The Fetal Neurobehavioral Coding System (FENS) was used to evaluate the quality of motor activity during 10-second epochs over the three conditions. Results: Seventy-five percent of the fetuses at the 26-week assessment and 100% of the fetuses at the 36-week assessment responded with movement immediately following stimulation. Significant differences in head, fetal breathing, general, limb, and mouthing movements were detected between the 26-week and 36-week assessments. Movement differences between conditions were detected in head, fetal breathing, limb, and mouthing movements. Conclusions: Smoother and more complex movement was observed with fetal maturation. Following VAS stimulation, an immediate increase in large, jerky movements suggests instability in fetal capabilities. Fetal movement quality changes over gestation may reflect sensorimotor synaptogenesis in the central nervous system, while observation of immature movement patterns following VAS stimulation may reflect movement pattern instability.

  14. Speed of human tooth movement in growers and non-growers: Selection of applied stress matters.

    Science.gov (United States)

    Iwasaki, L R; Liu, Y; Liu, H; Nickel, J C

    2017-06-01

    To test the hypothesis that the speed of tooth translation is not affected by stress magnitude and growth status. Advanced Education Orthodontic clinics at the Universities of Nebraska Medical Center and Missouri-Kansas City. Forty-six consenting subjects with orthodontic treatment plans involving maxillary first premolar extractions. This randomized split-mouth study used segmental mechanics with definitive posterior anchorage and individual vertical-loop maxillary canine retraction appliances and measured three-dimensional tooth movements. Height and cephalometric superimposition changes determined growing (G) and non-growing (NG) subjects. Subjects were appointed for 9-11 visits over 84 days for maxillary dental impressions to measure three-dimensional tooth movement and to ensure retraction forces were continuously applied via calibrated nitinol coil springs. Springs were custom selected to apply two different stresses of 4, 13, 26, 52 or 78 kPa to maxillary canines in each subject. Statistical analyses (α = 0.050) included ANOVA, effect size (partial η²) and Tukey's Honest Significant Difference (HSD) and two-group t tests. Distolateral translation speeds were 0.034±0.015, 0.047±0.019, 0.066±0.025, 0.068±0.016 and 0.079±0.030 mm/d for 4, 13, 26, 52 and 78 kPa, respectively. Stress significantly affected speed (partial η² = 0.376). Overall, more distopalatal rotation was shown by teeth moved by 78 kPa (18.03±9.50°) compared to other stresses (3.86±6.83°), and speeds were significantly higher (P=.001) in G (0.062±0.026 mm/d) than NG subjects (0.041±0.019 mm/d). Stress magnitude and growth status significantly affected the speed of tooth translation. Optimal applied stresses were 26-52 kPa, and overall speeds were 1.5-fold faster in G compared to NG subjects. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. The consummatory origins of visually guided reaching in human infants: a dynamic integration of whole-body and upper-limb movements.

    Science.gov (United States)

    Foroud, Afra; Whishaw, Ian Q

    2012-06-01

    Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper-limbs, and whole body are required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted of first, the advancement of the head with an opening mouth and then with the head, trunk and opening mouth. Eventually, the axial movements gave way to the refined action of one upper-limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Motor laterality as an indicator of speech laterality.

    Science.gov (United States)

    Flowers, Kenneth A; Hudson, John M

    2013-03-01

    The determination of speech laterality, especially where it is anomalous, is both a theoretical issue and a practical problem for brain surgery. Handedness is commonly thought to be related to speech representation, but exactly how is not clearly understood. This investigation analyzed handedness by preference rating and performance on a reliable task of motor laterality in 34 patients undergoing a Wada test, to see if they could provide an indicator of speech laterality. Hand usage preference ratings divided patients into left, right, and mixed in preference. Between-hand differences in movement time on a pegboard task determined motor laterality. Results were correlated (χ²) with speech representation as determined by a standard Wada test. It was found that patients whose between-hand difference in speed on the motor task was small or inconsistent were the ones whose Wada test speech representation was likely to be ambiguous or anomalous, whereas all those with a consistently large between-hand difference showed clear unilateral speech representation in the hemisphere controlling the better hand (χ² = 10.45, df = 1, p < .01). It is concluded that motor and speech laterality are related where they both involve a central control of motor output sequencing and that a measure of that aspect of the former will indicate the likely representation of the latter. A between-hand measure of motor laterality based on such a measure may indicate the possibility of anomalous speech representation. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  18. Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders

    CERN Document Server

    Baghai-Ravary, Ladan

    2013-01-01

    Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders provides a survey of methods designed to aid clinicians in the diagnosis and monitoring of speech disorders such as dysarthria and dyspraxia, with an emphasis on the signal processing techniques, statistical validity of the results presented in the literature, and the appropriateness of methods that do not require specialized equipment, rigorously controlled recording procedures or highly skilled personnel to interpret results. Such techniques offer the promise of a simple and cost-effective, yet objective, assessment of a range of medical conditions, which would be of great value to clinicians. The ideal scenario would begin with the collection of examples of the clients’ speech, either over the phone or using portable recording devices operated by non-specialist nursing staff. The recordings could then be analyzed initially to aid diagnosis of conditions, and subsequently to monitor the clients’ progress and res...

  19. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  20. Joint Dictionary Learning-Based Non-Negative Matrix Factorization for Voice Conversion to Improve Speech Intelligibility After Oral Surgery.

    Science.gov (United States)

    Fu, Szu-Wei; Li, Pei-Chun; Lai, Ying-Hui; Yang, Cheng-Chien; Hsieh, Li-Chun; Tsao, Yu

    2017-11-01

    Objective: This paper focuses on machine learning-based voice conversion (VC) techniques for improving the speech intelligibility of surgical patients who have had parts of their articulators removed. Because of the removal of parts of the articulator, a patient's speech may be distorted and difficult to understand. To overcome this problem, VC methods can be applied to convert the distorted speech such that it is clear and more intelligible. To design an effective VC method, two key points must be considered: 1) the amount of training data may be limited (because speaking for a long time is usually difficult for postoperative patients); 2) rapid conversion is desirable (for better communication). Methods: We propose a novel joint dictionary learning based non-negative matrix factorization (JD-NMF) algorithm. Compared to conventional VC techniques, JD-NMF can perform VC efficiently and effectively with only a small amount of training data. Results: The experimental results demonstrate that the proposed JD-NMF method not only achieves notably higher short-time objective intelligibility (STOI) scores (a standardized objective intelligibility evaluation metric) than those obtained using the original unconverted speech but is also significantly more efficient and effective than a conventional exemplar-based NMF VC method. Conclusion: The proposed JD-NMF method may outperform the state-of-the-art exemplar-based NMF VC method in terms of STOI scores under the desired scenario. Significance: We confirmed the advantages of the proposed joint training criterion for the NMF-based VC. Moreover, we verified that the proposed JD-NMF can effectively improve the speech intelligibility scores of oral surgery patients.
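
    To make the dictionary-based conversion idea concrete, here is a bare-bones sketch of NMF voice conversion with paired source and target dictionaries: activations estimated on the source spectrogram are reused with the target dictionary. This is a generic exemplar-style NMF VC sketch, not the authors' joint dictionary learning algorithm, and the dictionaries below are random placeholders.

```python
import numpy as np

def nmf_activations(V, W, n_iter=100):
    """Estimate non-negative activations H so that V ~ W @ H, with W fixed
    (standard multiplicative updates for Euclidean NMF)."""
    rng = np.random.default_rng(0)
    H = np.abs(rng.standard_normal((W.shape[1], V.shape[1])))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    return H

def convert(V_src, W_src, W_tgt):
    """Encode the source magnitude spectrogram on the source dictionary,
    then decode with the column-aligned target dictionary. In JD-NMF the
    two dictionaries are learned jointly from parallel data; here they
    are simply assumed to be given."""
    H = nmf_activations(V_src, W_src)
    return W_tgt @ H

# Shapes only, for illustration: 257 frequency bins, 40 paired atoms, 100 frames.
W_src = np.abs(np.random.default_rng(1).standard_normal((257, 40)))
W_tgt = np.abs(np.random.default_rng(2).standard_normal((257, 40)))
V_src = np.abs(np.random.default_rng(3).standard_normal((257, 100)))
print(convert(V_src, W_src, W_tgt).shape)  # (257, 100)
```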

  1. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  2. Speech to Text Software Evaluation Report

    CERN Document Server

    Martins Santo, Ana Luisa

    2017-01-01

    This document compares the out-of-box performance of three commercially available speech recognition software packages: Vocapia VoxSigma™, Google Cloud Speech, and Limecraft Transcriber. A set of evaluation criteria and test methods for speech recognition software is defined. The evaluation of the software in noisy environments is also included for testing purposes. Recognition accuracy was compared across noisy environments and languages. Testing in an "ideal" non-noisy environment (a quiet room) was also performed for comparison.
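
    Recognition accuracy in such comparisons is conventionally reported as word error rate (WER); the record does not state the exact metric used, so the following Levenshtein-based WER computation is offered only as the standard formulation.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(1, len(ref))

print(word_error_rate("the quiet room test", "the quite room test"))  # 0.25
```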

  3. Mobile communication jacket for people with severe speech impairment.

    Science.gov (United States)

    Lampe, Renée; Blumenstein, Tobias; Turova, Varvara; Alves-Pinto, Ana

    2018-04-01

    Cerebral palsy is a movement disorder caused by damage to motor control areas of the developing brain during early childhood. Motor disorders can also affect the ability to produce clear speech and to communicate. The aim of this study was to develop and test a prototype of an assistive tool with an embedded mobile communication device to support patients with severe speech impairments. A prototype was developed by equipping a cycling jacket with a display, a small keyboard, an LED and an alarm system, all controlled by a microcontroller. Functionality of the prototype was tested in six participants (aged 7-20 years) with cerebral palsy and global developmental disorder and three healthy persons. A patient questionnaire consisting of seven items was used as an evaluation tool. A working prototype of the communication jacket was developed and tested. The questionnaire elicited positive responses from participants. Improvements to correct the weaknesses revealed were proposed. Enhancements such as voice output of pre-selected phrases and an enlarged display were implemented. Integration in a jacket makes the system mobile and continuously available to the user. The communication jacket may be of great benefit to patients with motor and speech impairments. Implications for Rehabilitation: The communication jacket developed can be easily used by people with movement and speech impairment. All technical components are integrated in a garment and do not have to be held with the hands or transported separately. The system is adaptable to individual use. Both expected and unexpected events can be dealt with, which contributes to the quality of life and self-fulfilment.

  4. Effect of working side interferences on mandibular movement in bruxers and non-bruxers.

    Science.gov (United States)

    Shiau, Y Y; Syu, J Z

    1995-02-01

    The effect of working interference on 13 bruxers and 14 non-bruxers was studied by applying a metal overlay on the buccal cusps of the adjacent upper premolar and molar. The pattern and velocity of cyclic movement during gum chewing before and after overlay insertion were observed. EMGs of the temporalis and masseter muscles were recorded bilaterally during the chewing movement. It was found that after insertion, one of the non-bruxers complained of pain in the muscles, while no such complaint was found in bruxers. The bruxing habit was reported to be reduced or eliminated in 44% of the bruxers, but no non-bruxers became bruxers. The closing velocity was more often decreased immediately after overlay insertion, and the closing path near the occlusal phase was significantly narrower, with patterns of over-extension and avoidance before reaching the occlusal phase. The delayed effects were a more vertically oriented chewing cycle without over-extended closing movement, and an unretarded chewing velocity. It was concluded that within the experimental period a working-side interference was tolerable for most of the subjects studied, with or without a bruxing habit.

  5. Establishing a basic speech repertoire without using NSOME: means, motive, and opportunity.

    Science.gov (United States)

    Davis, Barbara; Velleman, Shelley

    2008-11-01

    Children who are performing at a prelinguistic level of vocal communication present unique issues related to successful intervention relative to the general population of children with speech disorders. These children do not consistently use meaning-based vocalizations to communicate with those around them. General goals for this group of children include stimulating more mature vocalization types and connecting these vocalizations to meanings that can be used to communicate consistently with persons in their environment. We propose a means, motive, and opportunity conceptual framework for assessing and intervening with these children. This framework is centered on stimulation of meaningful vocalizations for functional communication. It is based on a broad body of literature describing the nature of early language development. In contrast, nonspeech oral motor exercise (NSOME) protocols require decontextualized practice of repetitive nonspeech movements that are not related to functional communication with respect to means, motive, or opportunity for communicating. Successful intervention with NSOME activities requires adoption of the concept that the child, operating at a prelinguistic communication level, will generalize from repetitive nonspeech movements that are not intended to communicate with anyone to speech-based movements that will be intelligible enough to allow responsiveness to the child's wants and needs from people in the environment. No evidence from the research literature on the course of speech and language acquisition suggests that this conceptualization is valid.

  6. In good company? Perception of movement synchrony of a non-anthropomorphic robot.

    Science.gov (United States)

    Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin

    2015-01-01

    Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot.

  8. Reduction of Non-stationary Noise using a Non-negative Latent Variable Decomposition

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Larsen, Jan

    2008-01-01

    We present a method for suppression of non-stationary noise in single-channel recordings of speech. The method is based on a non-negative latent variable decomposition model for the speech and noise signals, learned directly from a noisy mixture. In non-speech regions an overcomplete basis is learned for the noise, which is then used to jointly estimate the speech and the noise from the mixture. We compare the method to the classical spectral subtraction approach, where the noise spectrum is estimated as the average over non-speech frames. The proposed method significantly outperforms the spectral subtraction approach...
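
    The classical baseline mentioned in this record is simple to state concretely. Below is a minimal sketch of magnitude spectral subtraction in Python, assuming a mono recording whose first few STFT frames are non-speech; the function name and parameter values are illustrative, not taken from the paper.

        import numpy as np
        from scipy.signal import stft, istft

        def spectral_subtraction(noisy, fs=16000, nperseg=512, noise_frames=10):
            """Classical spectral subtraction: the noise magnitude spectrum is
            the average over the first `noise_frames` STFT frames, assumed to
            be non-speech."""
            _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
            mag, phase = np.abs(X), np.angle(X)
            noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
            # Floor at a fraction of the noise estimate so magnitudes never go
            # negative (the residual artifact is the familiar "musical noise").
            clean_mag = np.maximum(mag - noise_mag, 0.05 * noise_mag)
            _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
            return enhanced

    The latent-variable method above improves on this baseline by learning a basis for the noise from the same non-speech regions, so the noise model can vary over time instead of being a single average spectrum.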

  9. Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.

    Science.gov (United States)

    Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana

    2017-08-09

    The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech
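
    The Concurrent Hierarchical Tracking approach rests on each linguistic level occurring at its own fixed rate, so each level's neural signature can be read off the EEG spectrum. A minimal sketch of that read-out step, assuming hypothetical tagging rates (e.g., syllables at 4 Hz, words at 2 Hz, phrases at 1 Hz, sentences at 0.5 Hz; the abstract does not give the actual rates) and a single-channel EEG array:

        import numpy as np
        from scipy.signal import welch

        def tagged_power(eeg, fs, rates=(4.0, 2.0, 1.0, 0.5)):
            """Spectral power at each frequency-tagged linguistic rate."""
            # Long windows (8 s -> 0.125 Hz resolution) are needed to resolve
            # the slow phrase and sentence rates.
            freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 8))
            return {r: psd[np.argmin(np.abs(freqs - r))] for r in rates}

    On this logic, intelligible speech heard while awake should show peaks at all four rates, whereas the findings above imply that during sleep only the acoustic, syllable-rate peak survives.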

  10. Serotype Specificity of Antibodies against Foot-and-Mouth Disease Virus in Cattle in Selected Districts in Uganda

    DEFF Research Database (Denmark)

    Mwiine, F.N.; Ayebazibwe, C.; Olaho-Mukani, W.

    2010-01-01

    Uganda had an unusually large number of foot-and-mouth disease (FMD) outbreaks in 2006, and all clinical reports were in cattle. A serological investigation was carried out to confirm circulating antibodies against foot-and-mouth disease virus (FMDV) by ELISA for antibodies against non-structural proteins and structural proteins. Three hundred and forty-nine cattle sera were collected from seven districts in Uganda, and 65% of these were found positive for antibodies against the non-structural proteins of FMDV. A subset of these samples was analysed for serotype specificity of the identified antibodies. High prevalences of antibodies against non-structural proteins and structural proteins of FMDV serotype O were demonstrated in herds with typical visible clinical signs of FMD, while prevalences were low in herds without clinical signs of FMD. Antibody titres were higher against serotype O than...

  11. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Understanding the power of word-of-mouth.

    Directory of Open Access Journals (Sweden)

    Suzana Z. Gildin

    2003-06-01

    Word-of-mouth has been considered one of the most powerful forms of communication in the market today. Understanding what makes word-of-mouth such a persuasive and powerful communication tool is important to organizations that intend to build strong relationships with consumers. For this reason, organizations are concerned about promoting positive word-of-mouth and retarding negative word-of-mouth, which can be harmful to the image of the company or a brand. This work focuses on the major aspects involving word-of-mouth communication. Recommendations to generate positive word-of-mouth and retard negative word-of-mouth are also highlighted.

  13. Characterization of Burning Mouth Syndrome in Patients with Parkinson's Disease.

    Science.gov (United States)

    Bonenfant, David; Rompré, Pierre H; Rei, Nathalie; Jodoin, Nicolas; Soland, Valerie Lynn; Rey, Veronica; Brefel-Courbon, Christine; Ory-Magne, Fabienne; Rascol, Olivier; Blanchet, Pierre J

    2016-01-01

    To determine the prevalence and characteristics of burning mouth syndrome (BMS) in a Parkinson's disease (PD) population through a self-administered, custom-made survey. A total of 218 surveys were collected during regular outpatient visits at two Movement Disorders Clinics in Montreal (Canada) and Toulouse (France) to gather information about pain experience, PD-related symptoms, and oral and general health. A neurologist confirmed the diagnosis of PD, drug treatment, Hoehn-Yahr stage, and Schwab & England Activity of Daily Living score. Data between groups were compared using the independent samples Mann-Whitney U test and two-sided exact Fisher test. Data from 203 surveys were analyzed. BMS was reported by eight subjects (seven females and one male), resulting in a prevalence of 4.0% (95% confidence interval [CI] = 2.1-7.8). Five participants with chronic nonburning oral pain were excluded. PD severity and levodopa equivalent daily dose did not differ between non-BMS and BMS participants. The mean poor oral health index was higher in BMS compared to non-BMS subjects (49.0 vs 32.2 points). This survey yielded a low prevalence of BMS in PD patients, indicating no strong link between the two conditions. An augmenting effect such as that resulting from drug treatment in restless legs syndrome or sensory neuropathy cannot be excluded.

  14. Speech enhancement on smartphone voice recording

    International Nuclear Information System (INIS)

    Atmaja, Bagus Tris; Farid, Mifta Nur; Arifianto, Dhany

    2016-01-01

    Speech enhancement is a challenging task in audio signal processing: the quality of the target speech signal must be enhanced while other noise is suppressed. Speech enhancement algorithms have developed rapidly, from spectral subtraction and Wiener filtering through the spectral amplitude MMSE estimator to Non-negative Matrix Factorization (NMF). The smartphone, a revolutionary device, is now used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which is why the NMF algorithm is widely used for this kind of speech enhancement. This paper evaluates speech enhancement on smartphone voice recordings using the algorithms mentioned previously. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others, as evaluated by spectrogram inspection and PESQ scores.
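
    The supervised Kullback-Leibler NMF extension can be sketched compactly. The following is an illustrative implementation under assumptions of my own (multiplicative updates, bases trained offline on clean speech and on noise, a Wiener-style soft mask); the paper's exact training setup is not given in the abstract, and all names are hypothetical.

        import numpy as np

        def nmf_kl(V, rank, n_iter=200, W=None, seed=0):
            """KL-divergence NMF via multiplicative updates: V ~ W @ H.
            If W is supplied it is held fixed (supervised case): only H is fit."""
            eps = 1e-10
            rng = np.random.default_rng(seed)
            fixed = W is not None
            if not fixed:
                W = rng.random((V.shape[0], rank)) + eps
            H = rng.random((W.shape[1], V.shape[1])) + eps
            for _ in range(n_iter):
                H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
                if not fixed:
                    W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
            return W, H

        def separate(V_mix, V_speech_train, V_noise_train, r_s=32, r_n=16):
            """Supervised separation on magnitude spectrograms (e.g. from an STFT):
            learn speech and noise bases offline, concatenate them, hold them
            fixed while fitting activations on the mixture, then mask."""
            W_s, _ = nmf_kl(V_speech_train, r_s)
            W_n, _ = nmf_kl(V_noise_train, r_n)
            W = np.hstack([W_s, W_n])
            _, H = nmf_kl(V_mix, r_s + r_n, W=W)
            speech_part = W_s @ H[:r_s]            # speech-only reconstruction
            mask = speech_part / (W @ H + 1e-10)   # Wiener-style soft mask
            return mask * V_mix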

  15. Field investigation of Foot and Mouth Disease (FMD) virus infection ...

    African Journals Online (AJOL)

    Prof. Ogunji

    Foot and Mouth Disease Virus (FMDV) is a non-enveloped, single-stranded RNA virus ... continents of Asia, Africa, and some regions in South America ... FCT = Federal Capital Territory; NE = North East; NC = North Central; NW = ...

  16. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  17. Influence of musical training on understanding voiced and whispered speech in noise.

    Science.gov (United States)

    Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J

    2014-01-01

    This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.

  18. Phonetic perspectives on modelling information in the speech signal

    Indian Academy of Sciences (India)

    Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge ... However, to develop systems that can handle ... Phonemes are not clearly identifiable in movement or in the acoustic speech signal ... while the speaker role-played the part of a mother at a child's athletics meeting where the ...

  20. Live Speech Driven Head-and-Eye Motion Generators.

    Science.gov (United States)

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient-descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
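
    The blink component of this framework is easy to illustrate: involuntary blinks are treated as a renewal process whose inter-blink intervals follow a log-normal distribution. A toy sketch with made-up parameters (the paper's fitted values are not given in the abstract):

        import numpy as np

        def sample_blink_times(duration_s, mu=0.8, sigma=0.6, seed=0):
            """Draw involuntary blink onsets over `duration_s` seconds.
            Inter-blink intervals ~ LogNormal(mu, sigma); mu and sigma are
            parameters of the underlying normal and are chosen arbitrarily
            here (median interval = exp(mu), about 2.2 s)."""
            rng = np.random.default_rng(seed)
            times, t = [], 0.0
            while True:
                t += rng.lognormal(mean=mu, sigma=sigma)
                if t >= duration_s:
                    return np.array(times)
                times.append(t)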

  1. A pragmatic evidence-based clinical management algorithm for burning mouth syndrome.

    Science.gov (United States)

    Kim, Yohanan; Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C

    2018-04-01

    Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ± 16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p = 0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p = 0.001), with an odds ratio of 27.5 [3.1, 242.0]. We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words: Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment.
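
    The reported odds ratios can be checked directly from the abstract's percentages. A small sketch, assuming the counts implied by the text (non-algorithm: 18 of 26 did not continue care; algorithm: 6 of 21) and Woolf's logit-based confidence interval:

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """OR and 95% CI for the 2x2 table [[a, b], [c, d]] (Woolf's method)."""
            or_ = (a * d) / (b * c)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
            return or_, lo, hi

        # Non-algorithm group: 18/26 lost to follow-up; algorithm group: 6/21.
        print(odds_ratio_ci(18, 8, 6, 15))  # ~ (5.6, 1.6, 19.8), as reported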

  2. Burning mouth syndrome

    OpenAIRE

    Zakrzewska, Joanna; Buchanan, John A. G.

    2016-01-01

    Burning mouth syndrome is a debilitating medical condition affecting nearly 1.3 million Americans. Its common features include a burning painful sensation in the mouth, often associated with dysgeusia and xerostomia, despite normal salivation. Classically, symptoms are better in the morning, worsen during the day and typically subside at night. Its etiology is largely multifactorial, and associated medical conditions may include gastrointestinal, urogenital, psychiatric, neurologic and met...

  3. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable, converging to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
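
    The Kalman filtering stage is standard once the autoregressive parameters are in hand. A minimal sketch in which, for illustration, the AR coefficients come from an ordinary least-squares fit rather than the paper's recurrent network, and the process- and measurement-noise variances q and r are given rather than estimated:

        import numpy as np

        def fit_ar(x, p=10):
            """Least-squares AR(p) fit: x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
            X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
            a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return a

        def kalman_enhance(y, a, q, r):
            """Filter a noisy speech signal y (float array) given AR coefficients
            a, process-noise variance q and measurement-noise variance r."""
            p = len(a)
            F = np.zeros((p, p))
            F[0], F[1:, :-1] = a, np.eye(p - 1)  # companion-form state transition
            x, P = np.zeros(p), np.eye(p)
            out = np.empty_like(y)
            for t, yt in enumerate(y):
                x = F @ x                        # predict
                P = F @ P @ F.T
                P[0, 0] += q                     # process noise drives state 0 only
                S = P[0, 0] + r                  # innovation variance (obs = state 0)
                K = P[:, 0] / S                  # Kalman gain
                x = x + K * (yt - x[0])          # update with the new sample
                P = P - np.outer(K, P[0])
                out[t] = x[0]
            return out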

  4. Gesture facilitates the syntactic analysis of speech

    Directory of Open Access Journals (Sweden)

    Henning eHolle

    2012-03-01

    Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the Event-Related Brain Potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are an integrated system. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures) influences language comprehension, but not a simple visual movement lacking such an intention.

  5. The Effects of Fluency Enhancing Conditions on Sensorimotor Control of Speech in Typically Fluent Speakers: An EEG Mu Rhythm Study

    Directory of Open Access Journals (Sweden)

    Tiffani Kittilstved

    2018-04-01

    Objective: To determine whether changes in sensorimotor control resulting from speaking conditions that induce fluency in people who stutter (PWS) can be measured using electroencephalographic (EEG) mu rhythms in neurotypical speakers. Methods: Non-stuttering (NS) adults spoke in one control condition (solo speaking) and four experimental conditions (choral speech, delayed auditory feedback (DAF), prolonged speech, and pseudostuttering). Independent component analysis (ICA) was used to identify sensorimotor μ components from EEG recordings. Time-frequency analyses measured μ-alpha (8–13 Hz) and μ-beta (15–25 Hz) event-related synchronization (ERS) and desynchronization (ERD) during each speech condition. Results: 19/24 participants contributed μ components. Relative to the control condition, the choral and DAF conditions elicited increases in μ-alpha ERD in the right hemisphere. In the pseudostuttering condition, increases in μ-beta ERD were observed in the left hemisphere. No differences were present between the prolonged speech and control conditions. Conclusions: Differences observed in the experimental conditions are thought to reflect sensorimotor control changes. Increases in right-hemisphere μ-alpha ERD likely reflect increased reliance on auditory information, including auditory feedback, during the choral and DAF conditions. In the left hemisphere, increases in μ-beta ERD during pseudostuttering may have resulted from the different movement characteristics of this task compared with the solo speaking task. Relationships to findings in stuttering are discussed. Significance: Changes in sensorimotor control related to feedforward and feedback control in fluency-enhancing speech manipulations can be measured using time-frequency decompositions of EEG μ rhythms in neurotypical speakers. This quiet, non-invasive, and temporally sensitive technique may be applied to learn more about normal sensorimotor control and fluency enhancement in PWS.
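
    The μ-rhythm ERD/ERS measure used here is the standard percentage change of band power relative to a pre-event baseline. A single-trial, single-channel sketch; the band edges follow the abstract, while the filter order, Hilbert-envelope method, and window placements are illustrative choices of mine:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def erd_percent(trial, fs, band=(8, 13), baseline=(0.0, 1.0), task=(2.0, 4.0)):
            """ERD/ERS in percent; negative values = desynchronization."""
            b, a = butter(4, band, btype="bandpass", fs=fs)
            power = np.abs(hilbert(filtfilt(b, a, trial))) ** 2

            def mean_power(win):
                i0, i1 = int(win[0] * fs), int(win[1] * fs)
                return power[i0:i1].mean()

            ref = mean_power(baseline)
            return 100.0 * (mean_power(task) - ref) / ref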

  6. Acquirement and enhancement of remote speech signals

    Science.gov (United States)

    Lü, Tao; Guo, Jin; Zhang, He-yong; Yan, Chun-hui; Wang, Can-jin

    2017-07-01

    To address the challenges of non-cooperative and remote acoustic detection, an all-fiber laser Doppler vibrometer (LDV) is established. The all-fiber LDV system offers the advantages of smaller size, lightweight design and robust structure, hence it is a better fit for remote speech detection. In order to improve the performance and the efficiency of the LDV for long-range hearing, speech enhancement based on the optimally modified log-spectral amplitude (OM-LSA) algorithm is used. The experimental results show that comprehensible speech signals within a range of 150 m can be obtained by the proposed LDV. The signal-to-noise ratio (SNR) and mean opinion score (MOS) of the LDV speech signal can be increased by 100% and 27%, respectively, by using the speech enhancement technology. This all-fiber LDV, combined with the speech enhancement technology, can meet practical demands in engineering.

  7. Technologies for the Study of Speech: Review and an Application

    Science.gov (United States)

    Babatsouli, Elena

    2015-01-01

    Technologies used for the study of speech are classified here into non-intrusive and intrusive. The paper informs on current non-intrusive technologies that are used for linguistic investigations of the speech signal, both phonological and phonetic. Providing a point of reference, the review covers existing technological advances in language…

  8. Brainstem Circuits that Control Mastication: Do They Have Anything to Say during Speech?

    Science.gov (United States)

    Lund, James P.; Kolta, Arlette

    2006-01-01

    Mastication results from the interaction of an intrinsic rhythmical neural pattern and sensory feedback from the mouth, muscles and joints. The pattern is matched to the physical characteristics of food, but also varies with age. There are large differences in masticatory movements among subjects. The intrinsic rhythmical pattern is generated by…

  9. Dry mouth and older people.

    Science.gov (United States)

    Thomson, W M

    2015-03-01

    Dry mouth is more common among older people than in any other age group. Appropriate definition and accurate measurement of dry mouth is critical for better understanding, monitoring and treatment of the condition. Xerostomia is the symptom(s) of dry mouth; it can be measured using methods ranging from single questions to multi-item summated rating scales. Low salivary flow (known as salivary gland hypofunction, or SGH) must be determined by measuring that flow. The relationship between SGH and xerostomia is not straightforward, but both conditions are common among older people, and they affect sufferers' day-to-day lives in important ways. The major risk factor for dry mouth is the taking of particular medications, and older people take more of those than any other age group, not only for symptomatic relief of various age-associated chronic diseases, but also in order to reduce the likelihood of complications which may arise from those conditions. The greater the number taken, the greater the associated anticholinergic burden, and the more likely it is that the individual will suffer from dry mouth. Since treating dry mouth is such a challenge for clinicians, there is a need for dentists, doctors and pharmacists to work together to prevent it occurring. © 2015 Australian Dental Association.

  10. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  11. Prevalence of malocclusion among mouth breathing children: do expectations meet reality?

    Science.gov (United States)

    Souki, Bernardo Q; Pimenta, Giovana B; Souki, Marcelo Q; Franco, Leticia P; Becker, Helena M G; Pinto, Jorge A

    2009-05-01

    The aim of this study was to report epidemiological data on the prevalence of malocclusion among a group of children, consecutively admitted at a referral mouth breathing otorhinolaryngological (ENT) center. We assessed the association between the severity of the obstruction by adenoids/tonsils hyperplasia or the presence of allergic rhinitis and the prevalence of class II malocclusion, anterior open bite and posterior crossbite. Cross-sectional, descriptive study, carried out at an Outpatient Clinic for Mouth-Breathers. Dental inter-arch relationship and nasal obstructive variables were diagnosed and the appropriate cross-tabulations were done. Four hundred and one patients were included. Mean age was 6 years and 6 months (S.D.: 2 years and 7 months), ranging from 2 to 12 years. All subjects were evaluated by otorhinolaryngologists to confirm mouth breathing. Adenoid/tonsil obstruction was detected in 71.8% of this sample, regardless of the presence of rhinitis. Allergic rhinitis alone was found in 18.7% of the children. Non-obstructive mouth breathing was diagnosed in 9.5% of this sample. Posterior crossbite was detected in almost 30% of the children during primary and mixed dentitions and 48% in permanent dentition. During mixed and permanent dentitions, anterior open bite and class II malocclusion were highly prevalent. More than 50% of the mouth breathing children carried a normal inter-arch relationship in the sagittal, transversal and vertical planes. Univariate analysis showed no significant association between the type of the obstruction (adenoids/tonsils obstructive hyperplasia or the presence of allergic rhinitis) and malocclusions (class II, anterior open bite and posterior crossbite). The prevalence of posterior crossbite is higher in mouth breathing children than in the general population. During mixed and permanent dentitions, anterior open bite and class II malocclusion were more likely to be present in mouth breathers. Although more children showed

  12. The syntactic organization of pasta-eating and the structure of reach movements in the head-fixed mouse.

    Science.gov (United States)

    Whishaw, Ian Q; Faraji, Jamshid; Kuntz, Jessica R; Mirza Agha, Behroo; Metz, Gerlinde A S; Mohajerani, Majid H

    2017-09-08

    Mice are adept in the use of their hands for activities such as feeding, which has led to their use in investigations of the neural basis of skilled movements. We describe the syntactic organization of pasta-eating and the structure of hand movements used for pasta manipulation by the head-fixed mouse. An ethogram of mice consuming pieces of spaghetti reveals that they eat in bite/chew bouts. A bout begins with pasta lifted to the mouth and then manipulated with hand movements into a preferred orientation for biting. Manipulation involves many hand release-reach movements, each with a similar structure. A hand is advanced from a digit closed and flexed (collect) position to a digit extended and open position (overgrasp) and then to a digit closed and flexed (grasp) position. Reach distance, hand shaping, and grasp patterns featuring precision grasps or whole hand grasps are related. To bite, mice display hand preference and asymmetric grasps; one hand (guide grasp) directs food into the mouth and the other stabilizes the pasta for biting. When chewing after biting, the hands hold the pasta in a symmetric resting position. Pasta-eating is organized and features structured hand movements and so lends itself to the neural investigation of skilled movements.

  13. Optimizing acoustical conditions for speech intelligibility in classrooms

    Science.gov (United States)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with
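
    The quantity being optimized here, reverberation time, can be estimated at design time with Sabine's formula, RT60 = 0.161 V / A, where V is the room volume in cubic metres and A the total absorption in square-metre sabins. A small sketch for checking whether a classroom design lands near the 0.4-0.8 s optima reported above; the dimensions and absorption coefficients are illustrative:

        def rt60_sabine(volume_m3, surfaces):
            """Sabine reverberation time; surfaces = [(area_m2, absorption_coeff), ...]."""
            absorption = sum(area * alpha for area, alpha in surfaces)
            return 0.161 * volume_m3 / absorption

        # A 7 x 9 x 3 m classroom with an absorptive ceiling and harder walls/floor.
        rt = rt60_sabine(7 * 9 * 3, [(7 * 9, 0.70),             # acoustic ceiling tile
                                     (7 * 9, 0.05),             # floor
                                     (2 * (7 + 9) * 3, 0.10)])  # walls
        print(f"RT60 = {rt:.2f} s")  # about 0.5 s with these coefficients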

  14. Passive Stretch Versus Active Stretch on Intervertebral Movement in Non - Specific Neck Pain

    International Nuclear Information System (INIS)

    Abd El - Aziz, A.H.; Amin, D.I.; Moustafa, I.

    2016-01-01

    Neck pain is one of the most common and painful musculoskeletal conditions. Point prevalence ranges from 6% to 22%, and up to 38% in the elderly population, while lifetime prevalence ranges from 14.2% to 71%. To date, no randomized study has addressed the controversy over the effects of active versus passive stretch on intervertebral movement. The purpose of the current study was to investigate the effect of passive and active stretch on intervertebral movement in non-specific neck pain. Material and methods: Forty-five subjects of both sexes, aged between 18 and 30 years, were assigned to three groups: group I (15) received active stretch, ultrasound and TENS; group II (15) received passive stretch, ultrasound and TENS; group III (15) received ultrasound and TENS. Radiological assessment was used to measure the rotational and translational components of intervertebral movement before and after treatment. Results: A MANOVA test was applied to the radiological assessments before and after treatment; there was a significant increase in intervertebral movement in group I (p = 0.0001). Conclusion: Active stretch had a greater effect in increasing intervertebral movement compared to passive stretch.

  15. Home range use and movement patterns of non-native feral goats in a tropical island montane dry landscape.

    Science.gov (United States)

    Chynoweth, Mark W; Lepczyk, Christopher A; Litton, Creighton M; Hess, Steven C; Kellner, James R; Cordell, Susan

    2015-01-01

    Advances in wildlife telemetry and remote sensing technology facilitate studies of broad-scale movements of ungulates in relation to phenological shifts in vegetation. In tropical island dry landscapes, home range use and movements of non-native feral goats (Capra hircus) are largely unknown, yet this information is important to help guide the conservation and restoration of some of the world's most critically endangered ecosystems. We hypothesized that feral goats would respond to resource pulses in vegetation by traveling to areas of recent green-up. To address this hypothesis, we fitted six male and seven female feral goats with Global Positioning System (GPS) collars equipped with an Argos satellite upload link to examine goat movements in relation to the plant phenology using the Normalized Difference Vegetation Index (NDVI). Movement patterns of 50% of males and 40% of females suggested conditional movement between non-overlapping home ranges throughout the year. A shift in NDVI values corresponded with movement between primary and secondary ranges of goats that exhibited long-distance movement, suggesting that vegetation phenology as captured by NDVI is a good indicator of the habitat and movement patterns of feral goats in tropical island dry landscapes. In the context of conservation and restoration of tropical island landscapes, the results of our study identify how non-native feral goats use resources across a broad landscape to sustain their populations and facilitate invasion of native plant communities.
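
    NDVI, the green-up index used to interpret the goat movements, is the band ratio NDVI = (NIR - Red) / (NIR + Red) computed from red and near-infrared reflectance. A sketch of how a resource pulse might be flagged from a per-range NDVI time series; the jump threshold is illustrative:

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index from NIR and red reflectance."""
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red + 1e-12)

        def green_up(ndvi_series, jump=0.1):
            """Indices where NDVI rises by more than `jump` between composites,
            a crude proxy for the resource pulses hypothesized above."""
            return np.flatnonzero(np.diff(ndvi_series) > jump) + 1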

  17. Governing GMOs: The (Counter)Movement for Mandatory and Voluntary Non-GMO Labels

    Directory of Open Access Journals (Sweden)

    Carmen Bain

    2014-12-01

    Since 2012 the anti-GMO (genetically modified organism) movement has gained significant grassroots momentum in its efforts to require mandatory GMO food labels through state-level ballot and legislative efforts. Major food and agriculture corporations are opposed to mandatory GMO labels and have successfully defeated most of these initiatives. Nevertheless, these battles have garnered significant media attention and re-energized the debate over GMO crops and foods. In this paper, we argue that one of the most significant outcomes of this fight is efforts by food retailers and value-based food companies to implement voluntary non-GMO labels and brands. We draw on the governance and political consumerism literature to explore (counter)movement efforts for mandatory labels and how these efforts are being institutionalized through private voluntary governance institutions. Our assessment is based on in-depth, semi-structured interviews with key informants from consumer and environmental organizations, agriculture and biotech companies, and government regulatory agencies, as well as a content analysis of food industry websites. A growing number of food retailers recognize the reputational and economic value that new niche markets for non-GMO foods can offer, while the anti-GMO movement views these efforts as a step in the direction of mandatory GMO labels. We conclude that voluntary labels may act to settle the labeling debate by mollifying agri-food industry concerns about mandatory labeling and meeting the desire of political consumers for greater choice and transparency, but without addressing the broader social and environmental sustainability concerns that drive the anti-GMO movement in the first place.

  18. From Human to Artificial Mouth, From Basics to Results

    International Nuclear Information System (INIS)

    Mielle, Patrick; Tarrega, Amparo; Salles, Christian; Gorria, Patrick; Liodenot, Jean Jacques; Liaboeuf, Joël; Andrejewski, Jean-Luc

    2009-01-01

    Sensory perception of flavor release during the eating of a piece of food is highly dependent upon mouth parameters. Major limitations have been reported during in-vivo flavor release studies, such as marked intra- and inter-individual variability. To overcome these limitations, a chewing simulator has been developed to mimic human mastication of food samples. The device faithfully reproduces most of the functions of the human mouth. The active cell comprises several mobile parts that can accurately reproduce shear and compression strengths and tongue functions in real time, according to data previously collected in-vivo. The mechanical functionalities of the system were validated using peanuts, with fair agreement with the human data. Flavor release can be monitored on-line using either API-MS or chemical sensors, or off-line using HPLC for non-volatile compounds. Couplings with API-MS detectors have shown differences in the kinetics of flavor release as a function of cheese composition. Data were also collected for the analysis of taste compounds released during human chewing, but these are not yet available for the Artificial Mouth.

  19. Oral and Hand Movement Speeds Are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    Science.gov (United States)

    Peter, Beate

    2012-01-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…

  20. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95), whereas relationships between inner speech and the language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  1. A Comparison of the Interpersonal Orientations of Speech Anxious and Non Speech Anxious Students.

    Science.gov (United States)

    Ambler, Bob

    A special section of a public speaking class at the University of Tennessee was developed in the spring of 1977 for speech anxious students. The course was designed to incorporate the basic spirit of the regular classes and to provide special training in techniques for reducing nervousness about speaking and in methods for coping with the…

  2. Dry mouth during cancer treatment

    Science.gov (United States)

    Some cancer treatments and medicines can cause dry mouth. Symptoms you ...

  3. Both lexical and non-lexical characters are processed during saccadic eye movements.

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    On average our eyes make 3-5 saccadic movements per second when we read, although their neural mechanism is still unclear. It is generally thought that saccades help redirect the retinal fovea to specific characters and words but that actual discrimination of information only occurs during periods of fixation. Indeed, it has been proposed that there is active and selective suppression of information processing during saccades to avoid the experience of blurring due to the high-speed movement. Here, using a paradigm in which a string of either lexical (Chinese) or non-lexical (alphabetic) characters is triggered by saccadic eye movements, we show that subjects can discriminate both while making saccadic eye movements. Moreover, discrimination accuracy is significantly better for characters scanned during the saccadic movement to a fixation point than for those not scanned beyond it. Our results show that character information can be processed during the saccade; therefore, saccades during reading not only function to redirect the fovea to fixate the next character or word but allow pre-processing of information from the ones adjacent to the fixation locations to help target the next most salient one. In this way saccades can not only promote continuity in reading words but also actively facilitate reading comprehension.

  4. Danish Report: Work Stream 3: Focus Group Interviews: Militants from the Other Side: Democratic Forces against Hate Speech in Denmark

    OpenAIRE

    Siim, Birte; Larsen, Jeppe Fuglsang; Meret, Susi

    2014-01-01

    The purpose of this national report is to analyze the role of social movements/organizations/initiatives in the struggle against racism, discrimination, hate speech and hateful behavior in Denmark. The first part includes a brief summary of the Danish political landscape for the democratic antibodies. This is followed by a mapping of voluntary movements/groups/organizations comparing the diverse policies and strategies towards racism, discrimination and hate speech and behavior, as well as the kind o...

  5. Relative Contributions of the Dorsal vs. Ventral Speech Streams to Speech Perception are Context Dependent: a lesion study

    Directory of Open Access Journals (Sweden)

    Corianne Rogalsky

    2014-04-01

    The neural basis of speech perception has been debated for over a century. While it is generally agreed that the superior temporal lobes are critical for the perceptual analysis of speech, a major current topic is whether the motor system contributes to speech perception, with several conflicting findings attested. In a dorsal-ventral speech stream framework (Hickok & Poeppel 2007), this debate is essentially about the roles of the dorsal versus ventral speech processing streams. A major roadblock in characterizing the neuroanatomy of speech perception is task-specific effects. For example, much of the evidence for dorsal stream involvement comes from syllable discrimination type tasks, which have been found to behaviorally doubly dissociate from auditory comprehension tasks (Baker et al. 1981). Discrimination task deficits could be a result of difficulty perceiving the sounds themselves, which is the typical assumption, or a result of failures in temporary maintenance of the sensory traces, or in the comparison and/or decision process. Similar complications arise in perceiving sentences: the extent of inferior frontal (i.e., dorsal stream) activation during listening to sentences increases as a function of increased task demands (Love et al. 2006). Another complication is the stimulus: much evidence for dorsal stream involvement uses speech samples lacking semantic context (CVs, non-words). The present study addresses these issues in a large-scale lesion-symptom mapping study. 158 patients with focal cerebral lesions from the Multi-site Aphasia Research Consortium underwent a structural MRI or CT scan, as well as an extensive psycholinguistic battery. Voxel-based lesion-symptom mapping was used to compare the neuroanatomy involved in the following speech perception tasks with varying phonological, semantic, and task loads: (i) two discrimination tasks of syllables (non-words and words, respectively), and (ii) two auditory comprehension tasks...

  6. Imitation of contrastive lexical stress in children with speech delay

    Science.gov (United States)

    Vick, Jennell C.; Moore, Christopher A.

    2005-09-01

    This study examined the relationship between acoustic correlates of stress in trochaic (strong-weak), spondaic (strong-strong), and iambic (weak-strong) nonword bisyllables produced by children (30-50) with normal speech acquisition and children with speech delay. Ratios comparing the acoustic measures (vowel duration, rms, and f0) of the first syllable to the second syllable were calculated to evaluate the extent to which each phonetic parameter was used to mark stress. In addition, a calculation of the variability of jaw movement in each bisyllable was made. Finally, perceptual judgments of accuracy of stress production were made. Analysis of perceptual judgments indicated a robust difference between groups: While both groups of children produced errors in imitating the contrastive lexical stress models (~40%), the children with normal speech acquisition tended to produce trochaic forms in substitution for other stress types, whereas children with speech delay showed no preference for trochees. The relationship between segmental acoustic parameters, kinematic variability, and the ratings of stress by trained listeners will be presented.

  7. Octopuses use a human-like strategy to control precise point-to-point arm movements.

    Science.gov (United States)

    Sumbre, Germán; Fiorito, Graziano; Flash, Tamar; Hochner, Binyamin

    2006-04-18

    One of the key problems in motor control is mastering or reducing the number of degrees of freedom (DOFs) through coordination. This problem is especially prominent with hyper-redundant limbs such as the extremely flexible arm of the octopus. Several strategies for simplifying these control problems have been suggested for human point-to-point arm movements. Despite the evolutionary gap and morphological differences, humans and octopuses evolved similar strategies when fetching food to the mouth. To achieve this precise point-to-point task, octopus arms generate a quasi-articulated structure based on three dynamic joints. A rotational movement around these joints brings the object to the mouth. Here, we describe a peripheral neural mechanism: two waves of muscle activation propagate toward each other, and their collision point sets the medial-joint location. This is a remarkably simple mechanism for adjusting the length of the segments according to where the object is grasped. Furthermore, similar to certain human arm movements, kinematic invariants were observed at the joint level rather than at the end-effector level, suggesting intrinsic control coordination. The evolutionary convergence to similar geometrical and kinematic features suggests that a kinematically constrained articulated limb controlled at the level of joint space is the optimal solution for precise point-to-point movements.

  8. Evidence that non-dreamers do dream: a REM sleep behaviour disorder model.

    Science.gov (United States)

    Herlin, Bastien; Leu-Semenescu, Smaranda; Chaumereuil, Charlotte; Arnulf, Isabelle

    2015-12-01

    To determine whether non-dreamers do not produce dreams or do not recall them, subjects were identified with no dream recall with dreamlike behaviours during rapid eye movement sleep behaviour disorder, which is typically characterised by dream-enacting behaviours congruent with sleep mentation. All consecutive patients with idiopathic rapid eye movement sleep behaviour disorder or rapid eye movement sleep behaviour disorder associated with Parkinson's disease who underwent a video-polysomnography were interviewed regarding the presence or absence of dream recall, retrospectively or upon spontaneous arousals. The patients with no dream recall for at least 10 years, and never-ever recallers were compared with dream recallers with rapid eye movement sleep behaviour disorder regarding their clinical, cognitive and sleep features. Of the 289 patients with rapid eye movement sleep behaviour disorder, eight (2.8%) patients had no dream recall, including four (1.4%) patients who had never ever recalled dreams, and four patients who had no dream recall for 10-56 years. All non-recallers exhibited, daily or almost nightly, several complex, scenic and dreamlike behaviours and speeches, which were also observed during rapid eye movement sleep on video-polysomnography (arguing, fighting and speaking). They did not recall a dream following sudden awakenings from rapid eye movement sleep. These eight non-recallers with rapid eye movement sleep behaviour disorder did not differ in terms of cognition, clinical, treatment or sleep measures from the 17 dreamers with rapid eye movement sleep behaviour disorder matched for age, sex and disease. The scenic dreamlike behaviours reported and observed during rapid eye movement sleep in the rare non-recallers with rapid eye movement sleep behaviour disorder (even in the never-ever recallers) provide strong evidence that non-recallers produce dreams, but do not recall them. Rapid eye movement sleep behaviour disorder provides a new model to

  9. Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.

    Science.gov (United States)

    Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc

    2017-09-01

    Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
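
    The additive comparison described above can be sketched in a few lines of numpy: the ERP to AV stimuli is set against the sum of the unimodal A and V ERPs, and N1 latency and P2 amplitude are read off each. The waveforms below are synthetic stand-ins, and the sampling rate, epoch window, and N1 search window are assumptions:

        import numpy as np

        fs = 500                              # sampling rate (Hz), assumed
        t = np.arange(-0.1, 0.4, 1 / fs)      # epoch from -100 to 400 ms
        rng = np.random.default_rng(0)

        def fake_erp(n1_lat, p2_amp, n_trials=60):
            """Toy ERP: an N1 trough and a P2 peak plus trial noise."""
            n1 = -2.0 * np.exp(-((t - n1_lat) / 0.02) ** 2)
            p2 = p2_amp * np.exp(-((t - 0.2) / 0.03) ** 2)
            return (n1 + p2) + rng.normal(0, 0.5, (n_trials, t.size))

        erp_av = fake_erp(0.09, 1.2).mean(axis=0)   # AV: earlier N1, smaller P2
        erp_a = fake_erp(0.10, 1.0).mean(axis=0)    # auditory alone
        erp_v = fake_erp(0.10, 0.8).mean(axis=0)    # visual alone
        sum_av = erp_a + erp_v                      # the A + V prediction

        win = (t > 0.05) & (t < 0.15)               # N1 search window
        n1_av = t[win][np.argmin(erp_av[win])]
        n1_sum = t[win][np.argmin(sum_av[win])]
        print(f"N1 latency: AV {n1_av * 1e3:.0f} ms vs A+V {n1_sum * 1e3:.0f} ms")
        print(f"P2 amplitude: AV {erp_av.max():.2f} vs A+V {sum_av.max():.2f}")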

  10. Maximum opening of the mouth by mouth prop during dental procedures increases the risk of upper airway constriction

    Directory of Open Access Journals (Sweden)

    Hiroshi Ito

    2010-05-01

    Full Text Available Hiroshi Ito1, Hiroyoshi Kawaai1, Shinya Yamazaki1, Yosuke Suzuki2 (1Division of Systemic Management, Department of Oral Function, 2Division of Radiology and Diagnosis, Department of Medical Sciences, Ohu University, Post Graduate School of Dentistry, Koriyama City, Fukushima Prefecture, Japan). Abstract: From a retrospective evaluation of data on accidents and deaths during dental procedures, it has been shown that several patients who refused dental treatment died of asphyxia during dental procedures. We speculated that forcible maximum opening of the mouth using a mouth prop triggers this asphyxia by affecting the upper airway. Therefore, we assessed the morphological changes of the upper airway following maximal opening of the mouth. In 13 healthy adult volunteers, the sagittal diameter of the upper airway on a lateral cephalogram was measured under two conditions: closed mouth and maximally open mouth. The dyspnea in each state was evaluated by a visual analog scale. In one subject, a computed tomography (CT) scan was taken to assess the three-dimensional changes in the upper airway. A significant difference was detected in the mean sagittal diameter of the upper airway following use of the prop (closed mouth: 18.5 ± 3.8 mm, maximally open mouth: 10.4 ± 3.0 mm). All subjects reported upper airway constriction and significant dyspnea when their mouth was maximally open. Although the CT scan indicated upper airway constriction when the mouth was maximally open, muscular compensation was noted. Our results further indicate that maximal opening of the mouth narrows the upper airway diameter and leads to dyspnea. The use of a prop in a patient who has communication problems or poor neuromuscular function can lead to asphyxia. When a prop is used in dentistry for a patient who resists treatment, the respiratory condition should be monitored strictly, and it should be kept in mind that the “sniffing position” is effective for avoiding upper airway obstruction.

  11. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  12. Quantification and Systematic Characterization of Stuttering-Like Disfluencies in Acquired Apraxia of Speech.

    Science.gov (United States)

    Bailey, Dallin J; Blomgren, Michael; DeLong, Catharine; Berggren, Kiera; Wambaugh, Julie L

    2017-06-22

    The purpose of this article is to quantify and describe stuttering-like disfluencies in speakers with acquired apraxia of speech (AOS), utilizing the Lidcombe Behavioural Data Language (LBDL). Additional purposes include measuring test-retest reliability and examining the effect of speech sample type on disfluency rates. Two types of speech samples were elicited from 20 persons with AOS and aphasia: repetition of mono- and multisyllabic words from a protocol for assessing AOS (Duffy, 2013), and connected speech tasks (Nicholas & Brookshire, 1993). Sampling was repeated at 1 and 4 weeks following initial sampling. Stuttering-like disfluencies were coded using the LBDL, which is a taxonomy that focuses on motoric aspects of stuttering. Disfluency rates ranged from 0% to 13.1% for the connected speech task and from 0% to 17% for the word repetition task. There was no significant effect of speech sampling time on disfluency rate in the connected speech task, but there was a significant effect of time for the word repetition task. There was no significant effect of speech sample type. Speakers demonstrated both major types of stuttering-like disfluencies as categorized by the LBDL (fixed postures and repeated movements). Connected speech samples yielded more reliable tallies over repeated measurements. Suggestions are made for modifying the LBDL for use in AOS in order to further add to systematic descriptions of motoric disfluencies in this disorder.

  13. Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces.

    Directory of Open Access Journals (Sweden)

    Florent Bocquelet

    2016-11-01

    Full Text Available Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the positions of sensors glued on different speech articulators into acoustic parameters that are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained, as assessed by a perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results open the way to future speech BCI applications using such an articulatory-based speech synthesizer.
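
    The frame-by-frame articulatory-to-acoustic mapping can be sketched with generic tools. The sketch below uses scikit-learn and synthetic data as stand-ins; the sensor count, the 25 acoustic parameters, and the network size are illustrative assumptions rather than the system described above:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_frames, n_sensors, n_acoustic = 2000, 12, 25  # e.g. 6 EMA sensors x/y

        ema = rng.normal(size=(n_frames, n_sensors))        # articulatory frames
        true_map = rng.normal(size=(n_sensors, n_acoustic))
        acoustic = np.tanh(ema @ true_map)                  # stand-in vocoder params
        acoustic += 0.05 * rng.normal(size=acoustic.shape)

        # Offline training of the articulatory-to-acoustic network.
        dnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        dnn.fit(ema[:1600], acoustic[:1600])

        # "Online" use: one incoming EMA frame -> one frame of acoustic parameters,
        # which a vocoder would then turn into audio.
        params = dnn.predict(ema[1600:1601])
        print("held-out R^2:", round(dnn.score(ema[1600:], acoustic[1600:]), 3))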

  14. Effects of Early Bilingual Experience with a Tone and a Non-Tone Language on Speech-Music Integration.

    Directory of Open Access Journals (Sweden)

    Salomi S Asaridou

    Full Text Available We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

  15. Burning mouth syndrome and associated factors: A case-control retrospective study.

    Science.gov (United States)

    Chimenos-Küstner, Eduardo; de Luca-Monasterios, Fiorella; Schemel-Suárez, Mayra; Rodríguez de Rivera-Campillo, María E; Pérez-Pérez, Alejandro M; López-López, José

    2017-02-23

    Burning mouth syndrome (BMS) can be defined as burning pain or dysesthesia on the tongue and/or other sites of the oral mucosa without an identifiable causative lesion. The discomfort usually recurs daily, has a higher incidence among people aged 50 to 60 years, affects mostly women, and diminishes quality of life. The aim of this study was to evaluate the association between several pathogenic factors and burning mouth syndrome. 736 medical records of patients diagnosed with burning mouth syndrome and 132 medical records for the control group were studied retrospectively. The study time span was from January 1990 to December 2014. The protocol included sex, age, and the type and location of oral discomfort, among other factors. Analysis of the association between pathogenic factors and BMS diagnosis revealed that only 3 factors showed a statistically significant association: triggers (P=.003), parafunctional habits (P=.006), and oral hygiene (P=.012). There were neither statistically significant differences in BMS incidence between sex groups (P=.408) nor associations of BMS with the pathogenic factors of substance abuse (P=.915), systemic pathology (P=.685), and dietary habits (P=.904). Parafunctional habits like bruxism and abnormal movements of the tongue and lips can explain the main BMS symptomatology. Psychological aspects and systemic factors should always be considered. Because BMS is a multifactorial disorder, its treatment should be approached holistically. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.

  16. Model based Binaural Enhancement of Voiced and Unvoiced Speech

    DEFF Research Database (Denmark)

    Kavalekalam, Mathew Shaji; Christensen, Mads Græsbøll; Boldt, Jesper B.

    2017-01-01

    This paper deals with the enhancement of speech in the presence of non-stationary babble noise. A binaural speech enhancement framework is proposed which takes into account both the voiced and unvoiced speech production model. The use of this model in enhancement requires the short-term predictor (STP) parameters and the pitch information to be estimated. This paper uses a codebook-based approach for estimating the STP parameters, and a parametric binaural method is proposed for estimating the pitch parameters. Improvements in objective scores are shown when using the voiced-unvoiced speech model...

  17. Text-to-audiovisual speech synthesizer for children with learning disabilities.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun

    2013-01-01

    Learning disabilities affect the ability of children to learn, despite their having normal intelligence. Assistive tools can greatly increase the functional capabilities, such as writing, reading, or listening, of children with learning disorders. In this article, we describe a text-to-audiovisual synthesizer that can serve as an assistive tool for such children. The system automatically converts an input text to audiovisual speech, synchronizing the head, eye, and lip movements of a three-dimensional face model with appropriate facial expressions and the word flow of the text. The proposed system can enhance speech perception and help children with learning deficits improve their chances of success.

  18. Management of non-progressive dysarthria: practice patterns of speech and language therapists in the Republic of Ireland.

    Science.gov (United States)

    Conway, Aifric; Walshe, Margaret

    2015-01-01

    Dysarthria is a commonly acquired speech disorder. Rising numbers of people surviving stroke and traumatic brain injury (TBI) mean the numbers of people with non-progressive dysarthria are likely to increase, with increased challenges for speech and language therapists (SLTs), service providers and key stakeholders. The evidence base for assessment and intervention approaches with this population remains limited with clinical guidelines relying largely on clinical experience, expert opinion and limited research. Furthermore, there is currently little evidence on the practice behaviours of SLTs available. To investigate whether SLTs in the Republic of Ireland (ROI) vary in how they assess and manage adults with non-progressive dysarthria; to explore SLTs' use of the theoretical principles that influence therapeutic approaches; to identify challenges perceived by SLTs when working with adults with non-progressive dysarthria; and to determine SLTs' perceptions of further training needs. A 33-item survey questionnaire was devised and disseminated electronically via SurveyMonkey to SLTs working with non-progressive dysarthria in the ROI. SLTs were identified through e-mail lists for special-interest groups, SLT manager groups and general SLT mailing lists. A reminder e-mail was sent to all SLTs 3 weeks later following the initial e-mail containing the survey link. The survey remained open for 6 weeks. Questionnaire responses were analysed using descriptive statistics. Qualitative comments to open-ended questions were analysed through thematic analysis. Eighty SLTs responded to the survey. Sixty-seven of these completed the survey in full. SLTs provided both quantitative and qualitative data regarding their assessment and management practices in this area. Practice varied depending on the context of the SLT service, experience of SLTs and the resources available to them. Not all SLTs used principles such as motor programming or neural plasticity to direct clinical work

  19. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper provides an interface between the machine translation and speech synthesis components of an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation, and text-to-speech synthesis. Many procedures for integrating speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been evaluated in this context. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis component, the machine translation component, and their integration. We implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative, syllable-based speech synthesis technique. To retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.

  20. Understanding the nature of apraxia of speech: Theory, analysis, and treatment

    Directory of Open Access Journals (Sweden)

    Kirrie J. Ballard

    2010-08-01

    Full Text Available Researchers have interpreted the behaviours of individuals with acquired apraxia of speech (AOS) as impairment of linguistic phonological processing, motor control, or both. Acoustic, kinematic, and perceptual studies of speech in recent years have led to significant advances in our understanding of the disorder and wide acceptance that it affects the phonetic-motoric planning of speech. However, newly developed methods for studying nonspeech motor control are providing new insights, indicating that the motor control impairment of AOS extends beyond speech and is manifest in nonspeech movements of the oral structures. We present the most recent developments in theory and methods to examine and define the nature of AOS. Theories of the disorder are then related to existing treatment approaches, and the efficacy of these approaches is examined. Directions for the development of new treatments are posited. It is proposed that treatment programmes driven by a principled account of how the motor system learns to produce skilled actions will provide the most efficient and effective framework for treating motor-based speech disorders. In turn, well-controlled and theoretically motivated studies of treatment efficacy promise to stimulate further development of theoretical accounts and contribute to our understanding of AOS.

  1. A characterization of verb use in Turkish agrammatic narrative speech.

    Science.gov (United States)

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.

  2. An evaluation of speech production in two boys with neurodevelopmental disorders who received communication intervention with a speech-generating device.

    Science.gov (United States)

    Roche, Laura; Sigafoos, Jeff; Lancioni, Giulio E; O'Reilly, Mark F; Schlosser, Ralf W; Stevens, Michelle; van der Meer, Larah; Achmadi, Donna; Kagohara, Debora; James, Ruth; Carnett, Amarie; Hodis, Flaviu; Green, Vanessa A; Sutherland, Dean; Lang, Russell; Rispoli, Mandy; Machalicek, Wendy; Marschik, Peter B

    2014-11-01

    Children with neurodevelopmental disorders often present with little or no speech. Augmentative and alternative communication (AAC) aims to promote functional communication using non-speech modes, but it might also influence natural speech production. To investigate this possibility, we provided AAC intervention to two boys with neurodevelopmental disorders and severe communication impairment. Intervention focused on teaching the boys to use a tablet computer-based speech-generating device (SGD) to request preferred stimuli. During SGD intervention, both boys began to utter relevant single words. In an effort to induce more speech, and investigate the relation between SGD availability and natural speech production, the SGD was removed during some requesting opportunities. With intervention, both participants learned to use the SGD to request preferred stimuli. After learning to use the SGD, both participants began to respond more frequently with natural speech when the SGD was removed. The results suggest that a rehabilitation program involving initial SGD intervention, followed by subsequent withdrawal of the SGD, might increase the frequency of natural speech production in some children with neurodevelopmental disorders. This effect could be an example of response generalization. Copyright © 2014 ISDN. Published by Elsevier Ltd. All rights reserved.

  3. Molecular characterization of SAT 2 foot-and-mouth disease virus from post-outbreak slaughtered animals: implications for disease control in Uganda

    DEFF Research Database (Denmark)

    Balinda, Sheila N; Belsham, Graham; Masembe, Charles

    2010-01-01

    In Uganda, limiting the extent of foot-and-mouth disease (FMD) spread during outbreaks involves short term measures such as ring vaccination and restrictions to the movement of livestock and their products to and from the affected areas. In this study, the presence of FMD virus RNA was investigated...

  4. Control of tongue movements in speech: The Equilibrium point Hypothesis perspective

    OpenAIRE

    Perrier, Pascal; Loevenbruck, Hélène; Payan, Yohan

    1996-01-01

    In this paper, the application of the Equilibrium Point Hypothesis, originally proposed by Feldman for the control of limb movements, to speech control is analysed. In the first part, physiological data published in the literature which argue in favour of such control for the tongue are presented, and the possible role of this motor process in a global control model of the tongue is explicated. In the second part, using the example of the acoustic variability associated with vowel reducti...

  5. Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds

    Science.gov (United States)

    Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.

    2011-01-01

    Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…

  6. Mouth and neck radiation - discharge

    Science.gov (United States)

    ... DO NOT eat spicy foods, acidic foods, or foods that are very hot or cold. These will bother your mouth and throat. Use lip care products to keep your lips from drying out and cracking. Sip water to ease mouth ...

  7. Influence of mouth opening on oropharyngeal humidification and temperature in a bench model of neonatal continuous positive airway pressure.

    Science.gov (United States)

    Fischer, Hendrik S; Ullrich, Tim L; Bührer, Christoph; Czernik, Christoph; Schmalisch, Gerd

    2017-02-01

    Clinical studies show that non-invasive respiratory support by continuous positive airway pressure (CPAP) affects gas conditioning in the upper airways, especially in the presence of mouth leaks. Using a new bench model of neonatal CPAP, we investigated the influence of mouth opening on oropharyngeal temperature and humidity. The model features the insertion of a heated humidifier between an active model lung and an oropharyngeal head model to simulate the recurrent expiration of heated, humidified air. During unsupported breathing, physiological temperature and humidity were attained inside the model oropharynx, and mouth opening had no significant effect on oropharyngeal temperature and humidity. During binasal CPAP, the impact of mouth opening was investigated using three different scenarios: no conditioning in the CPAP circuit, heating only, and heated humidification. Mouth opening had a strong negative impact on oropharyngeal humidification in all tested scenarios, but heated humidification in the CPAP circuit maintained clinically acceptable humidity levels regardless of closed or open mouths. The model can be used to test new equipment for use with CPAP, and to investigate the effects of other methods of non-invasive respiratory support on gas conditioning in the presence of leaks. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Deep neural network and noise classification-based speech enhancement

    Science.gov (United States)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
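
    The classification step lends itself to a short sketch: fit one GMM per noise type on features from noise-only frames, assign an unseen frame to the best-scoring model, and use that label to select the matching enhancement DNN. The features below are synthetic stand-ins for MFCCs, and the noise-type names and model sizes are assumptions:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        noise_types = ["babble", "car", "white"]

        # Train one GMM per noise type on (synthetic) MFCC vectors.
        gmms = {}
        for i, name in enumerate(noise_types):
            mfcc = rng.normal(loc=2.0 * i, scale=1.0, size=(500, 13))
            gmms[name] = GaussianMixture(n_components=4, random_state=0).fit(mfcc)

        def classify_noise(mfcc_frames):
            """Return the noise type whose GMM gives the highest mean log-likelihood."""
            scores = {name: g.score(mfcc_frames) for name, g in gmms.items()}
            return max(scores, key=scores.get)

        test = rng.normal(loc=2.0, scale=1.0, size=(50, 13))  # frames resembling "car"
        chosen = classify_noise(test)
        print("detected noise type:", chosen)
        # enhanced = dnn_models[chosen](noisy_speech)  # hypothetical per-noise DNN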

  9. Neuroanatomical correlates of childhood apraxia of speech: A connectomic approach.

    Science.gov (United States)

    Fiori, Simona; Guzzetta, Andrea; Mitra, Jhimli; Pannek, Kerstin; Pasquariello, Rosa; Cipriani, Paola; Tosetti, Michela; Cioni, Giovanni; Rose, Stephen E; Chilosi, Anna

    2016-01-01

    Childhood apraxia of speech (CAS) is a paediatric speech sound disorder in which precision and consistency of speech movements are impaired. Most children with idiopathic CAS have normal structural brain MRI. We hypothesize that children with CAS have altered structural connectivity in speech/language networks compared to controls and that these altered connections are related to functional speech/language measures. Whole brain probabilistic tractography, using constrained spherical deconvolution, was performed for connectome generation in 17 children with CAS and 10 age-matched controls. Fractional anisotropy (FA) was used as a measure of connectivity and the connections with altered FA between CAS and controls were identified. Further, the relationship between altered FA and speech/language scores was determined. Three intra-hemispheric/interhemispheric subnetworks showed reduction of FA in CAS compared to controls, including left inferior (opercular part) and superior (dorsolateral, medial and orbital part) frontal gyrus, left superior and middle temporal gyrus and left post-central gyrus (subnetwork 1); right supplementary motor area, left middle and inferior (orbital part) frontal gyrus, left precuneus and cuneus, right superior occipital gyrus and right cerebellum (subnetwork 2); right angular gyrus, right superior temporal gyrus and right inferior occipital gyrus (subnetwork 3). Reduced FA of some connections correlated with diadochokinesis, oromotor skills, expressive grammar and poor lexical production in CAS. These findings provide evidence of structural connectivity anomalies in children with CAS across specific brain regions involved in speech/language function. We propose altered connectivity as a possible epiphenomenon of complex pathogenic mechanisms in CAS which need further investigation.

  10. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Understanding the power of word-of-mouth.

    OpenAIRE

    Suzana Z. Gildin

    2003-01-01

    Word-of-mouth has been considered one of the most powerful forms of communication in the market today. Understanding what makes word-of-mouth such a persuasive and powerful communication tool is important to organizations that intend to build strong relationships with consumers. For this reason, organizations are concerned about promoting positive word-of-mouth and retarding negative word-of-mouth, which can be harmful to the image of the company or a brand. This work focuses on the major asp...

  12. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  13. SPEECH ACT ANALYSIS OF IGBO UTTERANCES IN FUNERAL ...

    African Journals Online (AJOL)

    Dean SPGS NAU

    In other words, a speech act is a ... relationship with that one single person and to share those memories ... identifies four conditions or rules for the effective performance of a ... In other words, the rules establish a system for the ... shaped by the interplay of particular speech acts and non-verbal cues.

  14. Comparison of the Streptococcus mutans and Lactobacillus colony count changes in saliva following chlorhexidine (0.12%) mouth rinse, combination mouth rinse, and green tea extract (0.5%) mouth rinse in children

    Directory of Open Access Journals (Sweden)

    Rahul J Hegde

    2017-01-01

    Full Text Available Background: Compounds present in green tea have been shown to inhibit the growth and activity of bacteria associated with oral infections. The purpose of this study was to compare the efficacy of chlorhexidine (0.12%) mouth rinse and combination (chlorhexidine and sodium fluoride) mouth rinse to that of green tea extract (0.5%) mouth rinse in reducing the salivary counts of Streptococcus mutans and Lactobacillus in children. Materials and Methods: The sample for the study consisted of 75 school children aged 8–12 years with four or more decayed teeth (decay component of the decayed, missing, and filled teeth index). Children were divided randomly into three equal groups and were asked to rinse with the prescribed mouth rinse once daily for 2 weeks after breakfast under supervision. A nonstimulated whole salivary sample (2 ml) was collected at baseline and after rinsing and tested for the colony-forming units of S. mutans and Lactobacillus. Results: The results of the study indicate that there was a statistically significant reduction in S. mutans and lactobacilli counts in all three study groups. The statistically significant reduction in the mean S. mutans and lactobacilli counts was greater in the 0.12% chlorhexidine group than in the combination mouth rinse and 0.5% green tea mouth rinse groups. There was no statistically significant difference in the reduction of S. mutans and lactobacilli counts between the combination mouth rinse group and the 0.5% green tea mouth rinse group. Conclusion: Green tea mouth rinse can be a promising preventive therapy worldwide for the prevention of dental caries.

  15. Analysis of normal anatomy of oral cavity in open-mouth view with CT and MRI; comparison with closed-mouth view

    International Nuclear Information System (INIS)

    Kim, Chan Ho; Kim, Seong Min; Cheon, Bont Jin; Huh, Jin Do; Joh, Young Duk

    2001-01-01

    When MRI and CT of the oral cavity utilize the traditional closed-mouth approach, direct contact between the tongue and surrounding structures may give rise to difficulty in recognizing the anatomy involved and in demonstrating the possible presence of pathologic features. We describe a more appropriate scan technique, involving open-mouth imaging, which may be used to demonstrate the anatomy of the oral cavity in detail. Axial and coronal MR imaging and axial CT scanning were performed in 14 healthy volunteers, using both the closed- and open-mouth approaches. For the latter, a mouthpiece was put in place prior to examination. In all volunteers, open-mouth MR and CT examinations involved the same parameters as the corresponding closed-mouth procedures. The CT and MR images obtained by each method were compared, with particular attention paid to the presence and symmetry of motion artifact of the tongue and the extent of air space in the oral cavity. Comparative imaging analysis was based on the recognition of 13 structures around the boundaries of the mouth. For statistical analysis, Student's t test was used, and a p value < 0.05 was considered significant. Due to the symmetry of the tongue, a less severe motion artifact, and increased air space in the oral cavity, the open-mouth method produced excellent images. The axial and coronal MR images thus obtained were superior, in terms of demarcation of the inferior surface and dorsum of the tongue, gingiva, buccal surface and buccal vestibule, to those obtained with the mouth closed (p<0.05). In addition, axial MR images obtained with the mouth open showed better demarcation of structures at the lingual margin and anterior belly of the digastric muscle (p<0.05), while coronal MR images of the base of the tongue, surface of the hard palate, soft palate, and uvula were also superior (p<0.05). Open-mouth CT provided better images at the lingual margin, dorsum of the tongue and buccal surface than the closed-mouth approach (p<0.05).

  16. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task, and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk responses). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also performed a cognitive load task, and this effect was relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task: gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task.

  17. Realization of masticatory movement by 3-dimensional simulation of the temporomandibular joint and the masticatory muscles.

    Science.gov (United States)

    Park, Jong-Tae; Lee, Jae-Gi; Won, Sung-Yoon; Lee, Sang-Hee; Cha, Jung-Yul; Kim, Hee-Jin

    2013-07-01

    Masticatory muscles are closely involved in mastication, pronunciation, and swallowing, and it is therefore important to study the specific functions and dynamics of the mandibular and masticatory muscles. However, the shortness of muscle fibers and the diversity of movement directions make it difficult to study and simplify the dynamics of mastication. The purpose of this study was to use 3-dimensional (3D) simulation to observe the functions and movements of each of the masticatory muscles and the mandible while chewing. To simulate the masticatory movement, computed tomographic images were taken from a single Korean volunteer (30-year-old man), and skull image data were reconstructed in 3D (Mimics; Materialise, Leuven, Belgium). The 3D-reconstructed masticatory muscles were then attached to the 3D skull model. The masticatory movements were animated using Maya (Autodesk, San Rafael, CA) based on the mandibular motion path. During unilateral chewing, the mandible was found to move laterally toward the functional side by contracting the contralateral lateral pterygoid and ipsilateral temporalis muscles. During the initial mouth opening, only hinge movement was observed at the temporomandibular joint. During this period, the entire mandible rotated approximately 13 degrees toward the bicondylar horizontal plane. Continued movement of the mandible to full mouth opening occurred simultaneously with sliding and hinge movements, and the mandible rotated approximately 17 degrees toward the center of the mandibular ramus. The described approach can yield data for use in face animation and other simulation systems and for elucidating the functional components related to contraction and relaxation of muscles during mastication.

  18. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  19. Kalman filter for speech enhancement in cocktail party scenarios using a codebook-based approach

    DEFF Research Database (Denmark)

    Kavalekalam, Mathew Shaji; Christensen, Mads Græsbøll; Gran, Fredrik

    2016-01-01

    Enhancement of speech in non-stationary background noise is a challenging task, and conventional single-channel speech enhancement algorithms have not been able to improve speech intelligibility in such scenarios. The work proposed in this paper investigates a single-channel Kalman filter based ... trained codebook over a generic speech codebook in relation to the performance of the speech enhancement system...
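
    As a toy illustration of the general approach, the sketch below runs a Kalman filter with an autoregressive speech model on a synthetic signal. The AR(2) coefficients stand in for a codebook entry, and the excitation and noise variances are assumed known, which the paper instead estimates; this is a sketch of the technique, not the authors' implementation:

        import numpy as np

        def kalman_ar_enhance(noisy, a, q, r):
            """Filter a noisy signal given AR coefficients a, excitation
            variance q and observation-noise variance r."""
            p = len(a)
            F = np.zeros((p, p))                 # companion-form state transition
            F[0, :] = a
            F[1:, :-1] = np.eye(p - 1)
            H = np.zeros(p)
            H[0] = 1.0                           # we observe the current sample
            x, P = np.zeros(p), np.eye(p)
            out = np.empty_like(noisy)
            for n, y in enumerate(noisy):
                x = F @ x                        # predict
                P = F @ P @ F.T
                P[0, 0] += q
                k = P @ H / (H @ P @ H + r)      # Kalman gain
                x = x + k * (y - H @ x)          # update with the observation
                P = (np.eye(p) - np.outer(k, H)) @ P
                out[n] = x[0]
            return out

        rng = np.random.default_rng(0)
        a, q, r = np.array([1.5, -0.9]), 0.1, 1.0    # assumed "codebook" AR(2) entry
        clean = np.zeros(400)
        for n in range(2, 400):                      # synthesize a speech-like AR signal
            clean[n] = a[0] * clean[n - 1] + a[1] * clean[n - 2] + np.sqrt(q) * rng.normal()
        noisy = clean + np.sqrt(r) * rng.normal(size=400)
        est = kalman_ar_enhance(noisy, a, q, r)
        print("MSE noisy:", round(float(np.mean((noisy - clean) ** 2)), 2),
              "-> MSE filtered:", round(float(np.mean((est - clean) ** 2)), 2))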

  20. Prevalence and Predictors of Sjögren's Syndrome in Patients with Burning Mouth Symptoms.

    Science.gov (United States)

    Lee, Young Chan; Song, Ran; Yang, You-Jung; Eun, Young-Gyu

    To investigate the prevalence and predictive factors of Sjögren's syndrome (SS) in a cohort of patients with burning mouth symptoms. A total of 125 patients with burning mouth symptoms were enrolled in a prospective study and assessed for the presence of SS. The severity of oral symptoms was evaluated by using questionnaires. Salivary flow rates and salivary scintigraphy were used to evaluate salivary function. Patient laboratory work-ups were reviewed, and SS was diagnosed by a rheumatologist based on the American-European Consensus Group criteria. The differences between the SS patient group and the non-SS patient group were analyzed with chi-square test or t test. A total of 12 of the 125 enrolled patients (9.5%) had a positive autoimmune antibody test, and 6 (4.8% of the entire cohort) had SS (4 [3.2%] primary and 2 [1.6%] secondary). Patients with SS exhibited significantly decreased hemoglobin levels, an increased erythrocyte sedimentation rate, and an increased prevalence of autoantibody positive results compared to non-SS patients. Salivary scintigraphy showed that the uptake ratio of the submandibular gland in SS patients was decreased significantly. The prevalence of SS in patients with burning mouth symptoms was 4.8%. Therefore, clinicians who treat patients with burning mouth symptoms should evaluate laboratory findings and salivary functions to identify patients with SS.

  1. Hand, Foot, and Mouth Disease

    Centers for Disease Control (CDC) Podcasts

    Hand, foot, and mouth disease is a contagious illness that mainly affects children under five. In this podcast, Dr. Eileen Schneider talks about the symptoms of hand, foot, and mouth disease, how it spreads, and ways to help protect yourself and your children from getting infected with the virus.

  2. Effectiveness of Eye Movement Desensitization and Reprocessing Therapy on Public Speaking Anxiety of University Students

    Directory of Open Access Journals (Sweden)

    Jalil Aslani

    2014-08-01

    Full Text Available Background: Public speaking anxiety is a prominent problem in the college student population. The purpose of this study was to determine the effectiveness of eye movement desensitization and reprocessing (EMDR) on the public speaking anxiety of college students. Materials and Methods: The research design was quasi-experimental, with a pretest-posttest format and a control group. The sample consisted of 30 students with speech anxiety, selected by convenience sampling and randomly assigned to experimental (N=15) and control (N=15) groups. The experimental group was treated with EMDR therapy for 7 sessions. To collect the data, Paul's Personal Report of Confidence as a Speaker and the S-R Inventory of Anxiousness were used. To analyze the data, SPSS-19 software and covariance analysis were used. Results: The multivariate analysis of covariance showed that eye movement desensitization and reprocessing reduced public speaking anxiety. The one-way analysis of covariance for each variable showed significant differences between the two groups in speaker confidence (p=0.001) and physiological symptoms of speech anxiety (p=0.001). Conclusion: These results suggest that eye movement desensitization and reprocessing is effective in reducing the physiological symptoms of speech anxiety and increasing the speaker's confidence.

  3. Quantifying the quality of hand movement in stroke patients through three-dimensional curvature

    Directory of Open Access Journals (Sweden)

    Osu Rieko

    2011-10-01

    Full Text Available Abstract. Background: To more accurately evaluate rehabilitation outcomes in stroke patients, movement irregularities should be quantified. Previous work in stroke patients has revealed a reduction in trajectory smoothness and segmentation of continuous movements. Clinically, the Stroke Impairment Assessment Set (SIAS) evaluates the clumsiness of arm movements using an ordinal scale based on the examiner's observations. In this study, we focused on the three-dimensional curvature of the hand trajectory to quantify movement, and aimed to establish a novel measurement that is independent of movement duration. We compared the proposed measurement with the SIAS score and the jerk measure representing temporal smoothness. Methods: Sixteen stroke patients with SIAS upper limb proximal motor function (Knee-Mouth test) scores ranging from 2 (incomplete performance) to 4 (mild clumsiness) were recruited. Nine healthy participants with a SIAS score of 5 (normal) also participated. Participants were asked to grasp a plastic glass and repetitively move it from the lap to the mouth and back at a comfortable speed for 30 s, during which the hand movement was measured using OPTOTRAK. The position data were numerically differentiated and the three-dimensional curvature was computed. To compare against a previously proposed measure, the mean squared jerk normalized by its minimum value was computed. Age-matched healthy participants were instructed to move the glass at three different movement speeds. Results: There was an inverse relationship between the curvature of the movement trajectory and the patient's SIAS score. The median of the -log of curvature (MedianLC) correlated well with the SIAS score, the upper extremity subsection of the Fugl-Meyer Assessment, and the jerk measure in the paretic arm. When the healthy participants moved slowly, the increase in the jerk measure was comparable to that of paretic movements with a SIAS score of 2 to 4, while the MedianLC was distinguishable
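
    The curvature-based measure lends itself to a compact numpy sketch: differentiate the hand path numerically, compute the three-dimensional curvature |v x a| / |v|^3, and summarize it as the median of -log(curvature). The helical path below is a synthetic stand-in for OPTOTRAK data, and the sampling rate and noise level are assumptions:

        import numpy as np

        def median_log_curvature(pos, dt):
            """pos: (n, 3) hand positions sampled every dt seconds."""
            v = np.gradient(pos, dt, axis=0)          # velocity
            a = np.gradient(v, dt, axis=0)            # acceleration
            speed = np.linalg.norm(v, axis=1)
            kappa = np.linalg.norm(np.cross(v, a), axis=1) / np.maximum(speed, 1e-9) ** 3
            return np.median(-np.log(np.maximum(kappa, 1e-12)))

        dt = 1 / 100.0                                # 100-Hz sampling, assumed
        t = np.arange(0, 3, dt)
        smooth = np.c_[np.cos(t), np.sin(t), 0.2 * t]              # smooth curved path
        rng = np.random.default_rng(0)
        irregular = smooth + 0.01 * rng.normal(size=smooth.shape)  # jittery, segmented path

        print("MedianLC smooth:   ", round(float(median_log_curvature(smooth, dt)), 2))
        print("MedianLC irregular:", round(float(median_log_curvature(irregular, dt)), 2))
        # More irregular movement -> higher curvature -> lower MedianLC, consistent
        # with the inverse relationship to the SIAS score reported above.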

  4. Sleep-related movement disorders.

    Science.gov (United States)

    Merlino, Giovanni; Gigli, Gian Luigi

    2012-06-01

    Several movement disorders may occur during nocturnal rest disrupting sleep. A part of these complaints is characterized by relatively simple, non-purposeful and usually stereotyped movements. The last version of the International Classification of Sleep Disorders includes these clinical conditions (i.e. restless legs syndrome, periodic limb movement disorder, sleep-related leg cramps, sleep-related bruxism and sleep-related rhythmic movement disorder) under the category entitled sleep-related movement disorders. Moreover, apparently physiological movements (e.g. alternating leg muscle activation and excessive hypnic fragmentary myoclonus) can show a high frequency and severity impairing sleep quality. Clinical and, in specific cases, neurophysiological assessments are required to detect the presence of nocturnal movement complaints. Patients reporting poor sleep due to these abnormal movements should undergo non-pharmacological or pharmacological treatments.

  5. Mouth Rinses

    Science.gov (United States)

    ... with more severe oral problems, such as cavities, periodontal disease, gum inflammation, and xerostomia (dry mouth). Therapeutic ... fight up to 50 percent more of the bacteria that cause cavities, and most rinses are effective ...

  6. Apraxia of Speech

    Science.gov (United States)

    What is apraxia of speech? Apraxia of speech (AOS)—also known as acquired ...

  7. How musical expertise shapes speech perception: evidence from auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues, at the onset of the first formant and at the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
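
    The essence of the Auditory Classification Image method can be sketched as a penalized regression of trial-by-trial noise on the listener's binary responses: the fitted weights over time-frequency bins form the image. Trials below are simulated, with an assumed cue region standing in for a formant-onset cue:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_trials, n_time, n_freq = 2000, 20, 16
        noise = rng.normal(size=(n_trials, n_time * n_freq))  # per-trial noise fields

        template = np.zeros((n_time, n_freq))
        template[5:8, 3:6] = 1.0               # assumed cue region (e.g. formant onset)
        w = template.ravel()

        # Simulated listener: noise energy in the cue region biases the response.
        p = 1.0 / (1.0 + np.exp(-(noise @ w)))
        responses = (rng.random(n_trials) < p).astype(int)

        aci = LogisticRegression(C=0.1, max_iter=1000).fit(noise, responses)
        image = aci.coef_.reshape(n_time, n_freq)      # the classification image
        peak = np.unravel_index(np.abs(image).argmax(), image.shape)
        print("largest weight at (time, freq) bin:", peak)  # falls in the cue region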

  8. What makes a movement a gesture?

    Science.gov (United States)

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, showing that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them), or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Web-Based Live Speech-Driven Lip-Sync

    OpenAIRE

    Llorach, Gerard; Evans, Alun; Blat, Josep; Grimm, Giso; Hohmann, Volker

    2016-01-01

    Virtual characters are an integral part of many games and virtual worlds. The ability to accurately synchronize lip movement to audio speech is an important aspect in the believability of the character. In this paper we propose a simple rule-based lip-syncing algorithm for virtual agents using the web browser. It works in real-time with live input, unlike most current lip-syncing proposals, which may require considerable amounts of computation, expertise and time to set up. Our method gen...
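
    A rule-based lip-sync of this kind can be illustrated independently of the browser. The sketch below (in Python rather than the paper's web-based setting) maps per-frame RMS energy of the audio to a mouth-opening value in [0, 1] with asymmetric smoothing; the frame size and smoothing coefficients are illustrative assumptions, not the authors' rules:

        import numpy as np

        def lip_sync(audio, fs, frame_ms=25, attack=0.6, release=0.2):
            """Return one mouth-opening value in [0, 1] per audio frame."""
            hop = int(fs * frame_ms / 1000)
            rms = np.array([np.sqrt(np.mean(audio[i:i + hop] ** 2))
                            for i in range(0, len(audio) - hop, hop)])
            norm = np.clip(rms / (rms.max() + 1e-9), 0.0, 1.0)
            mouth = np.zeros_like(norm)
            for i in range(1, len(norm)):      # open fast (attack), close slower (release)
                coef = attack if norm[i] > mouth[i - 1] else release
                mouth[i] = mouth[i - 1] + coef * (norm[i] - mouth[i - 1])
            return mouth                       # could drive e.g. a jaw/mouth blendshape

        fs = 16000
        t = np.arange(0, 1.0, 1 / fs)
        speechy = np.sin(2 * np.pi * 150 * t) * (np.sin(2 * np.pi * 3 * t) > 0)
        print(lip_sync(speechy, fs)[:10].round(2))   # toy on/off "syllables"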

  10. Proactive Risk Assessments and the Continuity of Business Principles: Perspectives on This Novel, Combined Approach to Develop Guidance for the Permitted Movement of Agricultural Products during a Foot-and-Mouth Disease Outbreak in the United States.

    Science.gov (United States)

    Goldsmith, Timothy J; Culhane, Marie Rene; Sampedro, Fernando; Cardona, Carol J

    2016-01-01

    Animal diseases such as foot-and-mouth disease (FMD) have the potential to severely impact food animal production systems. Paradoxically, the collateral damage associated with the outbreak response may create a larger threat to the food supply, social stability, and economic viability of rural communities than the disease itself. When FMD occurs in domestic animals, most developed countries will implement strict movement controls in the area surrounding the infected farm(s). Historically, stopping all animal movements has been considered one of the most effective ways to control FMD and stop disease spread. However, stopping all movements in an area comes at a cost, as there are often uninfected herds and flocks within the control area. The inability to harvest uninfected animals and move their products to processing interrupts the food supply chain and has the potential to result in an enormous waste of safe, nutritious animal products, and create animal welfare situations. In addition, these adverse effects may negatively impact agriculture businesses and the related economy. Effective disease control measures and the security of the food supply thus require a balanced approach based on science and practicality. Evaluating the risks associated with the movement of live animals and products before an outbreak happens provides valuable insights for risk management plans. These plans can optimize animal and product movements while preventing disease spread. Food security benefits from emergency response plans that both control the disease and keep our food system functional. Therefore, emergency response plans must aim to minimize the unintended negative consequence to farmers, food processors, rural communities, and ultimately consumers.

  11. Mouth Problems

    Science.gov (United States)

    ... such as sores, are very common. Follow this chart for more information about mouth problems in adults. ... cancers. See your dentist if sharp or rough teeth or dental work are causing irritation. ...

  12. Mandibular Range of Movement and Pain Intensity in Patients with Anterior Disc Displacement without Reduction

    Directory of Open Access Journals (Sweden)

    Marijana Gikić

    2015-01-01

    Full Text Available Objective: Temporomandibular disorders (TMD) are the most common source of orofacial pain of a non-dental origin. The study was performed to investigate the therapeutic effect of conventional occlusal splint therapy and physical therapy. The hypothesis tested was that the simultaneous use of occlusal splint and physical therapy is an effective method for treatment of anterior disc displacement without reduction. Materials and Methods: Twelve patients (mean age = 30.5 y) with anterior disc displacement without reduction (according to RDC/TMD and confirmed by magnetic resonance imaging) were randomly allocated into 2 groups: 6 received a stabilization splint (SS) and 6 received both physical therapy and a stabilization splint (SS&PT). Treatment outcomes included pain-free opening (MCO), maximum assisted opening (MAO), path of mouth opening, and pain as reported on a visual analogue scale (VAS). Results: At baseline there were no significant differences among the groups for VAS scores or for the range of mandibular movement. VAS scores improved significantly over time for the SS&PT group (F=28.964, p=0.0001, effect size = 0.853) and the SS group (F=8.794, p=0.001, effect size = 0.638). The range of mouth opening improved significantly only in the SS&PT group (MCO: F=20.971, p=0.006; MAO: F=24.014, p=0.004) (Figure 2). Changes in the path of mouth opening differed significantly between the groups (p=0.040). Only 1 patient in the SS&PT group still presented deviations in mouth opening after completed therapy, while in the SS group deviations were present in 5 patients after completed therapy. Conclusion: This limited study gave evidence that, over a treatment period of 6 months, the simultaneous use of a stabilization splint and physical therapy was more efficient in reducing deviations and improving the range of mouth opening than the stabilization splint used alone. Both treatment options were efficient in reducing pain in patients with anterior disc

  13. Changes in speech production in a child with a cochlear implant: acoustic and kinematic evidence.

    Science.gov (United States)

    Goffman, Lisa; Ertmer, David J; Erdle, Christa

    2002-10-01

    A method is presented for examining change in motor patterns used to produce linguistic contrasts. In this case study, the method is applied to a child receiving new auditory input following cochlear implantation. This child experienced hearing loss at age 3 years and received a multichannel cochlear implant at age 7 years. Data collection points occurred both pre- and postimplant and included acoustic and kinematic analyses. Overall, this child's speech output was transcribed as accurate across the pre- and postimplant periods. Postimplant, with the onset of new auditory experience, acoustic durations showed a predictable maturational change, usually decreasing in duration. Conversely, the spatiotemporal stability of speech movements initially became more variable postimplantation. The auditory perturbations experienced by this child during development led to changes in the physiological underpinnings of speech production, even when speech output was perceived as accurate.

  14. The Relationship between Articulatory Control and Improved Phonemic Accuracy in Childhood Apraxia of Speech: A Longitudinal Case Study

    Science.gov (United States)

    Grigos, Maria I.; Kolenda, Nicole

    2010-01-01

    Jaw movement patterns were examined longitudinally in a 3-year-old male with childhood apraxia of speech (CAS) and compared with a typically developing control group. The child with CAS was followed for 8 months, until he began accurately and consistently producing the bilabial phonemes /p/, /b/, and /m/. A movement tracking system was used to…

  15. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
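
    As a concrete illustration of the linear prediction model that the record above traces through the history of speech coding, the following Python sketch estimates linear-prediction coefficients for one speech frame using the textbook autocorrelation method with the Levinson-Durbin recursion. It is a generic formulation for illustration, not the code of any particular coding standard.

        import numpy as np

        def lpc(frame, order=10):
            """Levinson-Durbin solution of the autocorrelation normal
            equations. Returns a with a[0] = 1 such that the predictor
            residual sum_k a[k] * s[n-k] has minimal power."""
            r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                          for k in range(order + 1)])
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
                k = -acc / err                                 # reflection coefficient
                a[1:i + 1] += k * np.concatenate((a[1:i][::-1], [1.0]))
                err *= 1.0 - k * k                             # prediction error power
            return a, err

        # Example: recover the coefficients of a known 2nd-order resonator.
        rng = np.random.default_rng(0)
        e = rng.standard_normal(4000)
        s = np.zeros_like(e)
        for n in range(2, len(s)):
            s[n] = 1.3 * s[n - 1] - 0.7 * s[n - 2] + e[n]
        print(lpc(s, order=2)[0])  # approximately [1, -1.3, 0.7]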

  16. Mouth Problems in Infants and Children

    Science.gov (United States)

    ... mouth can be painful and worrisome. Follow this chart for more information about common causes of mouth ... as GINGIVITIS or PERIODONTITIS, usually caused by poor DENTAL HYGIENE. Self Care: Take your child to the dentist. ...

  17. Using EEG and stimulus context to probe the modelling of auditory-visual speech.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2016-02-01

    We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event-related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AVvalid items) or an 'unreliable' context (80% AVinvalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory-only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech, and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative time of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to AV timing than to AV form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  18. Cross-linguistic perspectives on speech assessment in cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Henningsson, Gunilla

    2012-01-01

    This chapter deals with cross linguistic perspectives that need to be taken into account when comparing speech assessment and speech outcome obtained from cleft palate speakers of different languages. Firstly, an overview of consonants and vowels vulnerable to the cleft condition is presented. Then, consequences for assessment of cleft palate speech by native versus non-native speakers of a language are discussed, as well as the use of phonemic versus phonetic transcription in cross linguistic studies. Specific recommendations for the construction of speech samples in cross linguistic studies are given. Finally, the influence of different languages on some aspects of language acquisition in young children with cleft palate is presented and discussed. Until recently, not much has been written about cross linguistic perspectives when dealing with cleft palate speech. Most literature about assessment...

  19. A characterization of verb use in Turkish agrammatic narrative speech

    NARCIS (Netherlands)

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large

  20. Electronic games of movement: it is sport or simulation in the perception of young people?

    Directory of Open Access Journals (Sweden)

    Ana Paula Salles da Silva

    2017-09-01

    Full Text Available Electronic games have been one of the main ways in which young people access technology in Brazil, leading to new experiences in social practices. The objective of this study is to identify young people's perceptions of the experience of electronic games of movement with a sports theme. Methodology: 24 young elementary school students were investigated, divided into 3 groups. Each group participated in 10 sessions with electronic games of movement of 3 hours each. During the sessions the young people's comments were recorded in a field diary. Results: based on the young people's comments, the experience with electronic games of movement emerges as a mediated and unique experience. It is mediated because it interposes itself between subject and object, and it is unique because the way is the experience itself. Conclusions: the perceptions of the young people indicate a conceptual enlargement in which the comprehension of sport is expanded by the experiences with technology.

  1. Foot-and-mouth disease virus non-structural protein 3A inhibits the interferon-β signaling pathway

    Science.gov (United States)

    Li, Dan; Lei, Caoqi; Xu, Zhisheng; Yang, Fan; Liu, Huanan; Zhu, Zixiang; Li, Shu; Liu, Xiangtao; Shu, Hongbing; Zheng, Haixue

    2016-01-01

    Foot-and-mouth disease virus (FMDV) is the etiological agent of FMD, which affects cloven-hoofed animals. The pathophysiology of FMDV is not fully understood, and how the virus evades the host innate immune system remains unclear. Here, the FMDV non-structural protein 3A was identified as a negative regulator of the virus-triggered IFN-β signaling pathway. Overexpression of the FMDV 3A inhibited Sendai virus-triggered activation of IRF3 and the expression of RIG-I/MDA5. Transient transfection and co-immunoprecipitation experiments suggested that FMDV 3A interacts with RIG-I, MDA5, and VISA, which is dependent on the N-terminal 51 amino acids of 3A. Furthermore, 3A also inhibited the expression of RIG-I, MDA5, and VISA by disrupting their mRNA levels. These results demonstrate that 3A inhibits RLR-mediated IFN-β induction and uncover a novel mechanism by which the FMDV 3A protein evades the host innate immune system. PMID:26883855

  2. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  3. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    Full Text Available In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research established the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech, and compared these to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and their ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively
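
    The hub analysis described in this record can be sketched in a few lines: build a graph whose nodes are brain regions, connect regions whose fMRI time courses correlate strongly, and rank nodes by connectivity. The Python sketch below (using networkx) is a generic, simplified stand-in for the study's actual pipeline; the correlation threshold and the degree-based hub criterion are illustrative choices.

        import numpy as np
        import networkx as nx

        def hub_regions(timeseries, labels, threshold=0.6, top_k=5):
            """Nodes are regions; edges connect region pairs whose time
            courses correlate above `threshold`; hubs = highest degree."""
            corr = np.corrcoef(timeseries)  # regions x regions
            g = nx.Graph()
            g.add_nodes_from(labels)
            for i in range(len(labels)):
                for j in range(i + 1, len(labels)):
                    if corr[i, j] > threshold:
                        g.add_edge(labels[i], labels[j], weight=corr[i, j])
            degree = dict(g.degree())
            return sorted(degree, key=degree.get, reverse=True)[:top_k]

        # Example with random data standing in for real regional time courses.
        rng = np.random.default_rng(1)
        data = rng.standard_normal((8, 200))
        print(hub_regions(data, [f"region_{i}" for i in range(8)], top_k=3))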

  4. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina Maguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip-movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  5. Co-variation of tonality in the music and speech of different cultures.

    Directory of Open Access Journals (Sweden)

    Shui'er Han

    Full Text Available Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used is culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone language cultures (Chinese, Thai and Vietnamese) and three non-tone language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently and pitch intervals are larger in the music of tone compared to non-tone language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone compared to non-tone language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to the differences in the tonal characteristics of voiced speech.
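
    The pitch intervals compared in this record are conventionally expressed in semitones, where the interval between two fundamental frequencies f1 and f2 is 12 * log2(f2 / f1). A minimal Python sketch of that conversion follows; the study's segmentation of voiced speech is not reproduced here.

        import math

        def semitone_intervals(f0_values):
            """Signed pitch intervals, in semitones, between successive
            fundamental-frequency values (Hz): positive = rising."""
            return [12.0 * math.log2(b / a)
                    for a, b in zip(f0_values, f0_values[1:])]

        # Example: a rise of roughly a perfect fifth, then a small fall.
        print(semitone_intervals([200.0, 300.0, 280.0]))  # ~[7.02, -1.19]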

  6. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine Signoret

    2011-07-01

    Full Text Available If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  7. Typical versus delayed speech onset influences verbal reporting of autistic interests.

    Science.gov (United States)

    Chiodo, Liliane; Majerus, Steve; Mottron, Laurent

    2017-01-01

    The distinction between autism and Asperger syndrome has been abandoned in the DSM-5. However, this clinical categorization largely overlaps with the presence or absence of a speech onset delay, which is associated with clinical, cognitive, and neural differences. It is unknown whether these different speech development pathways and associated cognitive differences are involved in the heterogeneity of the restricted interests that characterize autistic adults. This study tested the hypothesis that speech onset delay, or conversely, early mastery of speech, orients the nature and verbal reporting of adult autistic interests. The occurrence of a priori defined descriptors for perceptual and thematic dimensions was determined, as well as the perceived function and benefits, in the responses of autistic people to a semi-structured interview on their intense interests. The number of words, grammatical categories, and proportion of perceptual/thematic descriptors were computed and compared between groups by variance analyses. The participants comprised 40 autistic adults grouped according to the presence (N = 20) or absence (N = 20) of speech onset delay, as well as 20 non-autistic adults, also with intense interests, matched for non-verbal intelligence using Raven's Progressive Matrices. The overall nature, function, and benefit of intense interests were similar across autistic subgroups, and between autistic and non-autistic groups. However, autistic participants with a history of speech onset delay used more perceptual than thematic descriptors when talking about their interests, whereas the opposite was true for autistic individuals without speech onset delay. This finding remained significant after controlling for linguistic differences observed between the two groups. Verbal reporting, but not the nature or positive function, of intense interests differed between adult autistic individuals depending on their speech acquisition history: oral reporting of

  8. Evaluation of movements of lower limbs in non-professional ballet dancers: hip abduction and flexion

    OpenAIRE

    Valenti, Erica E; Valenti, Vitor E; Ferreira, Celso; Vanderlei, Luiz C M; Moura Filho, Oseas F; de Carvalho, Dias T; Tassi, Nadir; Petenusso, Marcio; Leone, Claudio; Fujiki, Edison N; Junior, Hugo M; de Mello Monteiro, Carlos B; Moreno, Isadora L; Gonçalves, Ana C C; de Abreu, Luiz C

    2011-01-01

    Abstract Background The literature indicates that the majority of professional ballet dancers present static and active dynamic range of motion differences between the left and right lower limbs; however, no previous study has focused on this difference in non-professional ballet dancers. In this study we aimed to evaluate active movements of the hip in non-professional classical dancers. Method...

  9. THE METHODOLOGY OF CASE SELECTION FOR TEACHING FOREIGN SPEECH TO STUDENTS OF NON-LINGUISTIC SPECIALITIES

    Directory of Open Access Journals (Sweden)

    Tatyana Lozovskaya

    2015-10-01

    Full Text Available This article deals with the advantages of case-study and its potential for forming the motivation to study the English language among students of non-linguistic specialities, psychology students in particular. Training future psychologists in foreign language communication should involve cases published in foreign periodicals, together with numerous exercises and communicative tasks designed according to the requirements of the case-technology used during their learning process. The studies make it possible to single out the main criteria of case selection for the successful formation of foreign speech in students of the psychology faculty.

  10. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  11. Distância interincisiva máxima em crianças respiradoras bucais Maximum interincisal distance in mouth breathing children

    Directory of Open Access Journals (Sweden)

    Débora Martins Cattoni

    2009-12-01

    Full Text Available INTRODUCTION: The maximum interincisal distance is an important aspect of the orofacial myofunctional evaluation, because orofacial myofunctional disorders can limit mouth opening. AIM: To measure the maximum interincisal distance of mouth breathing children, relating it to age, and to compare the averages of these measurements to those of children with no history of speech-language pathology disorders. METHODS: 99 mouth breathing children participated, of both genders, aged between 7 years and 11 years 11 months, white, in mixed dentition. The control group was composed of 253 children, aged between 7 years and 11 years 11 months, white, in mixed dentition, with no speech-language complaints. RESULTS: The findings show that the mean maximum interincisal distance of the mouth breathing children was, for the total sample, 43.55 mm, with no statistically significant difference between the means according to age. There was no statistically significant difference between the mean maximum interincisal distance of the mouth breathers and the mean of this measurement in the control group children. CONCLUSIONS: The maximum interincisal distance is a measurement that did not vary in mouth breathers during mixed dentition according to age, and does not appear to be altered in individuals with this type of dysfunction. The findings also point to the importance of using a caliper in the objective evaluation of the maximum interincisal distance.

  12. Serological prevalence of foot and mouth disease in parts of Keffi ...

    African Journals Online (AJOL)

    ... foot-and-mouth disease in the herd commonly called "Boro" by the herdsmen. The screening procedure was based on the detection of antibodies against the non-structural protein, mainly the 3ABC protein, in bovine serum regardless of the serotype of the FMD virus involved, using the Chekit-FMD-3ABC ELISA (Bommeli Diagnostics, South Africa).

  13. Dog-directed speech: why do we use it and do dogs pay attention to it?

    Science.gov (United States)

    Ben-Aderet, Tobey; Gallego-Abenza, Mario; Reby, David; Mathevon, Nicolas

    2017-01-11

    Pet-directed speech is strikingly similar to infant-directed speech, a peculiar speaking pattern with higher pitch and slower tempo known to engage infants' attention and promote language learning. Here, we report the first investigation of potential factors modulating the use of dog-directed speech, as well as its immediate impact on dogs' behaviour. We recorded adult participants speaking in front of pictures of puppies, adult and old dogs, and analysed the quality of their speech. We then performed playback experiments to assess dogs' reaction to dog-directed speech compared with normal speech. We found that human speakers used dog-directed speech with dogs of all ages and that the acoustic structure of dog-directed speech was mostly independent of dog age, except for sound pitch, which was relatively higher when communicating with puppies. Playback demonstrated that, in the absence of other non-auditory cues, puppies were highly reactive to dog-directed speech, and that the pitch was a key factor modulating their behaviour, suggesting that this specific speech register has a functional value in young dogs. Conversely, older dogs did not react differentially to dog-directed speech compared with normal speech. The fact that speakers continue to use dog-directed speech with older dogs therefore suggests that this speech pattern may mainly be a spontaneous attempt to facilitate interactions with non-verbal listeners. © 2017 The Author(s).

  14. FLUIDITY SPEECH FORMATION AS A QUALITATIVE CHARACTERISTIC OF THE ORAL STATEMENT OF PRESCHOOL AGE CHILDREN WITH STUTTER

    Directory of Open Access Journals (Sweden)

    E. A. Borisova

    2014-01-01

    Full Text Available The research objective is to disclose the subject matter of speech therapy work focused on developing fluent speech in preschool-age children who stutter. Stuttering is a complex disorder of the articulation organs in which the tempo-rhythmical organisation of utterances is disturbed; it leads to defects and failures in the dialogue system, negatively influences the individual development of the child and, more specifically, generates psychological overlays and specific features of the emotional-volitional sphere, and causes undesirable character traits such as shyness, indecision, isolation, and negativism. The author notes that the problem of early stutter correction among junior preschool-aged children is a topical and immediate issue. Methods. Drawing on clinical, physiological, psychological and psychologic-pedagogical positions, the author summarizes the theoretical framework; an experimental-practical approbation of the author's method for developing speech fluency and eliminating stuttering in preschool children is described. The correction process, aimed at forming spontaneous and non-convulsive speech, proceeds in stages: (1) applying a restraint mode in order to decrease incorrect verbal output; (2) training exercises for long phonatory and speech expiration; (3) developing coordination and rhythm of movements to help pronounce words and phrases; (4) forming situational speech, at first consisting of short sentences, then passing to longer ones; (5) training coherent text statements. The research presents an analysis of the data from the post-experimental diagnostic examination of stuttering preschool children, proving the efficiency of the author's applied method. Scientific novelty. The research findings demonstrate a specific approach to the correction and elimination of stuttering in preschool children. The proposed approach consists of complementary directions of speech therapy work which are combined in the following way: coherent speech

  15. Effect of Long-term Smoking on Whole-mouth Salivary Flow Rate and Oral Health.

    Science.gov (United States)

    Rad, Maryam; Kakoie, Shahla; Niliye Brojeni, Fateme; Pourdamghan, Nasim

    2010-01-01

    Change in the resting whole-mouth salivary flow rate (SFR) plays a significant role in the pathogenesis of various oral conditions. Factors such as smoking may affect SFR as well as oral and dental health. The primary purpose of this study was to determine the effect of smoking on SFR and on oral and dental health. One hundred smokers and 100 non-tobacco users were selected as case and control groups, respectively. A questionnaire was used to collect demographic data and smoking habits. A previously used questionnaire about dry mouth was also employed. Then, after a careful oral examination, subjects' whole saliva was collected in the resting condition. Data were analyzed by chi-square test using SPSS 15. The mean (±SD) salivary flow rate was 0.38 (±0.13) ml/min in smokers and 0.56 (±0.16) ml/min in non-smokers. The difference was statistically significant (P=0.00001). Also, 39% of smokers and 12% of non-smokers reported experiencing at least one xerostomia symptom, with a statistically significant difference between groups (p=0.0001). Oral lesions including cervical caries, gingivitis, tooth mobility, calculus and halitosis were significantly more frequent in smokers. Our findings indicate that long-term smoking significantly reduces SFR and increases oral and dental disorders associated with dry mouth, especially cervical caries, gingivitis, tooth mobility, calculus, and halitosis.

  16. Burning mouth syndrome: A review

    Directory of Open Access Journals (Sweden)

    Rajendra G Patil

    2017-01-01

    Full Text Available Burning mouth syndrome is a condition characterized by chronic orofacial pain without any mucosal abnormalities or other organic disease. There are numerous synonyms for this ailment, such as stomatodynia, stomatopyrosis, glossodynia, glossopyrosis, sore mouth, sore tongue, oral dysesthesia, and scalding mouth syndrome. Patients usually present with burning, stinging, or numbness on the tongue or other areas of the oral mucosa. The complex etiology and lack of characteristic signs and symptoms make the diagnosis difficult. As a result, managing such patients becomes a herculean task. Moreover, lack of understanding of the disease leads to misdiagnosis and unnecessary referral of patients. In this article, the authors describe the etiopathogenesis, diagnostic algorithm and management of this confusing ailment.

  17. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.
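
    The statistical workflow named in this record, an omnibus Friedman test over the repeated-measures conditions followed by pairwise Wilcoxon signed-rank tests, can be sketched with scipy as below. The numbers are made-up placeholders, not the study's data.

        from scipy import stats

        # Stuttering frequency (% syllables stuttered) per participant,
        # one list per condition; values are placeholders.
        control = [8.1, 6.4, 9.0, 7.2, 5.5, 8.8]
        vs_05 = [5.0, 4.1, 6.2, 4.8, 3.9, 5.7]   # 5% voluntary stuttering
        vs_10 = [4.2, 3.8, 5.1, 4.0, 3.1, 4.9]   # 10%
        vs_15 = [3.9, 3.2, 4.8, 3.7, 2.8, 4.4]   # 15%

        # Omnibus test across the four repeated-measures conditions.
        print(stats.friedmanchisquare(control, vs_05, vs_10, vs_15))

        # Follow-up pairwise comparison, e.g. control versus 5% condition.
        print(stats.wilcoxon(control, vs_05))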

  18. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Mouth sores

    Science.gov (United States)

    ... To help cold sores or fever blisters, you can also apply ice to the sore. You may reduce your chance of getting common mouth sores by: Avoiding very hot foods or beverages Reducing stress and practicing relaxation techniques like yoga or meditation ...

  20. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of the Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  1. Language Abstraction in Word of Mouth

    NARCIS (Netherlands)

    G.A.C. Schellekens (Gaby)

    2010-01-01

    textabstractIn word of mouth, consumers talk about their experiences with products and services with other consumers. These conversations are important sources of information for consumers. While word of mouth has fascinated researchers and practitioners for many years, little attention has been

  2. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating
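
    The central quantity of the sEPSM, the envelope signal-to-noise ratio SNRenv, can be approximated very roughly as the AC power of the speech envelope relative to that of the noise envelope. The Python sketch below makes that comparison from known speech and noise signals; it deliberately omits the model's audio-frequency channels and modulation filterbank, so it is an illustrative simplification, not the published model.

        import numpy as np
        from scipy.signal import hilbert

        def snr_env_db(speech, noise):
            """Crude envelope-domain SNR: Hilbert envelopes, DC removed,
            speech envelope power over noise envelope power (in dB)."""
            env_s = np.abs(hilbert(speech))
            env_n = np.abs(hilbert(noise))
            return 10.0 * np.log10(np.var(env_s) / np.var(env_n))

        # Example: a 4 Hz amplitude-modulated tone (speech-like envelope)
        # versus weakly modulated Gaussian noise.
        t = np.arange(0, 1, 1 / 8000)
        speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
        noise = 0.5 * np.random.default_rng(0).standard_normal(len(t))
        print(snr_env_db(speech, noise))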

  3. Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study.

    Directory of Open Access Journals (Sweden)

    Catherine Y Wan

    Full Text Available Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

  4. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  5. Impact of speech-generating devices on the language development of a child with childhood apraxia of speech: a case study.

    Science.gov (United States)

    Lüke, Carina

    2016-01-01

    The purpose of the study was to evaluate the effectiveness of speech-generating devices (SGDs) on the communication and language development of a 2-year-old boy with severe childhood apraxia of speech (CAS). An A-B design was used over a treatment period of 1 year, followed by three additional follow-up measurements, in order to evaluate the implementation of SGDs in the speech therapy of a 2;7-year-old boy with severe CAS. In total, 53 therapy sessions were videotaped and analyzed to better understand his communicative (operationalized as means of communication) and linguistic (operationalized as intelligibility and consistency of speech productions, lexical and grammatical development) development. The trend lines of baseline phase A and intervention phase B were compared, and the percentage of non-overlapping data points was calculated to verify the value of the intervention. The use of SGDs led to an immediate increase in the communicative development of the child. An increase in all linguistic variables was observed, with a latency effect of eight to nine treatment sessions. The implementation of SGDs in speech therapy has the potential to be highly effective with regard to both communicative and linguistic competencies in young children with severe CAS. Implications for Rehabilitation: Childhood apraxia of speech (CAS) is a neurological speech sound disorder which results in significant deficits in speech production and leads to a higher risk for language, reading and spelling difficulties. Speech-generating devices (SGD), as one method of augmentative and alternative communication (AAC), can effectively enhance the communicative and linguistic development of children with severe CAS.
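
    The single-case metric mentioned in this record, the percentage of non-overlapping data points (PND), counts how many intervention-phase observations exceed every baseline observation. A minimal Python sketch, with placeholder numbers rather than the study's data:

        def pnd(baseline, intervention, higher_is_better=True):
            """Percentage of intervention-phase points that exceed (or, for
            decreasing targets, fall below) every baseline point."""
            if higher_is_better:
                best = max(baseline)
                hits = sum(1 for x in intervention if x > best)
            else:
                best = min(baseline)
                hits = sum(1 for x in intervention if x < best)
            return 100.0 * hits / len(intervention)

        # Placeholder data: communicative acts per session, baseline (A)
        # versus intervention with the SGD (B).
        print(pnd(baseline=[3, 5, 4], intervention=[6, 8, 5, 9, 7]))  # 80.0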

  6. Ultrasound biofeedback treatment for persisting childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Brick, Nickole; Landi, Nicole

    2013-11-01

    The purpose of this study was to evaluate the efficacy of a treatment program that includes ultrasound biofeedback for children with persisting speech sound errors associated with childhood apraxia of speech (CAS). Six children ages 9-15 years participated in a multiple baseline experiment for 18 treatment sessions during which treatment focused on producing sequences involving lingual sounds. Children were cued to modify their tongue movements using visual feedback from real-time ultrasound images. Probe data were collected before, during, and after treatment to assess word-level accuracy for treated and untreated sound sequences. As participants reached preestablished performance criteria, new sequences were introduced into treatment. All participants met the performance criterion (80% accuracy for 2 consecutive sessions) on at least 2 treated sound sequences. Across the 6 participants, performance criterion was met for 23 of 31 treated sequences in an average of 5 sessions. Some participants showed no improvement in untreated sequences, whereas others showed generalization to untreated sequences that were phonetically similar to the treated sequences. Most gains were maintained 2 months after the end of treatment. The percentage of phonemes correct increased significantly from pretreatment to the 2-month follow-up. A treatment program including ultrasound biofeedback is a viable option for improving speech sound accuracy in children with persisting speech sound errors associated with CAS.

  7. Fast Monaural Separation of Speech

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Dyrholm, Mads

    2003-01-01

    a Factorial Hidden Markov Model, with non-stationary assumptions on the source autocorrelations modelled through the Factorial Hidden Markov Model, leads to separation in the monaural case. By extending Hansen's work we find that Roweis' assumptions are necessary for monaural speech separation. Furthermore we

  8. A Computational Approach to the Interpretation of Indirect Speech Acts

    NARCIS (Netherlands)

    Beun, R.J.; Eijk, R.M. van; Meyer, J-J.Ch.; Vergunst, N.L.

    2006-01-01

    An Indirect Speech Act (ISA) is an utterance that conveys a message that is different from its literal meaning, often for reasons of politeness or subtlety. The DenK-system provides us with a non-compositional way to look at Indirect Speech Acts that contain modal verbs. We can extract the

  9. Impaired self-monitoring of inner speech in schizophrenia patients with verbal hallucinations and in non-clinical individuals prone to hallucinations

    Directory of Open Access Journals (Sweden)

    Gildas Brébion

    2016-09-01

    Full Text Available Background: Previous research has shown that various memory errors reflecting failure in the self-monitoring of speech were associated with auditory/verbal hallucinations in schizophrenia patients and with proneness to hallucinations in non-clinical individuals. Method: We administered to 57 schizophrenia patients and 60 healthy participants a verbal memory task involving free recall and recognition of lists of words with different structures (high-frequency, low-frequency, and semantically-organisable words. Extra-list intrusions in free recall were tallied, and the response bias reflecting tendency to make false recognitions of non-presented words was computed for each list. Results: In the male patient subsample, extra-list intrusions were positively associated with verbal hallucinations and inversely associated with negative symptoms. In the healthy participants the extra-list intrusions were positively associated with proneness to hallucinations. A liberal response bias in the recognition of the high-frequency words was associated with verbal hallucinations in male patients and with proneness to hallucinations in healthy men. Meanwhile, a conservative response bias for these high-frequency words was associated with negative symptoms in male patients and with social anhedonia in healthy men. Conclusions: Misattribution of inner speech to an external source, reflected by false recollection of familiar material, seems to underlie both clinical and non-clinical hallucinations. Further, both clinical and non-clinical negative symptoms may exert on verbal memory errors an effect opposite to that of hallucinations.

  10. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in the signal-to-noise ratio for speech comprehension thresholds (i.e., the signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  11. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
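
    A concrete instance of the waveform-coding family described above is logarithmic companding, as used in G.711 telephony. The Python sketch below implements the textbook mu-law compression and expansion formulas and an 8-bit quantization of the companded signal; it illustrates the principle and is not a full codec implementation.

        import numpy as np

        def mulaw_encode(x, mu=255.0):
            """Compress samples in [-1, 1]:
            y = sign(x) * ln(1 + mu|x|) / ln(1 + mu)."""
            return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

        def mulaw_decode(y, mu=255.0):
            """Inverse expansion: |x| = ((1 + mu)**|y| - 1) / mu."""
            return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

        # Round-trip through an 8-bit quantizer of the companded signal.
        x = np.linspace(-1.0, 1.0, 11)
        q = np.round(mulaw_encode(x) * 127) / 127
        print(np.max(np.abs(mulaw_decode(q) - x)))  # small reconstruction error

    Companding allocates quantizer resolution logarithmically, so quiet speech segments keep proportionally more precision than loud ones, which is the design goal of this family of coders.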

  12. Oral motor deficits in speech-impaired children with autism

    Science.gov (United States)

    Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha

    2013-01-01

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480

  13. Oral Motor Deficits in Speech-Impaired Children with Autism

    Directory of Open Access Journals (Sweden)

    Matthew K Belmonte

    2013-07-01

    Full Text Available Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive versus expressive speech / language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual.

  14. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    Science.gov (United States)

    Heinzen, Christina Carolyn

    The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post-tests were administered to nine adult listeners, who completed two to three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification, and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns

  15. Attitudes toward speech disorders: sampling the views of Cantonese-speaking Americans.

    Science.gov (United States)

    Bebout, L; Arthur, B

    1997-01-01

    Speech-language pathologists who serve clients from cultural backgrounds that are not familiar to them may encounter culturally influenced attitudinal differences. A questionnaire with statements about 4 speech disorders (dysfluency, cleft palate, speech of the deaf, and misarticulations) was given to a focus group of Chinese Americans and a comparison group of non-Chinese Americans. The focus group was much more likely to believe that persons with speech disorders could improve their own speech by "trying hard," was somewhat more likely to say that people who use deaf speech and people with cleft palates might be "emotionally disturbed," and was generally more likely to view deaf speech as a limitation. The comparison group was more pessimistic about stuttering children's acceptance by their peers than was the focus group. The two subject groups agreed about other items, such as the likelihood that older children with articulation problems are "less intelligent" than their peers.

  16. The origin of mouth-exhaled ammonia.

    Science.gov (United States)

    Chen, W; Metsälä, M; Vaittinen, O; Halonen, L

    2014-09-01

    It is known that the oral cavity is a production site for mouth-exhaled NH3. However, the mechanism of NH3 production in the oral cavity has been unclear. Since bacterial urease in the oral cavity has been found to produce ammonia from oral fluid urea, we hypothesize that oral fluid urea is the origin of mouth-exhaled NH3. Our results show that under certain conditions a strong, statistically significant correlation exists between oral fluid urea and oral fluid ammonia (NH4(+)+NH3) (rs = 0.77) and between oral fluid NH3 and mouth-exhaled NH3 (rs = 0.81), and that the concentration of oral fluid NH3 depends on oral fluid pH. Bacterial urease catalyses the hydrolysis of oral fluid urea to ammonia (NH4(+)+NH3). Oral fluid ammonia (NH4(+)+NH3) and pH together determine the concentration of oral fluid NH3, which evaporates from the oral fluid into the gas phase and becomes mouth-exhaled NH3.
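
    The pH dependence described above is the standard acid-base equilibrium of the ammonium/ammonia pair. As a hedged illustration (the formulation and the textbook value pKa ≈ 9.25 at 25 °C are mine, not figures from the paper), the volatile NH3 fraction of total oral fluid ammonia is

```latex
\frac{[\mathrm{NH_3}]}{[\mathrm{NH_3}]+[\mathrm{NH_4^+}]}
  = \frac{1}{1+10^{\,\mathrm{p}K_a-\mathrm{pH}}},
\qquad \mathrm{p}K_a \approx 9.25 \;\;(25\,^{\circ}\mathrm{C}).
```

    At a typical salivary pH near 7 this fraction is well under 1%, consistent with the abstract's point that total ammonia and pH jointly set how much volatile NH3 is available to evaporate into breath.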

  17. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    Full Text Available The theory of speech acts, which clarifies what people do when they speak, is not about the individual words or sentences that form the basic elements of human communication, but rather about the particular speech acts that are performed when uttering words. A speech act is the attempt to do something purely by speaking; many things can be done by speaking. Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches from El-Sadat and El-Sisi, belonging to different periods, were analyzed to find out whether there were differences within this genre in the same culture. The study showed that there was a very small difference between the two speeches, which were analyzed according to Searle's theory of speech acts. In El-Sadat's speech, commissives occupied the first place; in El-Sisi's speech, assertives occupied the first place. Within the speeches of one culture, the differences depended on the circumstances that surrounded the election of each President at the time. Speech acts were tools the speakers used to convey what they wanted and to obtain support from their audiences.

  18. Speech Problems

    Science.gov (United States)

    KidsHealth / For Teens / Speech Problems: an overview of conditions that affect a person's ability to speak clearly, including common speech and language disorders such as stuttering.

  19. Electronic Word-of-Mouth Communication and Consumer Behaviour

    DEFF Research Database (Denmark)

    Pedersen, Signe Tegtmeier; Razmerita, Liana; Colleoni, Elanor

    2014-01-01

    The rapid adoption of social media, along with the easy access to peer information and interactions, has resulted in massive online word-of-mouth communication. These interactions among consumers have an increasing power over the success or failure of companies and brands. Drawing upon word-of-mouth...... communication and consumer behaviour theories, this paper investigates the use of word-of-mouth communication through social media among a group of Danish consumers. The findings suggest that electronic word-of-mouth communication among friends and peers affect consumer behaviour. Additionally, peer...... communication is perceived as more objective and therefore found more reliable than companies’ brand communication. Furthermore, negative word-of-mouth is perceived as more trustworthy compared to positive messages, which are often believed to be too subjective. The research findings emphasise the importance...

  20. Home range use and movement patterns of non-native feral goats in a tropical island montane dry landscape

    Science.gov (United States)

    Mark W. Chynoweth; Christopher A. Lepczyk; Creighton M. Litton; Steven C. Hess; James R. Kellner; Susan Cordell; Lalit Kumar

    2015-01-01

    Advances in wildlife telemetry and remote sensing technology facilitate studies of broad-scale movements of ungulates in relation to phenological shifts in vegetation. In tropical island dry landscapes, home range use and movements of non-native feral goats (Capra hircus) are largely unknown, yet this information is important to help guide the...

  1. Sensorimotor speech disorders in Parkinson's disease: Programming and execution deficits

    Directory of Open Access Journals (Sweden)

    Karin Zazo Ortiz

    Full Text Available ABSTRACT Introduction: Dysfunction in the basal ganglia circuits is a determining factor in the physiopathology of the classic signs of Parkinson's disease (PD), and hypokinetic dysarthria is commonly related to PD. Regarding speech disorders associated with PD, the latest four-level framework of speech complicates the traditional view of dysarthria as a motor execution disorder. Based on findings that dysfunctions in the basal ganglia can cause speech disorders, and on the premise that the speech deficits seen in PD are related not to an execution motor disorder alone but also to a disorder at the motor programming level, the main objective of this study was to investigate the presence of sensorimotor programming disorders (besides the execution disorders previously described) in PD patients. Methods: A cross-sectional study was conducted in a sample of 60 adults matched for gender, age and education: 30 adult patients diagnosed with idiopathic PD (PDG) and 30 healthy adults (CG). All types of articulation errors were reanalyzed to investigate the nature of these errors. Interjections, hesitations and repetitions of words or sentences (during discourse) were considered typical disfluencies; blocking and episodes of palilalia (words or syllables) were analyzed as atypical disfluencies. We analysed features including successive self-initiated trials, phoneme distortions, self-correction, repetition of sounds and syllables, prolonged movement transitions, and additions or omissions of sounds and syllables, in order to identify programming and/or execution failures. Orofacial agility was also investigated. Results: The PDG had worse performance on all sensorimotor speech tasks. All PD patients had hypokinetic dysarthria. Conclusion: The clinical characteristics found suggest both execution and programming sensorimotor speech disorders in PD patients.

  2. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French- and English-speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. Improvements in the Perceptual Evaluation of Speech Quality (PESQ) value of 5% and of more than 20% are achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.

  3. Hand, Foot, and Mouth Disease

    Centers for Disease Control (CDC) Podcasts

    2013-08-08

    Hand, foot, and mouth disease is a contagious illness that mainly affects children under five. In this podcast, Dr. Eileen Schneider talks about the symptoms of hand, foot, and mouth disease, how it spreads, and ways to help protect yourself and your children from getting infected with the virus.  Created: 8/8/2013 by National Center for Immunization and Respiratory Diseases (NCIRD).   Date Released: 8/8/2013.

  4. The effect of mouth breathing on chewing efficiency.

    Science.gov (United States)

    Nagaiwa, Miho; Gunjigake, Kaori; Yamaguchi, Kazunori

    2016-03-01

    To examine the effect of mouth breathing on chewing efficiency by evaluating masticatory variables. Ten adult nasal breathers with normal occlusion and no temporomandibular dysfunction were selected. Subjects were instructed to bite the chewing gum on the habitual side. While breathing through the mouth and through the nose, the glucide elution from the chewing gum, the number of chewing strokes, the duration of chewing, and the electromyography (EMG) activity of the masseter muscle were evaluated as variables of masticatory efficiency. The durations required for the chewing of 30, 60, 90, 120, 180, and 250 strokes, the glucide elution rates for 1- and 3-minute chewing, and the chewing strokes and masseter EMG activity during 1, 3, and 5 minutes of chewing all differed significantly between nose and mouth breathing, indicating that more chewing is required to obtain the same masticatory efficiency when breathing through the mouth. Therefore, mouth breathing will decrease masticatory efficiency if the duration of chewing is restricted in everyday life.

  5. A Danish open-set speech corpus for competing-speech studies

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten; Neher, Tobias

    2014-01-01

    Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs......) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded...... with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed...

  6. Neuronal basis of speech comprehension.

    Science.gov (United States)

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning but also the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion on interactions between the dorsal and ventral streams, particularly the involvement of motor related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Evaluation of Multiplexed Foot-and-Mouth Disease Nonstructural Protein Antibody Assay Against Standardized Bovine Serum Panel

    Energy Technology Data Exchange (ETDEWEB)

    Perkins, J; Parida, S; Clavijo, A

    2007-05-14

    Liquid array technology has previously been used to show proof-of-principle of a multiplexed non-structural protein serological assay to differentiate foot-and-mouth disease infected animals from vaccinated animals. The current multiplexed assay consists of synthetically produced peptide signatures 3A, 3B and 3D and recombinant protein signature 3ABC in combination with four controls. To determine the diagnostic specificity of each signature in the multiplex, the assay was evaluated against a naive population (n = 104) and a vaccinated population (n = 94). Subsequently, the multiplexed assay was assessed using a panel of bovine sera generated by the World Reference Laboratory for foot-and-mouth disease in Pirbright, UK. This sera panel has been used to assess the performance of other singleplex ELISA-based non-structural protein antibody assays. The 3ABC signature in the multiplexed assay showed comparable performance to a commercially available non-structural protein 3ABC ELISA (Cedi test®), and additional information pertaining to the relative diagnostic sensitivity of each signature in the multiplex is acquired in one experiment. The encouraging results of the evaluation of the multiplexed assay against a panel of diagnostically relevant samples promote further assay development and optimization to generate an assay for routine use in foot-and-mouth disease surveillance.

  8. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  9. Speech Recognition for the iCub Platform

    Directory of Open Access Journals (Sweden)

    Bertrand Higy

    2018-02-01

    Full Text Available This paper describes open source software (available at https://github.com/robotology/natural-speech) to build automatic speech recognition (ASR) systems and run them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub and (ii) to build deep learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human–iCub interactions. The toolkit mostly consists of Python and C++ code and shell scripts integrated in YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: "articulatory" and "unsupervised" speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The second type of recognition systems, the "unsupervised" systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-h speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.

  10. Instantaneous Fundamental Frequency Estimation with Optimal Segmentation for Nonstationary Voiced Speech

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    In speech processing, the speech is often considered stationary within segments of 20–30 ms even though it is well known not to be true. In this paper, we take the non-stationarity of voiced speech into account by using a linear chirp model to describe the speech signal. We propose a maximum...... likelihood estimator of the fundamental frequency and chirp rate of this model, and show that it reaches the Cramer-Rao bound. Since the speech varies over time, a fixed segment length is not optimal, and we propose to make a segmentation of the signal based on the maximum a posteriori (MAP) criterion. Using...... of the chirp model than the harmonic model to the speech signal. The methods are based on an assumption of white Gaussian noise, and, therefore, two prewhitening filters are also proposed....
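
    As a hedged illustration of the kind of signal model the abstract refers to (the notation here is generic and not necessarily the paper's), a harmonic linear-chirp model lets the instantaneous fundamental frequency vary linearly within a segment:

```latex
x(t) = \sum_{l=1}^{L} a_l\, e^{\,j l \left(\omega_0 t + \frac{1}{2}\alpha t^2\right)} + v(t),
\qquad \omega_{\mathrm{inst}}(t) = \omega_0 + \alpha t,
```

    where the a_l are complex harmonic amplitudes, ω0 is the fundamental frequency, α is the chirp rate, and v(t) is noise assumed white and Gaussian, matching the abstract's prewhitening remark; setting α = 0 recovers the usual stationary harmonic model.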

  11. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  12. Longitudinal decline in speech production in Parkinson's disease spectrum disorders.

    Science.gov (United States)

    Ash, Sharon; Jester, Charles; York, Collin; Kofman, Olga L; Langey, Rachel; Halpin, Amy; Firn, Kim; Dominguez Perez, Sophia; Chahine, Lama; Spindler, Meredith; Dahodwala, Nabila; Irwin, David J; McMillan, Corey; Weintraub, Daniel; Grossman, Murray

    2017-08-01

    We examined narrative speech production longitudinally in non-demented (n = 15) and mildly demented (n = 8) patients with Parkinson's disease spectrum disorder (PDSD), and we related increasing impairment to structural brain changes in specific language and motor regions. Patients provided semi-structured speech samples, describing a standardized picture at two time points (mean ± SD interval = 38 ± 24 months). The recorded speech samples were analyzed for fluency, grammar, and informativeness. PDSD patients with dementia exhibited significant decline in their speech, unrelated to changes in overall cognitive or motor functioning. Regression analysis in a subset of patients with MRI scans (n = 11) revealed that impaired language performance at Time 2 was associated with reduced gray matter (GM) volume at Time 1 in regions of interest important for language functioning, but not with reduced GM volume in motor brain areas. These results dissociate language and motor systems and highlight the importance of non-motor brain regions for declining language in PDSD. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Cortical thickness in children receiving intensive therapy for idiopathic apraxia of speech.

    Science.gov (United States)

    Kadis, Darren S; Goshulak, Debra; Namasivayam, Aravind; Pukonen, Margit; Kroll, Robert; De Nil, Luc F; Pang, Elizabeth W; Lerch, Jason P

    2014-03-01

    Children with idiopathic apraxia experience difficulties planning the movements necessary for intelligible speech. There is increasing evidence that targeted early interventions, such as Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT), can be effective in treating these disorders. In this study, we investigate possible cortical thickness correlates of idiopathic apraxia of speech in childhood, and changes associated with participation in an 8-week block of PROMPT therapy. We found that children with idiopathic apraxia (n = 11), aged 3-6 years, had significantly thicker left supramarginal gyri than a group of typically-developing age-matched controls (n = 11), t(20) = 2.84, p ≤ 0.05. Over the course of therapy, the children with apraxia (n = 9) experienced significant thinning of the left posterior superior temporal gyrus (canonical Wernicke's area), t(8) = 2.42, p ≤ 0.05. This is the first study to demonstrate experience-dependent structural plasticity in children receiving therapy for speech sound disorders.

  14. Detecting Nasal Vowels in Speech Interfaces Based on Surface Electromyography.

    Directory of Open Access Journals (Sweden)

    João Freitas

    Full Text Available Nasality is a very important characteristic of several languages, European Portuguese being one of them. This paper addresses the challenge of nasality detection in surface electromyography (EMG) based speech interfaces. We explore the existence of useful information about the velum movement and also assess whether muscles deeper down in the face and neck region can be measured using surface electrodes, and the best electrode location to do so. The procedure we adopted uses Real-Time Magnetic Resonance Imaging (RT-MRI), collected from a set of speakers, providing a method to interpret EMG data. By ensuring compatible data recording conditions, and proper time alignment between the EMG and the RT-MRI data, we are able to accurately estimate the time when the velum moves and the type of movement when a nasal vowel occurs. The combination of these two sources revealed interesting and distinct characteristics in the EMG signal when a nasal vowel is uttered, which motivated a classification experiment. Overall results of this experiment provide evidence that it is possible to detect velum movement using sensors positioned below the ear, between the mastoid process and the mandible, in the upper neck region. In a frame-based classification scenario, error rates as low as 32.5% for all speakers and 23.4% for the best speaker have been achieved for nasal vowel detection. This outcome stands as an encouraging result, fostering the grounds for deeper exploration of the proposed approach as a promising route to the development of an EMG-based speech interface for languages with strong nasal characteristics.
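
    The abstract reports frame-based classification of nasal vowels from surface EMG but does not spell out the features or classifier here. The following is a minimal sketch of the general approach, assuming simple per-frame RMS and zero-crossing features and an SVM back-end; the study's actual pipeline may differ.

```python
# Hedged sketch of frame-based nasal-vowel detection from surface EMG.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_features(emg, fs, win_ms=25, hop_ms=10):
    """Slice a 1-D EMG signal into frames and compute simple features."""
    win, hop = int(fs * win_ms / 1000), int(fs * hop_ms / 1000)
    feats = []
    for start in range(0, len(emg) - win, hop):
        frame = emg[start:start + win]
        rms = np.sqrt(np.mean(frame ** 2))                  # amplitude
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # zero crossings
        feats.append([rms, zcr])
    return np.array(feats)

# Toy signal standing in for a real recording; labels would come from the
# RT-MRI-based alignment of velum movement described in the abstract.
rng = np.random.default_rng(0)
emg = rng.normal(size=16000)            # 1 s of "EMG" at 16 kHz
X = frame_features(emg, fs=16000)
y = rng.integers(0, 2, size=len(X))     # 1 = frame overlaps a nasal vowel

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
print(f"frame error rate: {1 - clf.score(X_te, y_te):.2f}")
```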

  15. Using leap motion to investigate the emergence of structure in speech and language.

    Science.gov (United States)

    Eryilmaz, Kerem; Little, Hannah

    2017-10-01

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretized spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimize interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach that uses the Leap Motion, an infrared controller that can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyze. The experimental interface was built using free, and mostly open-source, libraries in Python. We provide our source code for other researchers as open source.
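
    The article does not give its position-to-sound mapping here; as a hedged sketch of the general idea (the mapping, parameter ranges, and function names are illustrative, not the authors' code), tracked 3D hand coordinates can be converted frame by frame into continuous acoustic parameters such as pitch and amplitude:

```python
# Hedged sketch: map 3D hand coordinates to a continuous audio signal.
# Leap Motion SDK specifics are omitted; the toy trajectory below stands
# in for the controller's per-frame position stream.
import numpy as np

def position_to_sound_params(x, y, z, f_lo=100.0, f_hi=800.0):
    """Map normalised coordinates in [0, 1]^3 to (frequency, amplitude)."""
    freq = f_lo * (f_hi / f_lo) ** y    # height -> log-spaced pitch
    amp = float(np.clip(x, 0.0, 1.0))   # left-right -> loudness
    return freq, amp

def synthesize(params, fs=16000, frame_dur=0.02):
    """Concatenate short sine frames, one per tracked position sample."""
    out = []
    for freq, amp in params:
        t = np.arange(int(fs * frame_dur)) / fs
        out.append(amp * np.sin(2 * np.pi * freq * t))
    return np.concatenate(out)

# A rising hand produces a rising pitch glide in this toy mapping.
trajectory = [(0.5, h, 0.5) for h in np.linspace(0.2, 0.8, 50)]
audio = synthesize([position_to_sound_params(*p) for p in trajectory])
```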

  16. Burning mouth syndrome: Present perspective

    OpenAIRE

    Ramesh Parajuli

    2015-01-01

    Introduction: Burning mouth syndrome is characterized by chronic oral pain or burning sensation affecting the oral mucosa in the absence of obvious visible mucosal lesions. Patient presenting with the burning mouth sensation or pain is frequently encountered in clinical practice which poses a challenge to the treating clinician. Its exact etiology remains unknown which probably has multifactorial origin. It often affects middle or old age women and it may be accompanied by xerostomia and alte...

  17. Non-invasive mapping of bilateral motor speech areas using navigated transcranial magnetic stimulation and functional magnetic resonance imaging.

    Science.gov (United States)

    Könönen, Mervi; Tamsi, Niko; Säisänen, Laura; Kemppainen, Samuli; Määttä, Sara; Julkunen, Petro; Jutila, Leena; Äikiä, Marja; Kälviäinen, Reetta; Niskanen, Eini; Vanninen, Ritva; Karjalainen, Pasi; Mervaala, Esa

    2015-06-15

    Navigated transcranial magnetic stimulation (nTMS) is a modern, precise method to activate and study cortical functions noninvasively. We hypothesized that a combination of nTMS and functional magnetic resonance imaging (fMRI) could clarify the localization of functional areas involved with motor control and production of speech. Navigated repetitive TMS (rTMS) with short bursts was used to map speech areas on both hemispheres by inducing speech disruption during number recitation tasks in healthy volunteers. Two experienced video reviewers, blinded to the stimulated area, graded each trial offline according to possible speech disruption. The locations of speech-disrupting nTMS trials were overlaid with fMRI activations of a word generation task. Speech disruptions were produced on both hemispheres by nTMS, though there were more disruptive stimulation sites on the left hemisphere. The grade of disruption varied from subjective sensation to mild, objectively recognizable disruption up to total speech arrest. The distribution of locations in which speech disruptions could be elicited varied among individuals. On the left hemisphere, the locations of reviewer-verified disruptive rTMS bursts followed the areas of fMRI activation; a similar pattern was not observed on the right hemisphere. The reviewer-verified speech disruptions induced by nTMS provided clinically relevant information, and fMRI might further explain the function of the cortical area. nTMS and fMRI complement each other, and their combination should be advocated when assessing individual localization of the speech network. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Word of mouth komunikacija

    Directory of Open Access Journals (Sweden)

    Žnideršić-Kovač Ružica

    2009-01-01

    Full Text Available Consumers' buying decision is a very complex, multistep process on which many factors have a significant impact. The traditional approach to the problem of communication between a company and its consumers implies the usage of marketing mix instruments, mostly the promotion mix, in order to achieve a positive purchase decision. Formal communication between company and consumers is dominant compared to informal communication, and even in the marketing literature not enough attention is paid to informal communication such as Word of Mouth. Numerous studies show that consumers emphasize the crucial impact of Word of Mouth on their buying decisions.

  19. Twisting Tongues to Test for Conflict-Monitoring in Speech Production

    Directory of Open Access Journals (Sweden)

    Daniel Acheson

    2014-04-01

    Full Text Available A number of recent studies have hypothesized that monitoring in speech production may occur via domain-general mechanisms responsible for the detection of response conflict. Outside of language, two ERP components have consistently been elicited in conflict-inducing tasks (e.g., the flanker task): the stimulus-locked N2 on correct trials, and the response-locked error-related negativity (ERN). The present investigation used these electrophysiological markers to test whether a common response conflict monitor is responsible for monitoring in speech and non-speech tasks. EEG was recorded while participants performed a tongue twister (TT) task and a manual version of the flanker task. In the TT task, people rapidly read sequences of four nonwords arranged in TT and non-TT patterns three times. In the flanker task, people responded with a left/right button press to a center-facing arrow, and conflict was manipulated by the congruency of the flanking arrows. Behavioral results showed typical effects of both tasks, with increased error rates and slower speech onset times for TT relative to non-TT trials and for incongruent relative to congruent flanker trials. In the flanker task, stimulus-locked EEG analyses replicated previous results, with a larger N2 for incongruent relative to congruent trials, and a response-locked ERN. In the TT task, stimulus-locked analyses revealed broad, frontally-distributed differences beginning around 50 ms and lasting until just before speech initiation, with TT trials more negative than non-TT trials; response-locked analyses revealed an ERN. Correlational analyses showed some associations within each task, but little evidence of systematic cross-task correlation. Although the present results do not speak against conflict signals from the production system serving as cues to self-monitoring, they are not consistent with signatures of response conflict being mediated by a single, domain-general conflict monitor.

  20. Filled pause refinement based on the pronunciation probability for lecture speech.

    Directory of Open Access Journals (Sweden)

    Yan-Hua Long

    Full Text Available Nowadays, although automatic speech recognition has become quite proficient in recognizing or transcribing well-prepared fluent speech, the transcription of speech that contains many disfluencies remains problematic, such as spontaneous conversational and lecture speech. Filled pauses (FPs) are the most frequently occurring disfluencies in this type of speech. Recent studies have shown that FPs increase the error rates of state-of-the-art speech transcription, primarily because most FPs are not well annotated or provided in training data transcriptions and because of the similarities in acoustic characteristics between FPs and some common non-content words. To enhance the speech transcription system, we propose a new automatic refinement approach to detect FPs in British English lecture speech transcription. This approach combines the pronunciation probabilities for each word in the dictionary and acoustic language model scores for FP refinement through a modified speech recognition forced-alignment framework. We evaluate the proposed approach on the Reith Lectures speech transcription task, in which only imperfect training transcriptions are available. Successful results are achieved for both the development and evaluation datasets. Acoustic models trained on different styles of speech genres have been investigated with respect to FP refinement. To further validate the effectiveness of the proposed approach, speech transcription performance has also been examined using systems built on training data transcriptions with and without FP refinement.
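
    The refinement described above combines a pronunciation probability with forced-alignment acoustic scores. A minimal sketch of that decision rule follows; `align_score` is a hypothetical stand-in for a recogniser's forced-alignment log-likelihood, and the paper's modified framework differs in detail.

```python
# Hedged sketch of filled-pause (FP) refinement: at each word, compare an
# alignment that inserts an FP token against one that does not, weighing
# acoustic evidence by the FP's prior pronunciation probability.
import math

def refine_transcript(words, align_score, fp_prior=0.1, fp_token="<FP>"):
    """Greedily insert FP tokens where they improve the combined score."""
    refined = []
    for word in words:
        base = align_score(refined + [word])
        with_fp = align_score(refined + [fp_token, word])
        if with_fp + math.log(fp_prior) > base + math.log(1 - fp_prior):
            refined.append(fp_token)
        refined.append(word)
    return refined

# Toy scorer: pretends an FP is acoustically supported just before "well".
def toy_score(seq):
    return 2.5 if "<FP>" in seq and "well" in seq else 0.0

print(refine_transcript(["so", "well", "yes"], toy_score))
# -> ['so', '<FP>', 'well', 'yes']
```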

  1. A WORD-OF-MOUSE APPROACH FOR WORD-OF-MOUTH MEASUREMENT

    OpenAIRE

    Andreia Gabriela ANDREI

    2012-01-01

    Despite the fact that the word-of-mouth phenomenon has gained unseen dimensions, only a few studies have focused on its measurement, and only three of them have developed a word-of-mouth construct. Our study develops a bi-dimensional scale which assigns the usual word-of-mouth mechanisms available in online networking sites (e.g., Recommend, Share, Like, Comment) to the WOM (+) - positive word-of-mouth valence dimension - and to the WOM (-) - negative word-of-mouth valence dimension, respectively. We adapted e-W...

  2. Word of mouth marketing applications on the internet

    OpenAIRE

    Gülmez, Mustafa

    2011-01-01

    Word of mouth marketing, also called WOMM in English, is a marketing strategy, in oral or written form, in which consumers share and spread information about a product or firm with other people. Word of mouth marketing is an extremely important factor in the consumer's final purchase decision in internet-conscious societies. This paper aims to evaluate word of mouth marketing applications on the internet.

  3. Auditory and Cognitive Factors Underlying Individual Differences in Aided Speech-Understanding among Older Adults

    Directory of Open Access Journals (Sweden)

    Larry E. Humes

    2013-10-01

    Full Text Available This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures) tests, as well as the Speech, Spatial and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech, and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech-understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance.
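
    As a hedged sketch of the analysis strategy described above (variable names and dimensions are illustrative; the study used principal-components factor analysis with its own measure set), many correlated predictors can be reduced to a few factors and a speech-understanding factor regressed on them:

```python
# Hedged sketch: factor reduction of predictors, then multiple regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subjects = 98
predictors = rng.normal(size=(n_subjects, 23))   # e.g., cognitive + psychophysical measures
speech_factor = predictors[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n_subjects)

factors = PCA(n_components=6).fit_transform(predictors)  # six predictor factors
model = LinearRegression().fit(factors, speech_factor)
print(f"variance accounted for (R^2): {model.score(factors, speech_factor):.2f}")
```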

  4. The Influence of Serial Carbohydrate Mouth Rinsing on Power Output during a Cycle Sprint.

    Science.gov (United States)

    Phillips, Shaun M; Findlay, Scott; Kavaliauskas, Mykolas; Grant, Marie Clare

    2014-05-01

    The objective of the study was to investigate the influence of serial administration of a carbohydrate (CHO) mouth rinse on performance, metabolic and perceptual responses during a cycle sprint. Twelve physically active males (mean (± SD) age: 23.1 (3.0) years, height: 1.83 (0.07) m, body mass (BM): 86.3 (13.5) kg) completed the following mouth rinse trials in a randomized, counterbalanced, double-blind fashion: 1. 8 x 5 second rinses with a 25 ml CHO (6% w/v maltodextrin) solution, 2. 8 x 5 second rinses with a 25 ml placebo (PLA) solution. Following mouth rinse administration, participants completed a 30 second sprint on a cycle ergometer against a 0.075 g·kg(-1) BM resistance. Eight participants achieved a greater peak power output (PPO) in the CHO trial, resulting in a significantly greater PPO compared with PLA (13.51 ± 2.19 vs. 13.20 ± 2.14 W·kg(-1), p < 0.05). No significant between-trials difference was reported for fatigue index, perceived exertion, arousal and nausea levels, or blood lactate and glucose concentrations. Serial administration of a CHO mouth rinse may significantly improve PPO during a cycle sprint. This improvement appears confined to the first 5 seconds of the sprint, and may come at a greater relative cost for the remainder of the sprint. Key points: The paper demonstrates that repeated administration of a carbohydrate mouth rinse can significantly improve peak power output during a single 30 second cycle sprint. The ergogenic effect of the carbohydrate mouth rinse may relate to the duration of exposure of the oral cavity to the mouth rinse, and associated greater stimulation of oral carbohydrate receptors. The significant increase in peak power output with the carbohydrate mouth rinse may come at a relative cost for the remainder of the sprint, evidenced by non-significantly lower mean power output and a greater fatigue index in the carbohydrate vs. placebo trial. Serial administration of a carbohydrate mouth rinse may be beneficial for

  5. WHO ARE FANS OF FACEBOOK FAN PAGES? AN ELECTRONIC WORD-OF-MOUTH COMMUNICATION PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Xiao Hu

    2014-12-01

    Full Text Available Given its great business value and popularity, Facebook fan pages have attracted more and more attention in both industry and academia. Fans of Facebook fan pages play an important role in electronic word-of-mouth (eWOM) communication. This study focused on the population of fans on Facebook fan pages and examined the differences between fans and non-fans in terms of demographics, social network sites (SNS) use, Internet use, and online shopping behaviors. The results indicated that fans used SNS more frequently than non-fans. Additionally, from the eWOM perspective, the researchers moderated product types in the model of people's word-of-mouth (WOM) preferences and found that people had different preferences for eWOM and traditional WOM for different products. Traditional WOM is still the most important source of information for people when shopping online.

  6. A multigenerational family study of oral and hand motor sequencing ability provides evidence for a familial speech sound disorder subtype

    Science.gov (United States)

    Peter, Beate; Raskind, Wendy H.

    2011-01-01

    Purpose To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results Measures of repetitive and alternating motor speed were correlated within and between the two motor systems. Repetitive and alternating motor speeds increased in children and decreased in adults as a function of age. In two families with children who had severe speech deficits consistent with disrupted praxis, slowed alternating, but not repetitive, oral movements characterized most of the affected children and adults with a history of SSD, and slowed alternating hand movements were seen in some of the biologically related participants as well. Conclusion Results are consistent with a familial motor-based SSD subtype with incomplete penetrance, motivating new clinical questions about motor-based intervention not only in the oral but also the limb system. PMID:21909176

  7. Sound frequency affects speech emotion perception: results from congenital amusia.

    Science.gov (United States)

    Lolli, Sydney L; Lewenstein, Ari D; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.
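
    The low-pass manipulation described above can be sketched with a standard zero-phase Butterworth filter; the study's exact cutoff is not stated here, so the 500 Hz value below is an assumption chosen to leave mainly fundamental-frequency (pitch) cues intact.

```python
# Hedged sketch of low-pass filtering a speech waveform to isolate pitch cues.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def low_pass(speech, fs, cutoff_hz=500.0, order=4):
    """Zero-phase Butterworth low-pass filter of a mono waveform."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, speech)

# Toy signal standing in for a spoken statement: 200 Hz "voicing" plus
# 3 kHz "formant-like" energy; filtering attenuates the latter.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
filtered = low_pass(speech, fs)
```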

  8. What Drives Word of Mouth: A Multi-Disciplinary Perspective

    NARCIS (Netherlands)

    Verlegh, Peeter W J; Moldovan, Sarit

    2008-01-01

    The article presents abstracts on word-of-mouth advertising-related topics which include the different roles of product originality and usefulness in generating word of mouth, understanding the way consumers deal with the tension between authenticity and commercialism in seeded word of mouth

  9. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking

    Science.gov (United States)

    Gauvin, Hanna S.; Hartsuiker, Robert J.; Huettig, Falk

    2013-01-01

    The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. PMID:24339809

  10. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès Alsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  11. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  12. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Full Text Available Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two, covering non-overlapping areas of the auditory cortex, were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time-courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
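
    As a hedged sketch of the intersubject-correlation computation (a common leave-one-out formulation; the paper's exact statistics and thresholding are not reproduced here), each subject's voxel time course is correlated with the mean time course of the remaining subjects:

```python
# Hedged sketch of a voxelwise intersubject-correlation (ISC) map.
import numpy as np

def isc_map(data):
    """data: array of shape (n_subjects, n_timepoints, n_voxels)."""
    n_subj, _, n_vox = data.shape
    isc = np.zeros(n_vox)
    for s in range(n_subj):
        others = np.delete(data, s, axis=0).mean(axis=0)   # leave-one-out mean
        for v in range(n_vox):
            isc[v] += np.corrcoef(data[s, :, v], others[:, v])[0, 1]
    return isc / n_subj

# Toy data: 13 subjects, ~19 min of volumes, 100 voxels, with a shared
# stimulus-driven component plus subject-specific noise.
rng = np.random.default_rng(2)
shared = rng.normal(size=(550, 100))
data = shared + rng.normal(size=(13, 550, 100))
print(isc_map(data).mean())   # clearly above zero for stimulus-driven voxels
```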

  13. Directive and Non-Directive Movement in Child Therapy.

    Science.gov (United States)

    Krason, Katarzyna; Szafraniec, Grazyna

    1999-01-01

    Presents a new authorship method of child therapy based on visualization through motion. Maintains that this method stimulates motor development and musical receptiveness, and promotes personality development. Suggests that improvised movement to music facilitates the projection mechanism and that directed movement starts the channeling phase.…

  14. Motor functions and adaptive behaviour in children with childhood apraxia of speech.

    Science.gov (United States)

    Tükel, Şermin; Björelius, Helena; Henningsson, Gunilla; McAllister, Anita; Eliasson, Ann Christin

    2015-01-01

    Undiagnosed motor and behavioural problems have been reported for children with childhood apraxia of speech (CAS). This study aims to understand the extent of these problems by determining the profile of and relationships between speech/non-speech oral, manual and overall body motor functions and adaptive behaviours in CAS. Eighteen children (five girls and 13 boys) with CAS, 4 years 4 months to 10 years 6 months old, participated in this study. The assessments used were the Verbal Motor Production Assessment for Children (VMPAC), Bruininks-Oseretsky Test of Motor Proficiency (BOT-2) and Adaptive Behaviour Assessment System (ABAS-II). Median result of speech/non-speech oral motor function was between -1 and -2 SD of the mean VMPAC norms. For BOT-2 and ABAS-II, the median result was between the mean and -1 SD of test norms. However, on an individual level, many children had co-occurring difficulties (below -1 SD of the mean) in overall and manual motor functions and in adaptive behaviour, despite few correlations between sub-tests. In addition to the impaired speech motor output, children displayed heterogeneous motor problems suggesting the presence of a global motor deficit. The complex relationship between motor functions and behaviour may partly explain the undiagnosed developmental difficulties in CAS.

  15. A machine-hearing system exploiting head movements for binaural sound localisation in reverberant conditions

    DEFF Research Database (Denmark)

    May, Tobias; Ma, Ning; Wierstorf, Hagen

    2015-01-01

    This paper is concerned with machine localisation of multiple active speech sources in reverberant environments using two (binaural) microphones. Such conditions typically present a problem for ‘classical’ binaural models. Inspired by the human ability to utilise head movements, the current study...

  16. THE MANAGEMENT OF LIMITED MANDIBULAR MOVEMENT CAUSED BY CONDYLAR FRACTURE WITH REPOSITIONING SPLINT

    Directory of Open Access Journals (Sweden)

    Ira Tanti

    2015-06-01

    Full Text Available Fractures of the neck of the condyle usually result from a blow to the mandible; a lateral blow to the body of the mandible commonly causes a contralateral condyle fracture. There are many signs and symptoms of a condylar fracture, for example crepitation, deviation of the mandible to the side of injury, and spasm of the associated muscle group. These result in a functional disability, usually seen as limited mandibular movement. This paper reports a patient with a fracture of the condylar neck. The patient had been treated with closed reduction and immobilization for 2 months. After that, she felt that her bite had changed, she could not occlude her teeth well, and she had a clicking sound in the right joint when she opened her mouth. In addition, the patient had difficulty moving the mandible to the left side and could not open her mouth widely. The patient was treated with a repositioning splint and jaw exercises. The purposes were to regain the position of the condyle, to reduce the muscle spasm, and finally to restore normal jaw movement.

  17. New tests of the distal speech rate effect: Examining cross-linguistic generalization

    Directory of Open Access Journals (Sweden)

    Laura eDilley

    2013-12-01

    Recent findings [Dilley and Pitt, 2010. Psych. Science. 21, 1664-1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In Experiment 1, native Russian speakers listened to Russian sentences which had been subjected to rate manipulations and performed a lexical report task. Experiment 2 investigated speech rate effects in cross-language speech perception; non-native speakers of Russian of both high and low proficiency were tested on the same Russian sentences as in Experiment 1. They decided between two lexical interpretations of a critical portion of the sentence, where one choice contained more phonological material than the other (e.g., /stərʌ'na/ side vs. /strʌ'na/ country). In both experiments, with native and non-native speakers of Russian, context speech rate and the relative duration of the critical sentence portion were found to influence the amount of phonological material perceived. The results support the generalized rate normalization hypothesis, according to which the content perceived in a spectrally ambiguous stretch of speech depends on the duration of that content relative to the surrounding speech, while showing that the findings of Dilley and Pitt (2010) extend to a variety of morphosyntactic contexts and a new language, Russian. Findings indicate that relative timing cues across an utterance can be critical to accurate lexical perception by both native and non-native speakers.

  18. Automatic Smoker Detection from Telephone Speech Signals

    DEFF Research Database (Denmark)

    Poorjam, Amir Hossein; Hesaraki, Soheila; Safavi, Saeid

    2017-01-01

    This paper proposes a method for automatic detection of smoking habits from spontaneous telephone speech signals. In this method, each utterance is modeled using i-vector and non-negative factor analysis (NFA) frameworks, which yield low-dimensional representations of utterances by applying factor analysis... The method is evaluated on telephone speech signals of speakers whose smoking habits are known, drawn from the National Institute of Standards and Technology (NIST) 2008 and 2010 Speaker Recognition Evaluation databases. Experimental results over 1194 utterances show the effectiveness of the proposed approach... for the automatic smoking habit detection task.
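
    A faithful i-vector extractor needs a trained universal background model, so the sketch below stands in for the described pipeline with simpler parts: fixed-length utterance features, scikit-learn's FactorAnalysis for the low-dimensional representation, and logistic regression for the binary smoker/non-smoker decision. Every array, dimension, and component count here is a placeholder assumption, not the authors' configuration.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # One fixed-length feature vector per utterance (e.g. stacked means and
      # standard deviations of frame-level spectral features), with 0/1 labels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(1194, 78))         # 1194 utterances, as in the record
      y = rng.integers(0, 2, size=1194)       # placeholder smoking labels

      clf = make_pipeline(
          FactorAnalysis(n_components=40),    # low-dimensional representation
          LogisticRegression(max_iter=1000),  # smoker / non-smoker decision
      )
      clf.fit(X[:1000], y[:1000])
      print(clf.score(X[1000:], y[1000:]))    # held-out accuracy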

  19. Formulation and evaluation of aceclofenac mouth-dissolving tablet

    Directory of Open Access Journals (Sweden)

    Shailendra Singh Solanki

    2011-01-01

    Aceclofenac has been shown to have potent analgesic and anti-inflammatory activities similar to indomethacin and diclofenac, and due to its preferential Cox-2 blockade, it has a better safety profile than conventional non-steroidal anti-inflammatory drugs (NSAIDs) with respect to adverse effects on the gastrointestinal and cardiovascular systems. Aceclofenac is superior to other NSAIDs as it has selectivity for Cox-2, is well tolerated, and has better gastrointestinal (GI) tolerability and improved cardiovascular safety when compared with other selective Cox-2 inhibitors. To provide the patient with the most convenient mode of administration, there is a need to develop a fast-disintegrating dosage form, particularly one that disintegrates and dissolves/disperses in saliva and can be administered without water, anywhere, any time. Such tablets are also called "melt in mouth" tablets. Direct compression, freeze drying, sublimation, spray drying, tablet molding, disintegrant addition, and use of sugar-based excipients are technologies available for mouth-dissolving tablets. Mouth-dissolving tablets of aceclofenac were prepared with two different techniques, wet granulation and direct compression, in which different formulations were prepared with varying concentrations of excipients. These tablets were evaluated for their friability, hardness, wetting time, and disintegration time; the drug release profile was studied in phosphate-buffered saline (PBS, pH 7.4). Direct compression batch C3 gave far better dissolution than wet granulation batch F2: F2 released only 75.37% of the drug, whereas C3 released 89.69% in 90 minutes.

  20. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    Science.gov (United States)

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
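
    A minimal sketch of the stimulus manipulation described above, assuming oracle access to the separate target and interferer signals and an STFT-based notion of local time-frequency SNR; the window length and criterion value are illustrative.

      import numpy as np
      from scipy.signal import stft, istft

      def discard_low_snr_components(target, masker, fs, criterion_db=0.0):
          # STFT both signals onto the same grid of time-frequency units.
          _, _, tgt = stft(target, fs, nperseg=512)
          _, _, msk = stft(masker, fs, nperseg=512)
          local_snr = 10 * np.log10(np.abs(tgt) ** 2 / (np.abs(msk) ** 2 + 1e-12))
          tgt[local_snr < criterion_db] = 0.0   # remove sub-criterion target units
          _, kept = istft(tgt, fs, nperseg=512)
          return kept                           # re-mix with the intact masker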

  1. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  2. The speech choir in central European theatres and literary-musical works in the first third of the 20th century

    Directory of Open Access Journals (Sweden)

    Meyer-Kalkus Reinhart

    2015-01-01

    Speech choirs emerged as an offshoot of the choral gatherings of a wider youth musical and singing movement in the first half of the 20th century. The occasionally expressed opinion that choral speaking was cultivated primarily by the Hitler Youth and pressed into service on behalf of Nazi nationalist and racist propaganda is, historically, only partially accurate. The primary forces of choral speaking in Germany were, from 1919, the Social Democratic workers’ and cultural movement and the Catholic youth groups, in addition to elementary and secondary schools. The popularity of speech choirs around 1930 was also echoed in the music of the time. Compositions for musical speech choirs were produced by composers like Heinz Thiessen, Arnold Schönberg, Ernst Toch, Carl Orff, Vladimir Vogel, Luigi Nono, Helmut Lachenmann and Wolfgang Rihm. Moving forward from the Schönberg School, the post-1945 new music thereby opens up the spectrum of vocal expressions of sound beyond that of the singing voice. It does so not only for solo voices but for the choir as well.

  3. Association between maximal hamstring strength and hamstring muscle pre-activity during a movement associated with non-contact ACL injury

    DEFF Research Database (Denmark)

    Skov Husted, Rasmus; Bencke, Jesper; Thorborg, Kristian

    2014-01-01

    Introduction Reduced hamstring pre-activity during side-cutting may predispose for non-contact ACL injury. During the last decade resistance training of the lower limb muscles has become an integral part of ACL injury prevention in e.g. soccer and handball. However, it is not known whether a strong...... hamstring (ACL-agonist) musculature is associated with a high level of hamstring muscle pre-activity during high risk movements such as side-cutting. The purpose of this study was to examine the relationship between hamstring muscle pre-activity recorded during a standardized sidecutting maneuver...... translate into high levels of muscle pre-activity during movements like the sidecutting maneuver. Thus, other exercise modalities (i.e. neuromuscular training) are needed to optimize hamstring muscle pre-activity during movements associated with non-contact ACL injury....

  4. Machine learning classification of medication adherence in patients with movement disorders using non-wearable sensors.

    Science.gov (United States)

    Tucker, Conrad S; Behoora, Ishan; Nembhard, Harriet Black; Lewis, Mechelle; Sterling, Nicholas W; Huang, Xuemei

    2015-11-01

    Medication non-adherence is a major concern in the healthcare industry and has led to increases in health risks and medical costs. For many neurological diseases, adherence to medication regimens can be assessed by observing movement patterns. However, physician observations are typically assessed based on visual inspection of movement and are limited to clinical testing procedures. Consequently, medication adherence is difficult to measure when patients are away from the clinical setting. The authors propose a data mining driven methodology that uses low cost, non-wearable multimodal sensors to model and predict patients' adherence to medication protocols, based on variations in their gait. The authors conduct a study involving Parkinson's disease patients that are "on" and "off" their medication in order to determine the statistical validity of the methodology. The data acquired can then be used to quantify patients' adherence while away from the clinic. Accordingly, this data-driven system may allow for early warnings regarding patient safety. Using whole-body movement data readings from the patients, the authors were able to discriminate between PD patients on and off medication, with accuracies greater than 97% for some patients using an individually customized model and accuracies of 78% for a generalized model containing multiple patient gait data. The proposed methodology and study demonstrate the potential and effectiveness of using low cost, non-wearable hardware and data mining models to monitor medication adherence outside of the traditional healthcare facility. These innovations may allow for cost effective, remote monitoring of treatment of neurological diseases. Copyright © 2015 Elsevier Ltd. All rights reserved.
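
    As a hedged sketch of the data-mining step (not the authors' exact models), the snippet below fits an "individually customized" classifier to one patient's per-window gait features and reports cross-validated on/off-medication accuracy; the features and labels are synthetic placeholders.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Hypothetical per-window gait features from a non-wearable sensor,
      # e.g. stride length, stride time, sway, and joint-angle statistics.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(400, 12))        # 400 windows, 12 gait features
      y = rng.integers(0, 2, size=400)      # 1 = "on" medication, 0 = "off"

      # One patient's data only, i.e. an individually customized model.
      model = RandomForestClassifier(n_estimators=200, random_state=0)
      print(cross_val_score(model, X, y, cv=5).mean())   # on/off accuracy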

  5. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the movie Up. Central to the analysis is the form of dialogue in Up that contains speech functions and speech roles, i.e., statement, offer, question, command, giving, and demanding. 269 dialogues performed by the actors were interpreted, and the use of these speech functions and speech roles was identified.

  6. Improving the speech intelligibility in classrooms

    Science.gov (United States)

    Lam, Choi Ling Coriolanus

    One of the major acoustical concerns in classrooms is the establishment of effective verbal communication between teachers and students. Non-optimal acoustical conditions, resulting in reduced verbal communication, can cause two main problems. First, they can reduce learning efficiency. Second, they can cause fatigue, stress, vocal strain and health problems, such as headaches and sore throats, among teachers who are forced to compensate for poor acoustical conditions by raising their voices. In addition, inadequate acoustical conditions can encourage the use of public address systems, and improper use of such amplifiers or loudspeakers can impair students' hearing. The social costs of poor classroom acoustics are therefore large, because they impair children's learning. This invisible problem has far-reaching implications for learning, but is easily solved. Much research has been carried out that accurately and concisely summarizes the findings on classroom acoustics, yet a number of challenging questions remain unanswered. Most objective indices for speech intelligibility are essentially based on studies of western languages; although several studies of tonal languages such as Mandarin have been conducted, there is much less work on Cantonese. In this research, measurements were made in unoccupied rooms to investigate the acoustical parameters and characteristics of the classrooms. Speech intelligibility tests based on English, Mandarin and Cantonese, together with a survey, were carried out on students aged from 5 to 22 years. The study aims to investigate the differences in intelligibility between English, Mandarin and Cantonese in Hong Kong classrooms. The relationship between the speech transmission index (STI) and Phonetically Balanced (PB) word scores is further developed, together with an empirical relationship between speech intelligibility in classrooms and the variations

  7. Influence of mandibular length on mouth opening

    NARCIS (Netherlands)

    Dijkstra, PU; Hof, AL; Stegenga, B; De Bont, LGM

    Theoretically, mouth opening not only reflects the mobility of the temporomandibular joints (TMJs) but also the mandibular length. Clinically, the exact relationship between mouth opening, mandibular length, and mobility of TMJs is unclear. To study this relationship 91 healthy subjects, 59 women

  8. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
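
    The indices compared above share one skeleton: a per-band apparent SNR is clipped, mapped linearly to a transmission index, and summed with band weights. A minimal sketch of that computation follows; the octave-band weights are illustrative only, since the normative values depend on the standard revision and the talker.

      import numpy as np

      def sti_from_band_snr(snr_db, weights):
          # Clip each band's apparent SNR to [-15, +15] dB, map it linearly
          # to a 0..1 transmission index, and form the weighted band sum.
          snr = np.clip(np.asarray(snr_db, dtype=float), -15.0, 15.0)
          ti = (snr + 15.0) / 30.0
          return float(np.dot(weights, ti))

      # Seven octave bands, 125 Hz .. 8 kHz; weights sum to 1 (illustrative).
      weights = np.array([0.13, 0.14, 0.11, 0.12, 0.19, 0.17, 0.14])
      print(sti_from_band_snr([9, 6, 3, 0, -3, -6, -9], weights))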

  9. Burning mouth syndrome: an enigmatic disorder.

    Science.gov (United States)

    Javali, M A

    2013-01-01

    Burning mouth syndrome (BMS) is a chronic oral pain or burning sensation affecting the oral mucosa, often unaccompanied by mucosal lesions or other evident clinical signs. It is observed principally in middle-aged patients and postmenopausal women and may be accompanied by xerostomia and altered taste. Burning mouth syndrome is characterized by an intense burning or stinging sensation, most often on the tongue or in other areas of the mouth. It is one of the most common disorders encountered in clinical practice. The condition is probably of multifactorial origin; however, the exact underlying etiology remains uncertain. This article discusses several aspects of BMS, updates current knowledge about its etiopathogenesis, and describes the clinical features as well as the diagnosis and management of BMS patients.

  10. Dermoid cyst in the mouth floor

    International Nuclear Information System (INIS)

    Portelles Masso, Ayelen Maria; Torres Inniguez, Ailin Tamara.

    2010-01-01

    Dermoid cysts account for 0.01% of all cysts of the buccal cavity. Their most frequent location is the floor of the mouth. We present the case of a 19-year-old female patient who, approximately 7 years earlier, noted a gradually growing swelling under the tongue that became externally visible and caused discomfort when speaking and chewing. Complementary studies were conducted, and under general anesthesia a surgical excision was carried out by an intrabuccal approach, achieving excellent esthetic and functional results. Histopathologic diagnosis confirmed a dermoid cyst of the floor of the mouth. The patient has had no lesion recurrence three years after the operation. We conclude that dermoid cysts of the floor of the mouth appear as benign tumors of the midline. Intrabuccal excision demonstrates esthetic and functional benefits. (author)

  11. FCJ-170 Challenging Hate Speech With Facebook Flarf: The Role of User Practices in Regulating Hate Speech on Facebook

    Directory of Open Access Journals (Sweden)

    Benjamin Abraham

    2014-12-01

    This article makes a case study of ‘flarfing’ (a creative Facebook user practice with roots in found-text poetry) in order to contribute to an understanding of the potentials and limitations facing users of online social networking sites who wish to address the issue of online hate speech. The practice of ‘flarfing’ involves users posting ‘blue text’ hyperlinked Facebook page names into status updates and comment threads. Facebook flarf sends a visible, though often non-literal, message to offenders and onlookers about what kinds of speech the responding activist(s) find (un)acceptable in online discussion, belonging to a category of agonistic online activism that repurposes the tools of internet trolling for activist ends. I argue this practice represents users attempting to ‘take responsibility’ for the culture of online spaces they inhabit, promoting intolerance of hate speech online. Careful consideration of the limits of flarf's efficacy within Facebook’s specific regulatory environment shows the extent to which this practice and similar responses to online hate speech are constrained by the platforms on which they exist.

  12. Comparison of physical chewing measures to consumer typed Mouth Behavior.

    Science.gov (United States)

    Wilson, Arran; Jeltema, Melissa; Morgenstern, Marco P; Motoi, Lidia; Kim, Esther; Hedderley, Duncan

    2018-02-15

    The purpose of this study was to investigate two hypotheses about foods that can be chewed in different ways: (1) are participants' jaw movements and chewing sequence measures correlated with Mouth Behavior (MB) group, as measured by the JBMB typing tool, and (2) can MB group membership be predicted from jaw movement and chewing sequence measures? One hundred subjects (69 female and 31 male, mean age 27 ± 7.7 years) were given four different foods (Mentos, Walkers, Cheetos Puffs, Twix) and video recordings of their jaw movements were made. Twenty-nine parameters were calculated for each chewing sequence, with 27 also calculated for the first and second halves of the sequence. Subjects were assigned to a MB group using the JBMB typing tool, which gives four MB groups ("Chewers," "Crunchers," "Smooshers," and "Suckers"). The differences between individual chewing parameters and MB group were assessed with analysis of variance, which showed only small differences in average chewing parameters between the MB groups. Using discriminant analysis, it was possible to partially discriminate between MB groups based on changes in their chewing parameters between foods with different material properties and stages of chewing. A 19-variable model correctly predicted 68% of the subjects' membership of a MB group. This partially confirms our first hypothesis that, when presented with foods that could be chewed in different ways, participants will use a chewing sequence and jaw movements that correlate with their MB as measured by the JBMB typing tool. The way consumers chew their food has an impact on their texture perception of that food. While there is a wide range of chewing behaviors between consumers, they can be grouped into broad categories to better target both product design and product testing by sensory panel. In this study, consumers who were grouped on their texture preference (MB group) had jaw movements, when chewing a range of foods, which
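
    A hedged sketch of the classification step, with scikit-learn's linear discriminant analysis standing in for whatever discriminant procedure the authors used, one row of stacked chewing parameters per subject, and synthetic placeholder data throughout.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      # Hypothetical data: per-subject chewing parameters (chews per sequence,
      # chew rate, jaw-opening amplitude, ...) across foods and chewing stages,
      # stacked into 19 predictors as in the reported model.
      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 19))                   # 100 subjects
      y = rng.integers(0, 4, size=100)                 # 4 Mouth Behavior groups

      lda = LinearDiscriminantAnalysis()
      print(cross_val_score(lda, X, y, cv=5).mean())   # MB-group accuracy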

  13. Differential recognition of pitch patterns in discrete and gliding stimuli in congenital amusia: evidence from Mandarin speakers.

    Science.gov (United States)

    Liu, Fang; Xu, Yi; Patel, Aniruddh D; Francart, Tom; Jiang, Cunmei

    2012-08-01

    This study examined whether "melodic contour deafness" (insensitivity to the direction of pitch movement) in congenital amusia is associated with specific types of pitch patterns (discrete versus gliding pitches) or stimulus types (speech syllables versus complex tones). Thresholds for identification of pitch direction were obtained using discrete or gliding pitches in the syllable /ma/ or its complex tone analog, from nineteen amusics and nineteen controls, all healthy university students with Mandarin Chinese as their native language. Amusics, unlike controls, had more difficulty recognizing pitch direction in discrete than in gliding pitches, for both speech and non-speech stimuli. Also, amusic thresholds were not significantly affected by stimulus types (speech versus non-speech), whereas controls showed lower thresholds for tones than for speech. These findings help explain why amusics have greater difficulty with discrete musical pitch perception than with speech perception, in which continuously changing pitch movements are prevalent. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Magnetoencephalography (MEG): perspectives on functional mapping of speech areas in human subjects

    Directory of Open Access Journals (Sweden)

    Butorina A. V.

    2012-06-01

    One of the main problems in clinical practice and academic research is how to localize speech zones in the human brain. Two speech areas (the Broca and Wernicke areas), responsible for language production and for understanding of written and spoken language, have been known since the past century. Their location and even hemispheric lateralization show substantial inter-individual variability, especially in neurosurgery patients. The Wada test is one of the most frequently used invasive methodologies for speech hemispheric lateralization in neurosurgery patients. However, besides the relatively high risk of the Wada test to the patient's health, it has its own limitations, e.g., the low reliability of Wada-based evidence of verbal memory lateralization. Therefore, there is an urgent need for non-invasive, reliable methods of speech zone mapping. The current review summarizes recent experimental evidence from magnetoencephalographic (MEG) research suggesting that speech areas are involved in speech processing within the first 200 ms after word onset. The electromagnetic response to a deviant word, the mismatch negativity wave with a latency of 100-200 ms, can be recorded from the auditory cortex within the oddball paradigm. We provide arguments that the basic features of this brain response, such as its automatic, pre-attentive nature, high signal-to-noise ratio, and source localization at the superior temporal sulcus, make it a promising vehicle for non-invasive MEG-based speech area mapping in neurosurgery.

  15. Start/End Delays of Voiced and Unvoiced Speech Signals

    Energy Technology Data Exchange (ETDEWEB)

    Herrnstein, A

    1999-09-24

    Recent experiments using low-power EM-radar-like sensors (e.g., GEMs) have demonstrated a new method for measuring vocal fold activity and the onset times of voiced speech, as vocal fold contact begins to take place. Similarly, the end time of a voiced speech segment can be measured. Secondly, it appears that in most normal uses of American English speech, unvoiced-speech segments directly precede or directly follow voiced-speech segments. For many applications, it is useful to know typical duration times of these unvoiced speech segments. A corpus of spoken "Timit" words, phrases, and sentences, assembled earlier and recorded using simultaneously measured acoustic and EM-sensor glottal signals from 16 male speakers, was used for this study. By inspecting the onset (or end) of unvoiced speech using the acoustic signal, and the onset (or end) of voiced speech using the EM sensor signal, the average duration times for unvoiced segments preceding onset of vocalization were found to be 300 ms, and for following segments, 500 ms. An unvoiced speech period is then defined in time, first by using the onset of the EM-sensed glottal signal as the onset-time marker for the voiced speech segment and the end marker for the unvoiced segment. Then, by subtracting 300 ms from the onset time mark of voicing, the unvoiced speech segment start time is found. Similarly, the times for a following unvoiced speech segment can be found. While data of this nature have proven to be useful for work in our laboratory, a great deal of additional work remains to validate such data for use with general populations of users. These procedures have been useful for applying optimal processing algorithms over time segments of unvoiced, voiced, and non-speech acoustic signals. For example, these data appear to be of use in speaker validation, in vocoding, and in denoising algorithms.
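
    The timing procedure reduces to simple arithmetic on the EM-sensed voicing boundaries; a minimal sketch, using the average 300 ms preceding and 500 ms following durations reported above as defaults:

      def unvoiced_windows(voiced_segments, pre_s=0.300, post_s=0.500):
          # voiced_segments: (onset, end) times in seconds from the EM sensor.
          # Returns estimated windows for the unvoiced speech that precedes
          # and follows each voiced segment.
          windows = []
          for onset, end in voiced_segments:
              windows.append((max(0.0, onset - pre_s), onset))  # preceding
              windows.append((end, end + post_s))               # following
          return windows

      # One voiced segment from 1.20 s to 1.85 s:
      print(unvoiced_windows([(1.20, 1.85)]))
      # [(0.9, 1.2), (1.85, 2.35)]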

  16. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  17. Technology assisted speech and language therapy.

    Science.gov (United States)

    Glykas, Michael; Chytas, Panagiotis

    2004-06-30

    Speech and language therapists (SLTs) are faced daily with a diversity of speech and language disabilities, which are associated with a variety of conditions ranging from client groups with overall cognitive deficits to those with more specific difficulties. It is desirable that those working with such a range of problems and with such a demanding workload plan care efficiently. Therefore, the introduction of methodologies, reference models of work, and tools that significantly improve the effectiveness of therapy is particularly welcome. This paper describes the first web-based tool for diagnosis, treatment and e-Learning in the field of language and speech therapy. The system allows SLTs to find the optimum treatment for each patient; it also allows any non-specialist user (SLT, patient, or helper such as a relative) to explore their creativity by designing their own communication aid in an interactive manner, using configuration and vocabulary editors. The system has been tested and piloted by potential users in Greece and the UK.

  18. The Effect of Onset Asynchrony in Audio Visual Speech and the Uncanny Valley in Virtual Characters

    DEFF Research Database (Denmark)

    Tinwell, Angela; Grimshaw, Mark; Abdel Nabi, Deborah

    2015-01-01

    This study investigates whether the Uncanny Valley phenomenon is increased for realistic, human-like characters with an asynchrony of lip movement during speech. An experiment was conducted in which 113 participants rated a human and a realistic, talking-head, human-like virtual character over a ran...

  19. Vocal Performance and Speech Intonation: Bob Dylan’s “Like a Rolling Stone”

    Directory of Open Access Journals (Sweden)

    Michael Daley

    2007-03-01

    This article proposes a linguistic analysis of a recorded performance of a single verse of one of Dylan’s most popular songs (the originally released studio recording of “Like A Rolling Stone”) and describes more specifically the ways in which intonation relates to lyrics and performance. This analysis is used as source material for a close reading of the semantic, affective, and “playful” meanings of the performance, and is compared with some published accounts of the song’s reception. The author has drawn on the linguistic methodology formulated by Michael Halliday, who has found speech intonation (which includes pitch movement, timbre, syllabic rhythm, and loudness) to be an integral part of English grammar and crucial to the transmission of certain kinds of meaning. Speech intonation is a deeply rooted and powerfully meaningful aspect of human communication. This article argues that it is plausible that a system so powerful in speech might have some bearing on the communication of meaning in sung performance.

  20. WORD OF MOUTH AS A CONSEQUENCE OF CUSTOMER SATISFACTION

    Directory of Open Access Journals (Sweden)

    Eny Purbandari

    2018-03-01

    The objective of this study is to investigate the impact of price and service quality on customer satisfaction as a way to increase word of mouth. Data were collected by distributing questionnaires to 110 patients of Bhayangkara Polda DIY Hospital. The data were then analyzed using structural equation modeling. The results showed that service quality, price, and image have a positive effect on patient satisfaction, and that patient satisfaction has a positive effect on word of mouth. The results also show that image has the strongest effect in creating satisfaction. The proposed word-of-mouth model is therefore acceptable.

  1. Mouth cancer in inflammatory bowel diseases.

    Science.gov (United States)

    Giagkou, E; Christodoulou, D K; Katsanos, K H

    2016-05-01

    Mouth cancer is a major health problem. Multiple risk factors for developing mouth cancer have been studied and include a history of tobacco and alcohol abuse, age over 40, exposure to ultraviolet radiation, human papilloma virus (HPV) infection, nutritional deficiencies, chronic irritation, and the existence of oral potentially malignant lesions such as leukoplakia and lichen planus. An important risk factor for mouth cancer is chronic immunosuppression, which has been extensively reported after solid organ transplantation as well as in HIV-infected patients. Diagnosis of inflammatory bowel disease (IBD) is not yet considered a risk factor for oral cancer development. However, a significant number of patients with IBD are receiving immunosuppressants and biological therapies, which could represent potential oral oncogenic factors either by a direct oncogenic effect or by continuous immunosuppression favoring carcinogenesis, especially in patients with HPV(+) IBD. Education on modifiable risk behaviors in patients with IBD is the cornerstone of prevention of mouth cancer. Oral screening should be performed for all patients with IBD, especially those who are about to start an immunosuppressant or a biologic. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  3. Processing melodic contour and speech intonation in congenital amusics with Mandarin Chinese.

    Science.gov (United States)

    Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Yang, Yufang

    2010-07-01

    Congenital amusia is a disorder in the perception and production of musical pitch. It has been suggested that early exposure to a tonal language may compensate for the pitch disorder (Peretz, 2008). If so, it is reasonable to expect that there would be different characterizations of pitch perception in music and speech in congenital amusics who speak a tonal language, such as Mandarin. In this study, a group of 11 adults with amusia whose first language was Mandarin were tested with melodic contour and speech intonation discrimination and identification tasks. The participants with amusia were impaired in discriminating and identifying melodic contour. These abnormalities were also detected in identifying both speech and non-linguistic analogue derived patterns for the Mandarin intonation tasks. In addition, there was an overall trend for the participants with amusia to show deficits with respect to controls in the intonation discrimination tasks for both speech and non-linguistic analogues. These findings suggest that the amusics' melodic pitch deficits may extend to the perception of speech, and could potentially result in some language deficits in those who speak a tonal language. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  4. Nuclear movement regulated by non-Smad Nodal signaling via JNK is associated with Smad signaling during zebrafish endoderm specification.

    Science.gov (United States)

    Hozumi, Shunya; Aoki, Shun; Kikuchi, Yutaka

    2017-11-01

    Asymmetric nuclear positioning is observed during animal development, but its regulation and significance in cell differentiation remain poorly understood. Using zebrafish blastulae, we provide evidence that nuclear movement towards the yolk syncytial layer, which comprises extraembryonic tissue, occurs in the first cells fated to differentiate into the endoderm. Nodal signaling is essential for nuclear movement, whereas nuclear envelope proteins are involved in movement through microtubule formation. Positioning of the microtubule-organizing center, which is proposed to be crucial for nuclear movement, is regulated by Nodal signaling and nuclear envelope proteins. The non-Smad JNK signaling pathway, which is downstream of Nodal signaling, regulates nuclear movement independently of the Smad pathway, and this nuclear movement is associated with Smad signal transduction toward the nucleus. Our study provides insight into the function of nuclear movement in Smad signaling toward the nucleus, and could be applied to the control of TGFβ signaling. © 2017. Published by The Company of Biologists Ltd.

  5. Phonological processes in the speech of school-age children with hearing loss: Comparisons with children with normal hearing.

    Science.gov (United States)

    Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline

    2018-04-27

    In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years; months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a similar trend of age of elimination to CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing their speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering. Evaluation tools that can help identify and diagnose speech disorders include the Denver Articulation Screening Examination and the Goldman-Fristoe Test of ...

  7. Differential replication of foot-and-mouth disease viruses in mice determine lethality

    Science.gov (United States)

    Adult C57BL/6J mice have been used to study foot-and-mouth disease virus (FMDV) biology. In this work, two variants of an FMDV A/Arg/01 strain exhibiting differential pathogenicity in adult mice were identified and characterized: a non-lethal virus (A01NL) caused mild signs of disease, whereas a let...

  8. Telling stories: opportunities for word-of-mouth communication.

    OpenAIRE

    Cownie, Fiona

    2017-01-01

    Word-of-mouth is an important aspect of marketing communications and can be conceived as the story-telling of everyday life. This working paper suggests that marketing communicators’ understanding of word-of-mouth might usefully be enhanced by the consideration of the tools of the screenwriter, in particular the premise and the active question. The jeopardy of the premise and unresolved nature of the active questions the premise generates may contribute to the potency of word-of-mouth message...

  9. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  10. Imaging for understanding speech communication: Advances and challenges

    Science.gov (United States)

    Narayanan, Shrikanth

    2005-04-01

    Research in speech communication has relied on a variety of instrumentation methods to illuminate details of speech production and perception. One longstanding challenge has been the ability to examine real-time changes in the shaping of the vocal tract; a goal that has been furthered by imaging techniques such as ultrasound, movement tracking, and magnetic resonance imaging. The spatial and temporal resolution afforded by these techniques, however, has limited the scope of the investigations that could be carried out. In this talk, we focus on some recent advances in magnetic resonance imaging that allow us to perform near real-time investigations on the dynamics of vocal tract shaping during speech. Examples include Demolin et al. (2000) (4-5 images/second, ultra-fast turbo spin echo) and Mady et al. (2001,2002) (8 images/second, T1 fast gradient echo). A recent study by Narayanan et al. (2004) that used a spiral readout scheme to accelerate image acquisition has allowed for image reconstruction rates of 24 images/second. While these developments offer exciting prospects, a number of challenges lie ahead, including: (1) improving image acquisition protocols, hardware for enhancing signal-to-noise ratio, and optimizing spatial sampling; (2) acquiring quality synchronized audio; and (3) analyzing and modeling image data including cross-modality registration. [Work supported by NIH and NSF.]

  11. The Influence of Psycholinguistic Variables on Articulatory Errors in Naming in Progressive Motor Speech Degeneration

    Science.gov (United States)

    Code, Chris; Tree, Jeremy; Ball, Martin

    2011-01-01

    We describe an analysis of speech errors on a confrontation naming task in a man with progressive speech degeneration of 10-year duration from Pick's disease. C.S. had a progressive non-fluent aphasia together with a motor speech impairment and early assessment indicated some naming impairments. There was also an absence of significant…

  12. Restricted Mandibular Movement Attributed to Ossification of Mandibular Depressors and Medial Pterygoid Muscles in Patients With Fibrodysplasia Ossificans Progressiva: A Report of 3 Cases.

    Science.gov (United States)

    Okuno, Tetsuko; Suzuki, Hitoshi; Inoue, Akio; Kusukawa, Jingo

    2017-09-01

    Fibrodysplasia ossificans progressiva (FOP) is an extremely rare genetic condition characterized by congenital malformation and progressive heterotopic ossification (HO) caused by a recurrent single nucleotide substitution at position 617 in the ACVR1 gene. As the condition progresses, HO leads to joint ankylosis, breathing difficulties, and mouth-opening restriction, and it can shorten the patient's lifespan. This report describes 3 cases of FOP confirmed by genetic testing in patients with restricted mouth opening. Each patient presented a different onset and degree of jaw movement restriction. The anatomic ossification site of the mandibular joint was examined in each patient using reconstructed computed tomographic (CT) images and 3-dimensional reconstructed CT (3D-CT) images. A 29-year-old woman complained of jaw movement restriction since 13 years of age. 3D-CT image of the mandibular joint showed an osseous bridge, formed by the mandibular depressors that open the mouth, between the hyoid bone and the mentum of the mandible. A 39-year-old man presented with jaw movement restriction that developed at 3 years of age after a mouth injury. 3D-CT image of the jaw showed ankylosis of the jaw from ossification of the mandibular depressors that was worse than in patient 1. CT images showed no HO findings of the masticatory muscles. To the authors' knowledge, these are the first 2 case descriptions of the anatomic site of ankylosis involving HO of the mandibular depressors in the jaw resulting from FOP. In contrast, a 62-year-old bedridden woman with an interincisal distance longer than 10 mm (onset, 39 years of age) had no HO of the mandibular depressors and slight HO of the medial pterygoid muscle on the right and left sides. These findings suggest that restricted mouth opening varies according to the presence or absence of HO of the mandibular depressors. Copyright © 2017. Published by Elsevier Inc.

  13. The attention-getting capacity of whines and child-directed speech.

    Science.gov (United States)

    Chang, Rosemarie Sokol; Thompson, Nicholas S

    2010-06-03

    The current study tested the ability of whines and child-directed speech to attract the attention of listeners involved in a story repetition task. Twenty non-parents and 17 parents were presented with two dull stories, each playing to a separate ear, and asked to repeat one of the stories verbatim. The story that participants were instructed to ignore was interrupted occasionally with the reader whining and using child-directed speech. While repeating the passage, participants were monitored for Galvanic skin response, heart rate, and blood pressure. Based on 4 measures, participants tuned in more to whining, and to a lesser extent child-directed speech, than neutral speech segments that served as a control. Participants, regardless of gender or parental status, made more mistakes when presented with the whine or child-directed speech, they recalled hearing those vocalizations, they recognized more words from the whining segment than the neutral control segment, and they exhibited higher Galvanic skin response during the presence of whines and child-directed speech than neutral speech segments. Whines and child-directed speech appear to be integral members of a suite of vocalizations designed to get the attention of attachment partners by playing to an auditory sensitivity among humans. Whines in particular may serve the function of eliciting care at a time when caregivers switch from primarily mothers to greater care from other caregivers.

  14. The Attention-Getting Capacity of Whines and Child-Directed Speech

    Directory of Open Access Journals (Sweden)

    Rosemarie Sokol Chang

    2010-04-01

    The current study tested the ability of whines and child-directed speech to attract the attention of listeners involved in a story repetition task. Twenty non-parents and 17 parents were presented with two dull stories, each playing to a separate ear, and asked to repeat one of the stories verbatim. The story that participants were instructed to ignore was interrupted occasionally with the reader whining and using child-directed speech. While repeating the passage, participants were monitored for Galvanic skin response, heart rate, and blood pressure. Based on 4 measures, participants tuned in more to whining, and to a lesser extent child-directed speech, than neutral speech segments that served as a control. Participants, regardless of gender or parental status, made more mistakes when presented with the whine or child-directed speech, they recalled hearing those vocalizations, they recognized more words from the whining segment than the neutral control segment, and they exhibited higher Galvanic skin response during the presence of whines and child-directed speech than neutral speech segments. Whines and child-directed speech appear to be integral members of a suite of vocalizations designed to get the attention of attachment partners by playing to an auditory sensitivity among humans. Whines in particular may serve the function of eliciting care at a time when caregivers switch from primarily mothers to greater care from other caregivers.

  15. The Galker test of speech reception in noise

    DEFF Research Database (Denmark)

    Lauritsen, Maj-Britt Glenn; Söderström, Margareta; Kreiner, Svend

    2016-01-01

    PURPOSE: We tested "the Galker test", a speech reception in noise test developed for primary care for Danish preschool children, to explore whether the children's ability to hear and understand speech was associated with gender, age, middle ear status, and the level of background noise. METHODS...: The Galker test is a 35-item audio-visual, computerized word discrimination test in background noise. Included were 370 normally developed children attending day care centers. The children were examined with the Galker test, tympanometry, audiometry, and the Reynell test of verbal comprehension. Parents... and daycare teachers completed questionnaires on the children's ability to hear and understand speech. As most of the variables were not assessed using interval scales, non-parametric statistics (Goodman-Kruskal's gamma) were used for analyzing associations with the Galker test score. For comparisons...

  16. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

    With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problem with current speech synthesis techniques is a lack of naturalness and expressiveness, so synthesized output does not yet reach the standard of natural speech. Another problem is that human-computer interaction based on speech synthesis is too monotonous to realize a mechanism of subjective user drive. This paper reviews the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of speech synthesis. On the basis of further research, using the reader's eye-activity patterns to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthesized form. Building on eye-gaze data extraction, a speech synthesis method driven in real time by eye-movement signals is proposed that can express the speaker's own speech rhythm: while the reader silently reads the corpus, reading information such as the gaze duration per prosodic unit is captured, and a hierarchical prosodic duration model is established from it to determine the duration parameters of the synthesized speech. Finally, analysis verifies the feasibility of this method.
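
    The paper does not give its duration model in closed form; as a purely hypothetical sketch of the idea, the snippet below scales each prosodic unit's baseline synthesis duration by the reader's relative gaze dwell time on that unit. The function, its parameters, and the scaling rule are all assumptions.

      def gaze_driven_durations(gaze_ms, base_ms, alpha=0.5):
          # Scale each unit's baseline duration by its gaze dwell time
          # relative to the mean dwell time; alpha sets the coupling strength.
          mean_gaze = sum(gaze_ms) / len(gaze_ms)
          return [b * (1.0 + alpha * (g - mean_gaze) / mean_gaze)
                  for g, b in zip(gaze_ms, base_ms)]

      # Three prosodic units; the reader lingered on the second one, so its
      # synthesized duration is stretched and the others are compressed.
      print(gaze_driven_durations(gaze_ms=[200, 350, 210], base_ms=[300, 320, 280]))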

  17. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  18. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how does the network across the whole brain participates during multisensory perception processing remains an open question. We posit that a large-scale functional connectivity among the neural population situated in distributed brain sites may provide valuable insights involved in processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs was computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
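
    As a simplified, assumption-heavy reading of the time-frequency global coherence used above (pairwise coherence aggregated over all sensor pairs in sliding windows, here collapsed over frequency for brevity), one might compute:

      import numpy as np
      from itertools import combinations
      from scipy.signal import coherence

      def global_coherence(eeg, fs, win_s=1.0, step_s=0.1):
          # eeg: array of shape (n_sensors, n_samples).
          win, step = int(win_s * fs), int(step_s * fs)
          values = []
          for start in range(0, eeg.shape[1] - win + 1, step):
              seg = eeg[:, start:start + win]
              total = 0.0
              for i, j in combinations(range(eeg.shape[0]), 2):
                  # Welch-averaged magnitude-squared coherence for one pair;
                  # several sub-windows keep the estimate below a trivial 1.
                  _, cxy = coherence(seg[i], seg[j], fs=fs, nperseg=win // 4)
                  total += cxy.mean()
              values.append(total)
          return np.array(values)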

  19. Neurophysiological Evidence That Musical Training Influences the Recruitment of Right Hemispheric Homologues for Speech Perception

    Directory of Open Access Journals (Sweden)

    McNeel Gordon Jantzen

    2014-03-01

    Musicians have a more accurate temporal and tonal representation of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark, Skoe, & Kraus, 2009; Zendel & Alain, 2008; Musacchia, Sams, Skoe, & Kraus, 2007). Musicians who are adept at the production and perception of music are also more sensitive to key acoustic features of speech such as voice onset timing and pitch. Together, these data suggest that musical training may enhance the processing of acoustic information for speech sounds. In the current study, we sought to provide neural evidence that musicians process speech and music in a similar way. We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged for the processing of speech sounds. In contrast, we predicted that in non-musicians processing of speech sounds would be localized to traditional left hemisphere language areas. Speech stimuli differing in voice onset time were presented using a dichotic listening paradigm. Subjects either indicated the aural location for a specified speech sound or identified a specific speech sound from a directed aural location. Musical training effects and organization of acoustic features were reflected by activity in source generators of the P50. This included greater activation of the right middle temporal gyrus (MTG) and superior temporal gyrus (STG) in musicians. The findings demonstrate recruitment of the right hemisphere in musicians for discriminating speech sounds and a putative broadening of their language network. Musicians appear to have an increased sensitivity to acoustic features and enhanced selective attention to temporal features of speech that is facilitated by musical training and supported, in part, by right hemisphere homologues of established speech processing regions of the brain.

  20. Perceptual effects of noise reduction by time-frequency masking of noisy speech.

    Science.gov (United States)

    Brons, Inge; Houben, Rolph; Dreschler, Wouter A

    2012-10-01

    Time-frequency masking is a method for noise reduction that is based on the time-frequency representation of a speech in noise signal. Depending on the estimated signal-to-noise ratio (SNR), each time-frequency unit is either attenuated or not. A special type of a time-frequency mask is the ideal binary mask (IBM), which has access to the real SNR (ideal). The IBM either retains or removes each time-frequency unit (binary mask). The IBM provides large improvements in speech intelligibility and is a valuable tool for investigating how different factors influence intelligibility. This study extends the standard outcome measure (speech intelligibility) with additional perceptual measures relevant for noise reduction: listening effort, noise annoyance, speech naturalness, and overall preference. Four types of time-frequency masking were evaluated: the original IBM, a tempered version of the IBM (called ITM) which applies limited and non-binary attenuation, and non-ideal masking (also tempered) with two different types of noise-estimation algorithms. The results from ideal masking imply that there is a trade-off between intelligibility and sound quality, which depends on the attenuation strength. Additionally, the results for non-ideal masking suggest that subjective measures can show effects of noise reduction even if noise reduction does not lead to differences in intelligibility.
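
    A minimal sketch of the ideal binary mask under the usual STFT formulation, assuming oracle access to the separate speech and noise signals; the window length and local criterion are illustrative, and the tempered variants (ITM and the non-ideal masks) would replace the binary decision with limited, non-binary attenuation.

      import numpy as np
      from scipy.signal import stft, istft

      def ideal_binary_mask(speech, noise, fs, lc_db=0.0):
          # Oracle masking needs the clean speech and the noise separately.
          _, _, S = stft(speech, fs, nperseg=512)
          _, _, N = stft(noise, fs, nperseg=512)
          snr_db = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
          mask = snr_db > lc_db              # retain (1) or remove (0) each unit
          # A tempered (ITM-like) mask would instead use a gain floor, e.g.
          # np.where(snr_db > lc_db, 1.0, 10 ** (-max_att_db / 20)).
          _, masked = istft((S + N) * mask, fs, nperseg=512)
          return masked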